forum_id: string (length 9 to 20)
forum_title: string (length 3 to 179)
forum_authors: sequence (length 0 to 82)
forum_abstract: string (length 1 to 3.52k)
forum_keywords: sequence (length 1 to 29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39 to 50)
forum_url: string (length 41 to 52)
venue: string (46 classes)
year: date (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
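Each record below supplies these fields in the order listed, with the `reviews` field holding the discussion thread as a JSON object of parallel arrays whose `structured_content_str` entries are themselves JSON-encoded strings. A minimal decoding sketch (ours, not part of the dataset; it assumes exactly the double-encoded layout visible in the records below):

```python
import json

def parse_reviews(reviews_json: str):
    """Decode one record's `reviews` field into a list of note dicts."""
    thread = json.loads(reviews_json)
    notes = []
    for note_id, note_type, created, signature, content in zip(
        thread["note_id"],
        thread["note_type"],
        thread["note_created"],          # millisecond timestamps
        thread["note_signatures"],       # lists of signing groups
        thread["structured_content_str"],
    ):
        notes.append({
            "id": note_id,
            "type": note_type,           # e.g. "decision", "official_review"
            "created": created,
            "signature": signature,
            # each entry is double-encoded, so it needs a second json.loads
            "content": json.loads(content),
        })
    return notes
```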
ByeGzlrKwH
Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
[ "Taiji Suzuki", "Hiroshi Abe", "Tomoaki Nishimura" ]
One of the biggest issues in deep learning theory is the generalization ability of networks with huge model size. Classical learning theory suggests that overparameterized models cause overfitting. However, large deep models used in practice avoid overfitting, which is not well explained by the classical approaches. To resolve this issue, several attempts have been made. Among them, the compression based bound is one of the promising approaches. However, the compression based bound can be applied only to a compressed network, and it is not applicable to the non-compressed original network. In this paper, we give a unified framework that can convert compression based bounds to those for non-compressed original networks. The bound gives an even better rate than the one for the compressed network by improving the bias term. By establishing the unified framework, we can obtain a data-dependent generalization error bound which gives a tighter evaluation than data-independent ones.
[ "Generalization error", "compression based bound", "local Rademacher complexity" ]
Accept (Spotlight)
https://openreview.net/pdf?id=ByeGzlrKwH
https://openreview.net/forum?id=ByeGzlrKwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "AlBAgatu-w", "SkgfGuL3or", "HJgKa3Rijr", "BJl6G22osr", "Hyepa7Msir", "SJeWa-KcoB", "BJeq9Wt9jH", "BkgeHbYcjr", "SkeOxWYcjS", "BygRLlp3cS", "Bkg4y5FCYS", "Byevc7PaYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798742183, 1573836810312, 1573805248678, 1573796884656, 1573753797419, 1573716408840, 1573716370419, 1573716279541, 1573716208321, 1572814934382, 1571883484302, 1571808143293 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2165/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2165/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2165/Authors" ], [ "ICLR.cc/2020/Conference/Paper2165/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2165/Authors" ], [ "ICLR.cc/2020/Conference/Paper2165/Authors" ], [ "ICLR.cc/2020/Conference/Paper2165/Authors" ], [ "ICLR.cc/2020/Conference/Paper2165/Authors" ], [ "ICLR.cc/2020/Conference/Paper2165/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2165/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2165/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper has a few interesting contributions: (a) a bound for un-compressed networks in terms of the compressed network (this is in contrast to some prior work, which only gives bounds on the compressed network); (b) the use of local Rademacher complexity to try to squeeze as much as possible out of the connection; (c) an application of the bound to a specific interesting favorable condition, namely low-rank structure.\\n\\nAs a minor suggestion, I'd like to recommend that the authors go ahead and use their allowed 10th body page!\", \"title\": \"Paper Decision\"}", "{\"title\": \"Comments on the revised submission\", \"comment\": \"Thank you for clarifying the notation and answering my questions. I found the new empirical evaluation of intrinsic dimensionality and comparison to Arora et al. bound to be a valuable contribution (Appendix D). However, I do believe that the presentation of the theoretical results and the notation in the main text could be made easier to follow still (I personally think that it is confusing to denote quite different quantities with the same letter but different subscripts or superscripts).\\n\\nOverall, I think the paper improved and therefore I increased my score to \\\"accept\\\".\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"It helps a lot!\"}", "{\"title\": \"We have fixed the issue about Eq.(5)\", \"comment\": \"Thank you for your thorough exposition.\\nWe have realized that you are absolutely correct.\\nThis issue can be easily fixed by replacing $\\\\dot{R}_{r}(\\\\widehat{\\\\mathcal{F}} - \\\\widehat{\\\\mathcal{G}})$ with $\\\\dot{R}_{r}(\\\\psi(\\\\widehat{\\\\mathcal{F}}) - \\\\psi(\\\\widehat{\\\\mathcal{G}}))$ as an upper bound of Eq.(5). 
This is further bounded by\\n$$\\n\\\\dot{R}_{r}(\\\\psi(\\\\widehat{\\\\mathcal{F}}) -\\\\psi(\\\\widehat{\\\\mathcal{G}})) \\\\leq\\n\\\\frac{C}{n}\\n+\\nC \\\\mathrm{E}_{D_n}\\\\left[ \\\\int_{1/n}^{\\\\hat{\\\\gamma}_n} \\\\sqrt{\\\\frac{\\\\log(N( \\\\widehat{\\\\mathcal{F}},\\\\|\\\\cdot\\\\|_{n},\\\\epsilon/2))}{n}} d \\\\epsilon\\n+\\n\\\\int_{1/n}^{\\\\hat{\\\\gamma}_n} \\\\sqrt{\\\\frac{\\\\log(N( \\\\widehat{\\\\mathcal{G}},\\\\|\\\\cdot\\\\|_{n},\\\\epsilon/2))}{n}} d\\\\epsilon \\\\right].\\n$$\\nPlease check Eq.(6) of the revised version. We also used this upper bound to bound $\\\\dot{R}_{r}(\\\\widehat{\\\\mathcal{F}} - \\\\widehat{\\\\mathcal{G}})$ in the previous version, and all the remaining arguments (Theorems 2, 3 and 4) are derived from the Dudley integral bound appearing on the right hand side instead of $\\\\dot{R}_r$ itself. Therefore, this modification does not affect the remaining arguments. According to this modification, we fixed the main text and the proofs. These are just minor modifications.\\n\\nAlthough we used only the Dudley integral bound to show Theorems 2, 3 and 4, we used the local Rademacher complexity $\\\\dot{R}_{\\\\dot{r}}(\\\\widehat{\\\\mathcal{F}} - \\\\widehat{\\\\mathcal{G}})$ in Theorem 1 to avoid heavy notation related to the covering number appearing in the Dudley integral. Unexpectedly, this caused a mistake, but the $\\\\dot{R}_{\\\\dot{r}}$ term can be replaced by the Dudley integral anyway.\\n\\nFinally, we would like to remark that the local Rademacher complexity $\\\\dot{R}_{r}(\\\\widehat{\\\\mathcal{F}} - \\\\widehat{\\\\mathcal{G}})$ is still required to bridge $\\\\|f-g\\\\|_n$ and $\\\\|f-g\\\\|_{L_2}$. Thus, it remains in the main text.\\n\\nWe appreciate your insightful comment.\"}", "{\"title\": \"How to prove the covering number inequality used in Eq. (5)?\", \"comment\": \"Thanks for the response. I agree that the Lipschitz continuity of $\\\\psi$ implies $|\\\\psi(f(x))-\\\\psi(g(x))|\\\\le|f(x)-g(x)|$ and $\\\\|\\\\psi(f)-\\\\psi(g)\\\\|_n\\\\le\\\\|f-g\\\\|_n$. However, to prove Eq. (5), it looks like the following inequality is used:\\n$$\\\\mathcal{N}(\\\\{\\\\psi(f)-\\\\psi(g)|f\\\\in\\\\widehat{\\\\mathcal{F}},g\\\\in\\\\widehat{\\\\mathcal{G}},\\\\|f-g\\\\|_{L_2}\\\\le r\\\\},\\\\|\\\\cdot\\\\|_n,\\\\epsilon)\\\\le\\\\mathcal{N}(\\\\{f-g|f\\\\in\\\\widehat{\\\\mathcal{F}},g\\\\in\\\\widehat{\\\\mathcal{G}},\\\\|f-g\\\\|_{L_2}\\\\le r\\\\},\\\\|\\\\cdot\\\\|_n,\\\\epsilon).$$\\nI do not see how to prove it. It is not enough to only use $\\\\|\\\\psi(f)-\\\\psi(g)\\\\|_n\\\\le\\\\|f-g\\\\|_n$; what we need to show should be something like: given $h$ such that $\\\\|(f-g)-h\\\\|_n\\\\le\\\\epsilon$, it also holds that $\\\\|(\\\\psi(f)-\\\\psi(g))-\\\\phi(h)\\\\|_n\\\\le\\\\epsilon$ for some transformation $\\\\phi$.\\n\\nTo put it simply, I agree that given a function class $\\\\mathcal{F}$ and a Lipschitz function $\\\\psi$, the covering number of $\\\\psi(\\\\mathcal{F})$ is bounded by the covering number of $\\\\mathcal{F}$; however, I do not see why the above inequality is true, since we are considering $\\\\psi(f)-\\\\psi(g)$, not $\\\\psi(f-g)$.\\n\\nIn detail, in the original review I gave a special example where the above covering number inequality is not true. Consider the case $n=1$, and the set $A:=\\\\{(z+1,z)|z\\\\in[-1,+1]\\\\}$. Then $\\\\{x-y|(x,y)\\\\in A\\\\}$ only contains a single number $1$, and thus has covering number $1$. 
On the other hand, let $\\\\psi$ denote the sigmoid function $e^x/(1+e^x)$, which is Lipschitz; then $\\\\{\\\\psi(x)-\\\\psi(y)|(x,y)\\\\in A\\\\}=[\\\\frac{1}{2}-\\\\frac{1}{1+e},\\\\frac{\\\\sqrt{e}-1}{\\\\sqrt{e}+1}]$, whose covering number is larger than $1$. This example is too special since in $A$ the two coordinates are not independent, and probably we can prove the above covering number inequality when $f$ and $g$ are freely chosen from $\\\\widehat{\\\\mathcal{F}}$ and $\\\\widehat{\\\\mathcal{G}}$; however, they further need to satisfy the condition $\\\\|f-g\\\\|_{L_2}\\\\le r$, which makes the situation more complicated.\\n\\nIn addition, it looks to me that Eq. (5) is important to bound the bias term, which is after all the key term this paper tries to bound, as the title suggests \\\"compression based bound for non-compressed networks\\\". Therefore, the discussion of Eq. (5) is not just a technical one, but could affect the big picture.\"}", "{\"title\": \"Reply from authors\", \"comment\": \"Thank you for your suggestive comments, which clarify our paper's contribution.\\n\\n> One thing I hope the authors could clarify is the novelty in the proof of Theorem 1, because it seems the techniques used here such as entropy integral and peeling are all well-known. It would be better if the authors could give a comparison between this paper and papers with similar techniques.\\n\\nIn the literature, the local Rademacher complexity has been used to derive a fast learning rate for a model whose complexity is pre-determined. However, the current setting is that the complexity of the trained model is more data-dependent, and the local Rademacher complexity is used to \\\"bridge\\\" the complexity of the trained network and the set of the small sized networks, G, while the typical usage of the technique is to directly bound the population excess risk. Hence, the usage of the local Rademacher complexity is quite different from the existing studies. In particular, we need to use the ratio type empirical process. Moreover, the bound is not restricted to empirical risk minimization and it can be applied to any estimator as long as it produces a compressible network. We think our usage of the technique is interesting and this technique has not been employed in the literature on the generalization error analysis of overparameterized neural networks. Hence, although using the local Rademacher complexity might seem classical, we believe that it still provides important insight for understanding the generalization error analysis of deep learning.\\n\\n> Another question is, in the statement of Theorem 1, the term $\\\\sqrt{M \\\\frac{2t}{n}}$ is marked as part of 'main term' and the term $C \\\\dot r \\\\sqrt{\\\\frac{t}{n}}$ is marked as part of 'fast term'. I hope the authors could give a more detailed explanation of why \\\\dot{r} could be seen as a faster term than a constant, which seems not sensible to me.\\n\\nIf \\\\hat{r} is fixed independent of the sample size, then \\\\dot{r} is just a constant. However, by balancing the bias and variance terms, we may decrease \\\\hat{r} as the sample size increases. Indeed, we took $\\\\hat{r} =(\\\\sum_l m_l/L)^{-\\\\frac{1}{4/\\\\beta + 2(1-1/2\\\\alpha)}}$ in Theorem 4, which can be small if $m_l$ is relatively large (in particular, $m_l$ increases as n goes up). However, we also realized that the terminology \\\"fast term\\\" would be confusing. 
We changed this to \\\"bias term.\\\"\\n\\n> Also, I hope the authors could explain why $\\\\Phi(\\\\dot{r})$ is a 'faster term', because it seems this term is not faster than $\\\\sqrt{1/n}$.\\nLemma 2 shows that the main term of $\\\\Phi(r)$ is proportional to $O(r^{1-q}\\\\sqrt{1/n})$ under some assumptions. Thus, if $\\\\dot{r}$ is much smaller than 1, then $\\\\Phi(\\\\dot{r})$ can be smaller than $\\\\sqrt{1/n}$. At least, we can show $r^{*2}$ is $o(1/\\\\sqrt{n})$ in a typical setting. This is why we used the terminology \\\"faster term.\\\"\\n\\nFor the reasons mentioned above, we employed the terminology \\\"faster term.\\\" However, by balancing the bias and variance terms, some part of the faster term becomes the same rate as the main term (this is mainly due to the term related to $\\\\hat{r}$). We thought this terminology facilitated understanding, but it could cause some confusion. Hence, we have changed the faster term to \\\"bias term.\\\" We also modified the exposition after Theorem 1 so that no confusion arises.\"}", "{\"title\": \"Reply from authors\", \"comment\": \"Thank you very much for your several comments. We have revised our paper according to your comments.\\n\\n> I appreciate that the outlines of the proofs are included in the main text, which helps the reader follow the ideas.\\nThank you very much for your suggestion. We will definitely add the outline of the proof in the final version.\\n\\n> I think the paper could be improved immensely by some empirical analysis of the rank of compressed standard vision networks and rank of activation covariance matrices.\\nThank you for pointing this out. We have added numerical experiments on the eigenvalue distributions and intrinsic dimensionality of the practically used VGG-19 network in Appendix D. We can see that the eigenvalues of the covariance and weight matrices decrease rapidly and the intrinsic dimensionality can be much smaller than the actual number of parameters. We also included a comparison with Arora et al. 2018. Our suggested quantity gives a favorably tight evaluation compared with their numerical results.\", \"citation_issues\": \"> In the introduction, paragraph 2, the authors cite Neyshabur et al. 2019 for the observation that networks generalize well despite being overparameterized. It seems like an odd choice.\\n> Why is Bartlett\\u2019s \\u201899 paper [\\u201cSize of the weights\\u2026\\u201d] not cited? Or at least Neyshabur et al. 2015?\\n\\nThank you very much for pointing out the citation issue. We cited Neyshabur et al. 2019 as a good pointer to the recent literature of generalization error analysis and numerical experiments on overparameterized networks. However, we agree with your opinion that the papers you mentioned should be cited. We have cited them in the revised version.\\n\\n> Then the authors mention that classical learning theory cannot explain the phenomena mentioned above, [...]\\n> The authors need to be more precise and add citations (I am assuming that the authors are talking about VC bounds for worst-case ERM generalization).\\n\\nYes, we intended that the \\\"classical learning theory\\\" is the VC-dimension type worst case analysis. The VC-dimension of networks with depth L and width W is lower bounded by L^2W^2, which yields a generalization error bound of O(\\\\sqrt{L^2W^2/n}) (Harvey et al. 2017). We have modified this part as \\\"well explained by a classical VC-dimension type theory (Harvey et al. 2017)\\\" by citing the paper Harvey et al. 
(2017).\\n\\n> In the third paragraph, where the authors talk about norm-based bounds being loose, it seems that Nagarajan and Kolter 2019 should be cited (not only at the end), as well as Dziugaite and Roy 2017 (they look into the looseness of path-norm and margin-based bounds).\\n\\nThank you very much for the citation information. We could have missed some relevant papers, but we have cited them in the revised version.\\n\\n> Could the authors comment more on how the bound in Theorem 2 is superior to the VC dimension bound and whether conditions under which the bound is tight are realistic for standard compressed vision networks.\\n\\nOur bound is always tighter than the VC-dimension bound, but the VC-dimension bound is recovered if we let $\\\\alpha$ and $\\\\beta$ go to 0 as an extreme case (this is not directly obtained by the presented theorems, but can be seen from the proof). As long as the singular value decay satisfies the assumptions, our bound can be tighter than the VC-dimension bound (please note that this does not necessarily imply the matrices are close to rank \\\"1\\\"). To show this is realistic, we have conducted numerical experiments (Appendix D). We can see that both the weight matrices and the covariance matrices show a rapid decrease of the spectrum.\\n\\n\\n> In general, I found the notation a bit hard to follow.\\nThank you very much for reading our paper in detail. We will do our best to make the notation more concise in the final version.\\n\\n> Other minor comments:\\n> In section 2, marginal distributions over x and y are introduced. Are those used in the main text?\\nThank you for pointing this out. They are used only in Assumptions 1 and 2 for clarifying the support of the distributions. We have moved the definition just before the assumptions.\\n\\n> Is that a definition of \\\\mu with the dot on top in assumption 5, or is this mu with the dot defined earlier? Using notation := would make it clearer whether the quantity is being defined.\\nYes, you are absolutely correct. We have added \\\":=\\\" on the right hand side.\\n\\n> In Section 3, \\u201cThe main difference from the\\u2026\\u201d paragraph, there is \\\\Psi(\\\\dot r) used. Where is that defined?\\nWe appreciate your correction. This was a typo. $\\\\Psi(\\\\dot r)$ is defined in Appendix A, but this was not defined before Section 3. We replaced it by $\\\\dot{R}_{\\\\dot{r}}(\\\\widehat{\\\\mathcal{F}} - \\\\widehat{\\\\mathcal{G}})$ in the revised version.\"}
Our technique resolves this issue by using the ratio type empirical process.\"}", "{\"title\": \"Revised version has been uploaded\", \"comment\": \"Dear reviewers,\\n\\nThank you very much for your insightful comments. We have revised our manuscript according to your comments. The main modifications are as follows:\\n1. We have added numerical evaluation in Appendix D. It evaluates the eigenvalue distribution of VGG-19 trained on CIFAR-10 and computed the intrinsic dimensionality of that.\\n2. We have added some missing important citations.\\n3. We have modified the intuitive explanations of the general bound (Theorem 1).\\n\\nSincerely yours,\\nAuthors.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper obtains a compression-based generalization bound (Theorem 1) for the original network, while prior work gives bounds for the compressed network. The general bound given by Theorem 1 is further applied to networks with low-rank weight matrices (Theorem 2 and Corollary 1) or low-rank covariance matrices (Theorem 3 and 4). In some cases, the bound given by Theorem 1 for the original network could be better than the bound for the compressed network.\\n\\nIn terms of proof techniques, Lemma 2 is a general result to control the local Rademacher complexity using upper bounds on the covering numbers, which is interesting and could be useful in other problems.\\n\\nOn the other hand, there are two technical concerns.\\n(1) In eq. (5), the covering number of {\\\\phi(f)-\\\\phi(g)} is bounded by the covering number of {f-g}, which is not necessarily true. For example, in the 1-dimensional case, it is possible that f-g is always 1, while \\\\phi(f)-\\\\phi(g) is not a constant. This example might appear since f and g are not freely chosen from F and G; they further need to satisfy the condition that |f-g|_{L_2} is bounded by r. If the claim in eq. (5) is indeed true, a proof is needed.\\n(2) Despite the issue in (1), many bounds in the paper may actually be okay, since in the proofs the covering numbers of F (the original networks) are used (e.g., in eq. (6) and Lemma 2). Therefore it looks like the local Rademacher complexity of F can be controlled directly using Lemma 2. The question then is how compression helps in the analysis?\\n\\nI hope the above points can be clarified, and I would like to participate in the discussion.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents novel theoretical results on generalization bounds via compression. Similar ideas in the last few years appeared, but only bounds on a compressed network were obtained. In contrast, the current submission gives a bound on the original (uncompressed) network in terms of the complexity of the compressed network class.\\n\\nOverall, the paper seems to be well-written. I appreciate that the outlines of the proofs are included in the main text, which helps the reader follow the ideas. The result is novel and quite interesting. 
The new bounds still seem to be quite far from giving a tight generalization theory, but I believe the paper provides some nice theoretical results for other researchers to improve upon. I think the paper could be improved immensely by some empirical analysis of the rank of compressed standard vision networks and the rank of activation covariance matrices. There are also some citation issues (see detailed comments below).\", \"citation_issues\": \"In the introduction, paragraph 2, the authors cite Neyshabur et al. 2019 for the observation that networks generalize well despite being overparameterized. It seems like an odd choice. Why is Bartlett\\u2019s \\u201899 paper [\\u201cSize of the weights\\u2026\\u201d] not cited? Or at least Neyshabur et al. 2015? \\nThen the authors mention that classical learning theory cannot explain the phenomena mentioned above, and classical theory \\u201c.. suggests \\u201d that overparameterized models cause overfitting\\u2026\\u201d. The authors need to be more precise and add citations (I am assuming that the authors are talking about VC bounds for worst-case ERM generalization).\\nIn the third paragraph, where the authors talk about norm-based bounds being loose, it seems that Nagarajan and Kolter 2019 should be cited (not only at the end), as well as Dziugaite and Roy 2017 (they look into the looseness of path-norm and margin-based bounds).\\n\\nCould the authors comment more on how the bound in Theorem 2 is superior to the VC dimension bound and whether conditions under which the bound is tight are realistic for standard compressed vision networks. Having weight matrices close to rank 1 seems unrealistic. I would like to see some sort of empirical evidence if the authors believe that this is the case. And for larger ranks, the bound seems to be close to the VC bound.\\n\\nIn general, I found the notation a bit hard to follow and had to constantly be looking through the paper to find the definitions of various quantities. Having three different r\\u2019s, multiple mu\\u2019s with dots, bars, stars, etc., was definitely confusing and required extra attention to detail.\", \"other_minor_comments\": \"In section 2, marginal distributions over x and y are introduced. Are those used in the main text?\\nIs that a definition of \\\\mu with the dot on top in assumption 5, or is this mu with the dot defined earlier? Using notation := would make it clearer whether the quantity is being defined.\\nIn Section 3, \\u201cThe main difference from the\\u2026\\u201d paragraph, there is \\\\Psi(\\\\dot r) used. Where is that defined?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper provides a generalization bound based on compression arguments. A key contribution is that instead of bounding the\\npopulation risk for the compressed model, this paper manages to give bounds on the non-compressed network following a unified analysis framework. Also, this paper applies the unified framework to low-rank assumptions on weight matrices and covariance matrices.\\n\\nOverall, I believe this paper should be accepted because of its contribution to our understanding of generalization theory. The central contribution here is using localized Rademacher complexity to bound the $L_2$ norm between the original network and the compressed network. 
However, I think several issues are unclear and in need of clarification.\\n\\nOne thing I hope the authors could clarify is the novelty in the proof of Theorem 1, because it seems the techniques used here, such as the entropy integral and peeling, are all well-known. It would be better if the authors could give a comparison between this paper and papers with similar techniques.\\n\\nAnother question is, in the statement of Theorem 1, the term $\\\\sqrt{M \\\\frac{2t}{n}}$ is marked as part of the 'main term' and the term $C \\\\dot r \\\\sqrt{\\\\frac{t}{n}}$ is marked as part of the 'fast term'. I hope the authors could give a more detailed explanation of why $\\\\dot r$ could be seen as a faster term than a constant, which seems not sensible to me. \\n\\nAlso, I hope the authors could explain why $\\\\Phi(\\\\sqrt{2(\\\\hat{r}^2 + r_*^2)})$ is a 'faster term', because it seems this term is not faster than $\\\\sqrt{\\\\frac{1}{n}}$.\\n\\nThe questions above essentially concern how sharp this general bound could be. I would appreciate it if the authors could give a thorough response to the questions mentioned above. This can help me achieve a better understanding and a more precise evaluation of this paper.\"}" ] }
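The covering-number counterexample debated in the thread above can be checked by direct arithmetic. A worked sketch (our computation; $g$ is our name for the difference map, and we only verify that the image contains two distinct values, which already forces a covering number above $1$ at small scales):

```latex
% With \sigma(x) = e^x / (1 + e^x) and A = {(z+1, z) : z in [-1, 1]},
% the raw differences {x - y : (x, y) in A} = {1} need one ball, whereas
% g(z) := \sigma(z+1) - \sigma(z) takes at least two distinct values:
\[
g(-1) = \sigma(0) - \sigma(-1) = \frac{1}{2} - \frac{1}{1+e} \approx 0.231,
\qquad
g\!\left(-\tfrac{1}{2}\right)
  = \sigma\!\left(\tfrac{1}{2}\right) - \sigma\!\left(-\tfrac{1}{2}\right)
  = \frac{\sqrt{e}-1}{\sqrt{e}+1} \approx 0.245.
\]
% By continuity the image of g is a nondegenerate interval, so its
% epsilon-covering number exceeds 1 once epsilon < (0.245 - 0.231)/2.
```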
rJxGGlSKwH
Sentence embedding with contrastive multi-views learning
[ "Antoine Simoulin" ]
In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional word operations. We aim to take advantage of this linguistic diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. On the contrary, views from other sentences are mapped further apart. By contrasting different linguistic views, we aim at building embeddings which better capture semantics and which are less sensitive to the sentence's outward form.
[ "contrastive", "multi-views", "linguistic", "embedding" ]
Reject
https://openreview.net/pdf?id=rJxGGlSKwH
https://openreview.net/forum?id=rJxGGlSKwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "e6-HrXDXw4", "BJejdUk3qr", "BygW3uM6KH", "HJlsoiwNtB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798742154, 1572759154651, 1571788969395, 1571220387138 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2164/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2164/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2164/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method to learn sentence representations that incorporates linguistic knowledge in the form of dependency trees using contrastive learning. Experiments on SentEval and probing tasks show that the proposed method underperform baseline methods.\\n\\nAll reviewers agree that the results are not strong enough to support the claim of the paper and have some concerns about the scalability of the implementation. They also agree that the writing of the paper can be improved (details included in their reviews below). \\n\\nThe authors acknowledged these concerns and mentioned that they will use them to improve the paper for future work, so I recommend rejecting this paper for ICLR.\", \"title\": \"Paper Decision\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper describes a self-supervised sentence embedding approach that incorporates a different view from plain text where some extent of linguistic knowledge is incorporated through the application of tree LSTM. The training procedure is standard contrastive framework where the model is encouraged to distinguish between context sentence (sentences appearing close to the target sentence) and negative samples. Evaluations are conducted on 1) downstream tasks, but with a simple logistic regression model on top of sentence embeddings; 2) probing tasks that more focus on surface information prediction, syntactic and semantic tasks; and 3) qualitative analysis with nearest 5 sentences.\\n\\nAlthough the experiments are thorough, I am in favor of rejecting this paper with the following reasons:\\n\\nFirst, the proposed model is trained with 4.6M sentences among 78M available for 33 hours. It is unclear why authors stop the training at this early stage but the results on all three evaluations seem to be inferior to the state-of-the-art by a big margin. I am happy to raise my score if authors can show the results of a well trained proposed model.\\n\\nSecond, the paper has some room for improvement in terms of clarity, to name a few:\\n1) Authors can strengthen the motivation for multi-views learning in related work; \\n2) Formula 1 for softmax is wrong;\\n3) Contrastive LSTM and contrastive tree LSTM are not clearly defined in the paper, although the former should refer to quick-thoughts and the latter means the proposed method;\\n4) In qualitative analysis, for the last example, there is exactly the same candidate with similarity score 0.012. 
According to cosine similarity, wouldn\\u2019t this be 0 and also show up in the baseline model regardless of the embeddings?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"Overview: This work proposes to learn sentence embeddings using both contrastive learning and multiple \\\"views\\\" of sentences. This work largely builds off of [1], including using the same objective, but uses a multi-view approach to modeling.\", \"They apply the concept of multi-view models, specifically combining tree and linear LSTMs, to learning sentence representations.\", \"They prepare a new, large-scale book dataset, which is useful because the previously commonly used book dataset was taken down for legal reasons.\", \"They provide a fairly broad set of analyses on their model, both quantitative and qualitative, performance-driven and analysis-driven.\", \"Review: The ideas and models presented in this paper are not new, while the supporting experiments are not very well done or convincing. Overall, I recommend rejecting this work.\", \"The models are contrastively learned in that they are trained to embed \\\"similar\\\" sentences nearby in the embedding space, and \\\"dissimilar\\\" sentences far away, where \\\"similar\\\" sentences are defined as consecutive sentences. This method of learning textual representations is well-established in the NLP literature, most prominently in recent years with word embedding models like Skip-Gram and in sentence embedding models like in [1], [2], [3] (the next sentence prediction task), and several more.\", \"In practice, the multiple views of each sentence that this paper considers boil down to encoding the sentence with a bidirectional LSTM and a TreeLSTM and concatenating the representations from each encoder. This idea again has been established in the literature ([4], [5], [6]).\", \"The experiments don't seem set up to demonstrate that the multiple views are beneficial over a single view. In Table 1, there are rows for just an LSTM or just a TreeLSTM, but they seem to be trained with labeled data whereas the proposed method is trained self-supervised. A more informative comparison to demonstrate the value of using multiple views would be to train the LSTM and TreeLSTM with the same objective (and ideally model size). Overall, I don't think the claims in the paper are well-supported by the model proposed or the experiments.\", \"I have a number of concerns about the experiments.\", \"\\\"Models are trained on a single epoch on the entire corpus without any train-test split\\\": so there is no early stopping? Why stop training after one epoch? Was there any indication you were overfitting the data?\", \"\\\"The training phase was stopped after 33 hours of training\\\": Why stop there? Computational constraints? Later comments suggest this is quite premature (\\\"training phase was completed on only 4.6M sentences among the 78M available\\\").\", \"The results seem to indicate that this method underperforms recent work significantly.\", \"Areas of improvement\", \"Some of the language in the introduction and conclusion is a bit of a stretch. 
Using a linear and tree LSTM (based on dependency parses) doesn't really represent a \\\"diversity of linguistic structures\\\".\", \"Related work: There's no mention of pretrained language models, which could be seen as a form of representation learning for language, and have been hugely impactful in NLP.\", \"Method\", \"Missing negative in the log likelihood\", \"Why do you use the inner product if other works \\\"report excellent results\\\" with other scoring functions?\", \"\\\"assumes the underlying structure of the sentence to be a sequence, while allowing for long term dependencies\\\": If anything, the treeLSTM more easily allows for long-term dependencies than the linear LSTM.\", \"\\\"Negative examples are obtained using the dependency Tree LSTM\\\": I'm not totally sure how the negatives are obtained here.\", \"\\\"The target sequence is encoded using the sequential Tree LSTM, while the positive and negative samples are encoded using the ChildSum Tree LSTM\\\": why are the sentences not all encoded with the same encoder?\", \"It looks really odd that most of Table 1 is empty. Given your model, I imagine it can't have been that difficult to evaluate more baselines (BiLSTM and TreeLSTM) on the rest of the tasks.\", \"It'd be nice if you could clearly indicate in Table 1 which method is yours.\", \"Results and Analysis\", \"The standard evaluation setting for sentence embeddings would be GLUE or SuperGLUE.\", \"A glaringly missing baseline is BERT (or any of its relatives), which is also self-supervised.\", \"The results are underwhelming, and as the author admits, somewhat premature as training didn't seem to finish.\", \"5.2: what are the contrastive LSTM and Tree LSTM? Are those the learned encoders from the \\\"Contrastive Tree\\\" in Table 1, or are they trained from scratch?\", \"I don't think the analyses in Sections 5.2 and 5.3 or Figure 2 are particularly useful.\", \"There are a noticeable number of typos. For example, in the abstract: \\\"this linguist[ic] diversity\\\" and \\\"better capture semantic[s]\\\". It'd be worthwhile to look over the paper closely for typos.\", \"[1] An Efficient Framework for Learning Sentence Representations. Lajanugen Logeswaran and Honglak Lee\", \"[2] Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning. Yacine Jernite, Samuel R. Bowman, David Sontag\", \"[3] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova\", \"[4] Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang.\", \"[5] Enhanced LSTM for Natural Language Inference. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen.\", \"[6] Improving Sentence Representations with Consensus Maximisation. Shuai Tang, Virginia R. de Sa.\"]}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper proposes a new sentence embedding method. The novelty is to use dependency trees as examples in the self-supervised method based on contrastive learning. The idea to use linguistic knowledge in the design of sentence embeddings is attractive. 
The sentence representation is computed by a bi-LSTM and dependency tree representations are computed by a Tree LSTM. The softmax classifier is trained using the negative log-likelihood loss.\", \"In my opinion, the paper should not be accepted. As said before, the idea is attractive, but the paper lacks motivation for the choice of dependency trees as additional linguistic knowledge. Indeed, the goal of the proposed algorithm should be made more precise. It is, in my opinion, very difficult to do better than existing sentence embedding methods, and the proposed method should be used for specific downstream tasks where the structure of sentences is meaningful. Moreover, the proposed method does not scale well and empirical results on classical downstream tasks are not convincing. Last, in my opinion, the writing of the paper should be improved and the bibliography should be updated. For instance, the best up-to-date sentence embedding methods are not cited (ELMo and BERT).\", \"Detailed comments.\", \"Abstract and introduction. The description of the contribution is not precise enough. Please make precise what \\\"multiple views\\\" and \\\"different linguistic views\\\" are. Please explain why you choose dependency trees and explain why their use can improve sentence embeddings.\", \"Related work. Please consider only word embeddings and sentence embeddings because the literature is sufficiently large in the last few years. Please update your related work with methods such as ELMo and BERT and subsequent work. Also, recent papers study how BERT embeddings embed structural information, and these should be discussed as you consider dependency trees in the construction of sentence embeddings.\", \"The method does not scale well. The paper does not propose ideas to solve this problem. Why don't you consider the approach used in Logeswaran et al.?\", \"The qualitative analysis shows that similar sentences have a similar structure. This is not surprising because dependency trees are used for learning. But this should give ideas of downstream tasks for which the approach could be fruitful.\", \"Many typos.\"]}" ] }
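For reference on the objective these reviews critique (Review #4 above identifies the contrastive LSTM with quick-thoughts, reference [1] in Review #1): the standard quick-thoughts contrastive loss from Logeswaran & Lee scores a candidate context sentence against negatives with a softmax over inner products and minimizes the negative log-likelihood, including the leading minus sign that Review #1 notes is missing from the paper's formula. The rendering and notation below are ours, not the paper's Formula 1:

```latex
% f, g: sentence encoders; s: target sentence; s_c: a true context sentence;
% S_cand: candidate set containing s_c plus sampled negatives.
\[
p\left(s_c \mid s, S_{\mathrm{cand}}\right)
  = \frac{\exp\left( f(s)^{\top} g(s_c) \right)}
         {\sum_{s' \in S_{\mathrm{cand}}} \exp\left( f(s)^{\top} g(s') \right)},
\qquad
\mathcal{L} = -\log p\left(s_c \mid s, S_{\mathrm{cand}}\right).
\]
```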
BJgZGeHFPH
Dynamics-Aware Embeddings
[ "William Whitney", "Rajat Agarwal", "Kyunghyun Cho", "Abhinav Gupta" ]
In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and actions. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states. By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.
[ "representation learning", "reinforcement learning", "rl" ]
Accept (Poster)
https://openreview.net/pdf?id=BJgZGeHFPH
https://openreview.net/forum?id=BJgZGeHFPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "KfodKMAat", "Bkl-M2D2sr", "SkeVUnNnjS", "HyeWCm6oiB", "BJlOLl_jjB", "r1llrMWjjH", "SkgGlfbooB", "HyeooW-oor", "BJeoXWbsoB", "HyxtfU92qB", "r1lVq9f-5H", "ByxvmHURtr", "SkeQW6MAKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798742124, 1573841929347, 1573829707858, 1573798857108, 1573777488277, 1573749304499, 1573749225546, 1573749155221, 1573749027288, 1572804113261, 1572051595831, 1571869983409, 1571855611200 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2163/Authors" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2163/Authors" ], [ "ICLR.cc/2020/Conference/Paper2163/Authors" ], [ "ICLR.cc/2020/Conference/Paper2163/Authors" ], [ "ICLR.cc/2020/Conference/Paper2163/Authors" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2163/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies how self-supervised objectives can improve representations for efficient RL. The reviewers are generally in agreement that the method is interesting, the paper is well-written, and the results are convincing. The paper should be accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for the clarification.\"}", "{\"title\": \"Re: off-policy correction\", \"comment\": \"In this work the abstract action space is learned ahead of time, though this need not be true in general. When learning the action space online, you are exactly right that the shifting map between abstract and raw actions needs to be corrected. HIRO is a good option for this. Since we have an encoder $e_a$ which maps from raw actions to abstract actions, we could also relabel the upcoming set of $k$ actions as $\\\\tilde{z}_a = e_z(a_{t:t+k-1})$. This should improve on the expense and bias of HIRO's sampling-based relabeling (Appendices A and C.3 of Nachum et al. 2018).\"}", "{\"title\": \"Off-policy correction\", \"comment\": \"Thank you for the clarifications. My understanding is that the abstract action space is learned over time and thus keeps changing. Is this correct? In that case, it seems like the low-level action sequence corresponding to a high-level action would change over time. This could be corrected for e.g. using the approach of HIRO (Nachum et al. 2018).\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the comments! You have addressed both my main concerns, and I think the added Appendix C is quite interesting. I wonder if learning the right value of k is a direction for future work.\\n\\nI am increasing my score to \\\"Accept\\\"\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your review!\\n\\nWe agree that a study of performance with varying $k$ is useful and we have added one as a new Appendix C. 
We find that there is an optimal setting of $k$ which is large enough to enable efficient exploration while still representing the optimal policy with high fidelity.\\n\\nThe \\\"Pves\\\" method of Jonschkowski et al. is an interesting approach and we have added it to our related work. Thanks for the reference!\\n\\nThe actions in each environment set the torque of the actuators. For each environment there is one action dimension for each joint, i.e. 2D actions for the Reacher family and 7D actions for the 7DoF family. The action scales are bounded within [-1, 1]. In our approach we set the embedded action dimension to be the same as the raw action dimension. We will make this more clear in Appendix A.\\n\\nAt present the greatest limitation of our approach as described is the pretraining on a fixed dataset. We chose to use a fixed dataset to disentangle the tasks of representation learning and policy learning, and to simplify comparisons with the other representation learning methods in section 6.2. However, DynE is compatible with online learning and one can use an exploration strategy from the literature to collect data or even modify our approach as follows:\\n1. Add each transition observed to the representation learning dataset and periodically retrain $e_s$, $e_a$, and $d_a$ according to sections 2.2 and 3.1.\\n2. When updating the policy and Q function, recompute the embedded states $e_s(s)$ using the updated state encoder $e_s$.\\n3. When updating the policy and Q function, the encoded actions $z_t$ which were emitted by the policy may now map to a different sequence of actions $d_a(z_t) = a_t, \\\\dots, a_{t+k-1}$ than when that transition was added to the replay, making an update using $z_t$ incorrect. Instead we must re-encode the actions in the replay at policy update time: $z_t = e_a(a_t, \\\\dots, a_{t+k-1})$.\\n\\nWe also found PPO's performance on Thrower surprising. Thrower appears to be a fairly different task than Pusher and Striker; DynE-TD3 and PPO solve Thrower quite quickly, while TD3 and SAC fail entirely. We also note that the scale of the Thrower plot is distorted by the divergence of TD3 and SAC, and zooming in reveals that DynE-TD3 converges to a better solution than PPO: https://i.imgur.com/527l9hZ.png\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for the review!\\n\\nBecause of the temporally abstract actions the multi-step update is off-policy. Conditioning the critic on the latent action $z_t$ and the current step $i$ of that action is equivalent to conditioning on the remaining raw actions $a_t, \\\\dots, a_{t+k-i}$ of that sequence. Given this information the probability of those next $k-i$ actions is 1 and no off-policy correction is needed; in effect, it remains a single action. Thank you for pointing out that this was not made explicit in the paper. We have added this to section 3.2.\\n\\nWe agree that a comparison on more domains would be useful and we will add one to the final version of this paper, though this experiment may not be completed by the end of this discussion period due to computational cost (each of the three plots in Figure 6 corresponds to > 1000 GPU-hours). We also note that Pusher, Striker, and Thrower are standard benchmarks from OpenAI Gym.\\n\\nAll of the model-free baselines have previously been tested on MuJoCo environments with hyperparameters selected by their authors. We use the MuJoCo hyperparameters from the papers or implementations of each method we compare to. 
Across all of the representation learning results which use TD3, including DynE-TD3 in Figure 5 and all methods in Figure 6, we use the hyperparameters from the original TD3 paper without modification.\", \"re\": \"Markov observations, we stack four frames to ensure that the Markov property holds (Appendix A.1).\"}", "{\"title\": \"Response to Reviewer #4\", \"comment\": \"We appreciate the comments!\\n\\nRegarding the samples used for representation learning, the simplest comparison is to offset the DynE curves by 100K steps, including the random transitions in their sample cost. In all environments but the simplest (Reacher Vertical), the policies trained with DynE still learn faster than the baselines (see footnote on page 6). Note also that in Figure 5, DynE only pretrains on data from the leftmost task in each row, demonstrating that transfer further improves its sample efficiency.\\n\\nThe latent action representation does provide significant gains on top of the learned state representation. However, we also find substantial gains from the DynE state representation, as shown by the gap between S-DynE and DARLA. We observe that the improvement from DARLA to S-DynE is similar in scale to the improvement from S-DynE to SA-DynE.\\n\\nWe agree that a study of performance with varying $k$ is useful and we have added one as a new Appendix C. We find that there is an optimal setting of $k$ which is large enough to enable efficient exploration while still representing the optimal policy with high fidelity.\\n\\nWe chose to compare to TD3 on pixels because it allowed for the most direct comparison to our results and none of the model-free methods work well from pixels anyway. As the pixel experiments are quite computationally intensive to run we found it more informative to compare against other representation learning algorithms.\\n\\nIn principle one could perform MPC with this learned model. However, we would not expect it to perform well as our objective is designed to induce useful representations and not to make accurate long-term predictions. In particular, successful model-based RL methods like Hafner et al. (2019) and Chua et al. (2018) directly optimize their models with multi-step prediction objectives, with Hafner et al. making multi-step predictions without decoding back to observations.\\n\\nWe do think combining learned temporally abstract action representations with MPC is an interesting future direction as it would allow more efficient rollouts and planning.\\n\\n\\nHafner, Danijar, et al. \\\"Learning Latent Dynamics for Planning from Pixels.\\\" International Conference on Machine Learning. 2019.\\nChua, Kurtland, et al. \\\"Deep reinforcement learning in a handful of trials using probabilistic dynamics models.\\\" Advances in Neural Information Processing Systems. 2018.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your comments!\\n\\nRegarding the pre-training of embeddings, in this work the main focus is on the objectives used for representation learning. Separating exploration and representation learning allows us to directly compare the various representation learning techniques in section 6.2. Once we have such an objective, exploration and representation learning can be combined online. Our method is compatible with such online representation learning. One specific implementation would involve the following steps:\\n1. 
Add each transition observed to the representation learning dataset and periodically retrain $e_s$, $e_a$, and $d_a$ according to sections 2.2 and 3.1.\\n2. When updating the policy and Q function, recompute the embedded states $e_s(s)$ using the updated state encoder $e_s$.\\n3. When updating the policy and Q function, the encoded actions $z_t$ which were emitted by the policy may now map to a different sequence of actions $d_a(z_t) = a_t, \\\\dots, a_{t+k-1}$ than when that transition was added to the replay, making an update using $z_t$ incorrect. Instead we may re-encode the actions in the replay at policy update time: $z_t = e_a(a_t, \\\\dots, a_{t+k-1})$.\\nWith these modifications it is possible to learn the representations and policy at the same time.\\n\\nThe updates in Section 3.2 are off-policy because they depend on the current policy $\\\\mu$, but crucially not on the behavior policy $\\\\pi$ which collected the data. This is the same for all algorithms in the DPG family. See Silver et al. (2014) for details, especially section 4.2.\\n\\nAs you point out, updating on only $N/k$ observations in the abstract MDP might outperform learning in the original MDP despite having fewer samples. However, as we show in section 3.2, we can update on all $N$ samples while still using the embedded MDP by augmenting Q with an abstract step input $i$.\\n\\nWe agree that a comparison of performance with varying $k$ is useful and we have added one as a new Appendix C. We find that increasing $k$ helps up to a certain point, beyond which performance falls off.\", \"multi_step_baseline_updates\": \"PPO uses generalized advantage estimation (Schulman et al. 2015), a multi-step return estimator similar to TD($\\\\lambda$). TD3 and SAC use one-step returns. In the off-policy setting, unweighted multi-step returns are not guaranteed to converge (Harutyunyan et al. 2016), and techniques such as importance weighting are not available with deterministic policies like TD3 (as the density is a delta function).\\n\\n\\nSilver, David, et al. \\\"Deterministic Policy Gradient Algorithms.\\\" International Conference on Machine Learning. 2014.\\nSchulman, John, et al. \\\"High-dimensional continuous control using generalized advantage estimation.\\\" arXiv preprint arXiv:1506.02438 (2015).\\nHarutyunyan, Anna, et al. \\\"Q($\\\\lambda$) with Off-Policy Corrections.\\\" International Conference on Algorithmic Learning Theory. Springer, Cham, 2016.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: The paper proposes training dynamics-aware embeddings of the state and k-action sequences to aid the sample efficiency of reinforcement learning algorithms. The authors propose learning a low-dimensional representation of the state space, $z_s$, as well as a temporally extended action embedding, $z_a$. The latter will be used in conjunction with a higher level policy that plans in this abstract action-space, $z_a$. 
By using these two embeddings, the authors test the proposed system on a set of Mujoco tasks and show improved results.\", \"positives\": \"1) Fairly simple objective, in line with previous work on unsupervised learning methods for representation learning in RL (like DARLA, variational intrinsic control, etc).\\n2) The temporally extended nature of the action embedding makes it particularly attractive for HRL systems as a continuous space of options (via $z_a$).\", \"questions_and_points_of_improvements\": \"1) Main concern: The need to pre-train the embedding before the RL task, I strongly believe, limits the applicability of the proposed algorithm. The embeddings are trained under a uniformly random policy, which in many cases in RL is not informative enough to reach, with decent probability, many of the states of interest. Thus the embedding will reflect only a small subset of the state/action-space. Thus it will be highly dependent on the tasks under consideration whether this is enough variety for generalisation across the stationary distribution of more informed RL policies. Implicitly, the authors are making a continuity assumption over the state and action space.\\n(To be more precise: A particular failure case of the action embedding would be if, say, one of the actions (down) has no effect in the part of the space where the uniform policy has explored. Now this becomes an important action in a level down the line where the agent needs to go down a tunnel -- example from Atari's Pitfall. In this case, under the embedding training, since the down action has had no effect in training, this action will not be represented at all. This would mean the RL algorithm could not ever learn to use it). \\nThe co-evolution of the representation and the RL policy is, I think, paramount, especially when dealing with exploration.\\n\\n2) Q: Section 3.2: \\\"we extend... to work with temporally extended actions while maintaining off-policy updates ..\\\". Can the authors expand on how this is done? Both updates in this section seem to be on policy ($\\\\mu$).\\n\\n3) Q: Section 3.2: \\\"Q can only be trained on $N/k$ observations. This has a substantial impact on sample efficiency\\\". Note that this is actually an explicit trade-off between the reduced number of samples we see ($N/k$) and the increased horizon in propagating information, due to the effective k step update. This trade-off need not be optimal for $k=1$.\\n\\n4) Notes on experiments:\\na) It is hard to assess the difficulty of the exploration problems investigated. This relates to point 1) and the implicit assumptions highlighted there. \\nb) It would have been nice to have a study on $k$ and its impact on the sample complexity. The larger the $k$, the harder the representation learning problem becomes; and possibly the larger the number of samples needed to learn in this combinatoric space. How does this trade off against the benefits one could potentially get in the RL phase?\\nc) For the comparison algorithms: were any of these using a temporally extended update rule? Or are all of them 1-step TD-like algorithms? It would be good to separate the effect of the multiple-step update in Sec. 
3.3 and the exploration in this abstract action space.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents an approach to learning state and action representations through self-supervision, such that these representations can be used for downstream reinforcement learning. In particular, the proposed approach learns a time-dilated dynamics model on data collected via self-supervision, which, given s_t and actions (a_t, ..., a_{t+K}), predicts s_{t+K}. The input state and action trajectory are each encoded into latent distributions, which are then used to reconstruct the future state. Then, they demonstrate that using TD3 with the latent action space outperforms existing model-free methods and existing state representation techniques.\\n\\nOverall the paper is well motivated and clearly written. The key contribution seems to be in learning the latent distribution over multi-step action trajectories, which seems to be important for performance. Lastly, the experiments and ablations are thorough and well explained.\\n\\nMy main comments have to do with (1) the fairness of the comparison to existing model-free RL, and (2) an analysis of the temporal abstraction for learning the action distribution.\\n\\n(1): The proposed method first pretrains the latent dynamics model on 100K steps of random data, then trains the proposed TD3 using this action distribution (and the modified critic to support 1-step Q values). While this does outperform the model-free RL methods trained from scratch, it is also using 100K steps' worth of experience that the others don't have access to, which makes it not quite a fair comparison. If you pretrain the critic of TD3 or SAC with the 100K samples, do you still observe the same performance gains?\\n\\n(2): From the ablation study and comparison to other state representation learning techniques in Figure 6, it seems like the most important aspect of the proposed method is using the latent action distribution. This makes sense as it captures longer action sequences, and thus likely is the reason for better exploration and performance. As a result, the exact choice of K seems very important. In the Appendix it states that for the Thrower task K=8, and elsewhere K=4. Do the authors have a sense for how performance changes with the choice of K? I think a plot which compares performance over different choices of K would be very valuable.\\n\\nSome smaller comments:\\n- The comparison to other model-free RL methods is done only on low-dimensional states, while the ablations are done on pixels. Is this because the model-free comparisons did not work at all on pixels?\\n- Is it possible to perform model predictive control with the learned model, and how does it compare to existing latent model-based RL methods (Hafner et al.)?\\n- One more recent work that may be worth comparing to is SLAC (Lee et al.), which also learns a stochastic latent dynamics model and learns a policy in the latent space of the model. The latent space is of states however, and not actions. \\n\\n______________________\\n\\nAlex X. Lee, Anusha Nagabandi, Pieter Abbeel, Sergey Levine. 
Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model\\n\\nDanijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson. Learning Latent Dynamics for Planning from Pixels\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a representation learning method based on predictive information. They compress a start state and an action sequence to predict the following state. Since the latent space is factorized between state and action sequence, it can be used as an abstract action space to accelerate model-free algorithms.\", \"strengths\": [\"While the state representation is a simple successor representation, the action abstraction is a simple method that seems novel.\", \"The multi-step return is a nice way of handling variable horizons in the context of temporally abstract actions.\", \"It is nice to see that the representation learning method can accelerate learning not just from pixels but also when learning from low-dimensional inputs.\", \"The method description and overall writing are very clear.\"], \"weaknesses\": [\"Doesn't the multi-step return render the update on-policy, since the reward sequence is tied to the data-collecting policy? If so, it might be worthwhile to apply off-policy corrections from the literature. If not, this should be explained in Section 3.2.\", \"A comparison across more domains would be desirable. While there are 6 visual tasks, they share only two environments. The paper could be strengthened by comparison on standard benchmarks such as Gym or DMControl. I'm willing to raise my score when these or comparable results are added.\", \"I could not find a clear description of how the hyperparameters of baseline methods were selected, so it is unclear how much of the benefit comes from tuning.\"], \"comments\": [\"Equation numbers are missing on page 4.\", \"An assumption of the work is that the pixel observations are Markovian. Maybe I missed this in the paper, but was there any frame stacking that would make this hold at least approximately?\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents DynE, a self-supervised approach for learning dynamics-aware state and action representations. DynE learns an encoding of individual states and action sequences, assigning nearby embeddings to those that have similar outcomes. This is achieved via a reconstruction objective that predicts the outcome of a sequence of \\\"k\\\" actions from a given state, along with losses that encourage compression in the learned latent representations. Additionally, a learned decoder allows for reconstruction of minimum-norm action sequences from the high-level latent action embedding. Combining DynE state and action embeddings with an actor-critic agent operating directly in the learned high-level action space leads to significant speedups in both from-scratch and transfer learning (on 2D and 3D OpenAI Gym tasks), leading to better data efficiency compared to model-free baselines. 
Additionally, the learned action and state embeddings lend themselves to better exploration and consistent value prediction, respectively.\\n\\nThe paper is very well written and the approach looks quite promising. A few comments:\\n1. The approach is well validated but additional ablation results can help quantify the effect of different components. For example, it would be useful to see the effect of varying \\\"k\\\", the number of actions to be encoded for generating the action embedding. \\n2. A related paper that learns state representations that are physically consistent and dynamics-aware is this work:\\nJonschkowski, Rico, et al. \\\"Pves: Position-velocity encoders for unsupervised learning of structured state representations.\\\" arXiv preprint arXiv:1705.09805 (2017).\\nHere the state representation is learned to implicitly encode physical consistency via self-supervised losses that mimic constraints such as controllability, inertia, conservation of mass, etc. Combining such additional self-supervised losses can help structure the state embedding learning further, albeit at the cost of introducing additional hyperparameters during optimization.\\n3. It would be useful to know what the actions are (and their dimensions) for the tasks considered in the paper.\\n4. The paper would benefit from a short discussion on the limitations of the proposed approach and its potential to scale to more complicated tasks.\\n5. Fig. 5, bottom right: It is not clear why PPO (blue) performs significantly better on this task compared to the other 7DoF tasks considering that the thrower should be more complex than the pusher and striker. PPO also seems to match the data efficiency of DynE-TD3. Is this correct? \\n\\nOverall, I find the approach quite interesting and promising. I would suggest an accept.\", \"typos\": \"1. Intro, 2nd para, 2nd line, many samples to learn than a better one\\n2. Fig. 1, the pixel representation is very unintuitive\"}" ] }
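To make the replay re-encoding step from the authors' response in this record concrete, here is a minimal sketch. The module shapes and names ($e_s$, $e_a$, $d_a$ as plain linear maps) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of re-encoding replayed action sequences with the *current*
# action encoder e_a at policy-update time, as described in the response
# above. All modules and dimensions below are hypothetical stand-ins.
import torch
import torch.nn as nn

k, act_dim, state_dim, z_dim = 4, 2, 8, 3
e_s = nn.Linear(state_dim, z_dim)      # state encoder (stand-in for e_s)
e_a = nn.Linear(k * act_dim, z_dim)    # k-step action encoder (e_a)
d_a = nn.Linear(z_dim, k * act_dim)    # action decoder (d_a), unused here

# The replay stores the raw low-level actions a_t, ..., a_{t+k-1}.
s_t = torch.randn(32, state_dim)             # batch of stored states
raw_actions = torch.randn(32, k * act_dim)   # batch of stored action sequences

with torch.no_grad():
    z_t = e_a(raw_actions)   # fresh latent actions consistent with current d_a
    zs_t = e_s(s_t)          # states re-embedded under the updated encoder
# z_t and zs_t would now feed the Q-function / policy update.
print(z_t.shape, zs_t.shape)
```

Re-encoding with the current $e_a$ keeps the stored transitions consistent with what $d_a(z_t)$ would decode to now, which is the point of modification 3 in the response.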
BJgxzlSFvr
AN ATTENTION-BASED DEEP NET FOR LEARNING TO RANK
[ "Diego Klabjan", "Baiyang Wang" ]
In information retrieval, learning to rank constructs a machine-based ranking model which given a query, sorts the search results by their degree of relevance or importance to the query. Neural networks have been successfully applied to this problem, and in this paper, we propose an attention-based deep neural network which better incorporates different embeddings of the queries and search results with an attention-based mechanism. This model also applies a decoder mechanism to learn the ranks of the search results in a listwise fashion. The embeddings are trained with convolutional neural networks or the word2vec model. We demonstrate the performance of this model with image retrieval and text querying data sets.
[ "learning to rank", "deep learning" ]
Reject
https://openreview.net/pdf?id=BJgxzlSFvr
https://openreview.net/forum?id=BJgxzlSFvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "r53gJhyrWv", "BygEsdrNqr", "rJedKG9y5B", "BJxrToa2YB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798742095, 1572259996186, 1571951231932, 1571769276968 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2161/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2161/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2161/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"All three reviewers felt the paper should be rejected and no rebuttal was offered. So the paper is rejected.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed an attention-based deep neural network for implementing 'learning to rank' algorithm. Particularly, the proposed method implements a listwise approach which outputs the ranks for all search results given a query. The search results are claimed to be sorted by their degree of relevance or importance to the query. However, it is not clear to me how the ranking was decided in equation 6 by the softmax function. For example, as per section 4, the documents of the same topic are considered related, then how the proposed model was trained with one document having higher relevance than others in the same topic category.\\n\\nThere are other confusions that need to be addressed for better understanding. For example, how softmax probabilities can be used as an embedding as described in the line: \\u201cFrom training this model, we may take the softmax probabilities as the embedding, and create different embeddings with different neural network structures. \\u201d Also, what does the line means: \\u201cthe number of documents of the same topic is uniformly distributed from 3 to 7, the number of documents of the same superclass but different topics is also uniformly distributed from 3 to 7, and the remaining documents are of different super classes.\\u201d\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to use attention mechanism for combining different embeddings of the queries and search results. Besides, a decoder mechanism is used to do listwise ranking for the results. The experiments show that the proposed approach outperforms some classic learning-to-rank baselines.\", \"this_paper_is_below_the_bar_of_acceptance_for_the_following_reasons\": \"1.\\tLimited technical contribution: some previous papers have explored the idea of learning attention weights for combining different embeddings, and simply applying this idea to learning-to-rank application does not seem to be a big contribution.\\n\\n2.\\tChoice of datasets: the datasets used in this paper are typically used for tesing classification models rather than ranking models. In these datasets, for each query image/doc, there are many images/docs of the same class that could be considered relevant, which makes the ranking task less challenging. 
Since the paper focuses on the learning-to-rank problem, the authors should probably consider including more datasets dedicated to learning-to-rank problems.\\n\\n3.\\tInsufficient baselines: the baseline methods used in the paper are not very recent (e.g., OASIS, RankSVM and LambdaMart have been proposed for more than 10 years). There have been many neural-network based retrieval/ranking methods proposed in the past 5 years. Hence, the experimental results could be more convincing if the paper included more of these recent methods as baselines.\\n\\n4.\\tLack of justification for the model architecture: some design choices of the model are not well-motivated/justified. For example, how does the decoder mechanism using multiple states in the model (listwise) help improve the ranking results compared to pairwise ranking? An ablation study could help show whether such a decoder mechanism is useful.\\n\\n5.\\tParameter sensitivity study: a study of how hyper-parameter values affect the model performance could also help.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors propose to use attention to combine multiple input representations for both query and search results in the learning to rank task. When these representations are embeddings from differentiable functions, they can be jointly learned with the neural network which predicts rankings. A limited set of experiments suggests the proposed approach very mildly outperforms benchmark approaches.\\n\\nMajor comments\\n\\nTo the best of my knowledge, this is the first paper to apply attention to the learning to rank problem. However, the main methodological innovation seems to be the use of attention to create and train an ensemble of models; this has been previously explored in the literature (e.g., [Kim et al., ECCV 2018]).\\n\\nThe paper is also missing important context in that it omits developments in using deep learning for the learning to rank problem (e.g., [Pang et al., CIKM 2017; Ai et al., WWW 2018]). The experimental evaluation does not include any other deep methods; thus, it is not clear if the (very minor) improvements in performance are due to the deep models or the proposed attention approach.\\n\\nThe datasets used in the experiments are not appropriate for evaluating learning to rank algorithms. A variety of learning to rank datasets are available, and these should be used rather than (or in addition to) the toy datasets considered here. Examples: http://arogozhnikov.github.io/2015/06/26/learning-to-rank-software-datasets.html, http://quickrank.isti.cnr.it/istella-dataset/, https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/\\n\\nMinor comments\\n\\nConcerning Section 3.3, in what sense is SGD used to \\u201ccalibrate\\u201d the model? It seems as though the authors just mean it is used to \\u201ctrain\\u201d the model. However, is there some other meaning of calibration (e.g., in the sense of a Brier score) here?\\n\\nIn Table 1, what is the meaning of a dropout p value of 1? In most deep learning frameworks (e.g., Keras and PyTorch), this would mean all nodes are dropped out.\\n\\nIn what sense are the \\u201c5 randomized runs\\u201d for the experiments randomized? Are different train, test splits used? Or just different random seeds? 
Or something else?\\n\\nHow is it that the error rates are higher when using superclasses for evaluation?\\n\\nTypos, etc.\\n\\nThe paper has several significant problems with the \\u201c\\\\cite\\u201ds and \\u201c\\\\ref\\u201ds in the paper. First, the \\u201c\\\\cite\\u201ds should presumably be \\u201c\\\\citep\\u201ds or something since the references are not set off from the rest of the text. Second, the paper includes references to equation numbers which are not present in the paper, such as \\u201cequation (12)\\u201d. It seems as though the equations are in the paper, but are included in some unnumbered environment (\\u201c\\\\begin{align*}\\u201d or some such). This makes it very difficult to track down to which equations the authors intend to refer. Third, the reference numbers to figures and tables in the text are wrong. For example, the text refers to \\u201cTables 8 and 9\\u201d for 20 newsgroups (at the end of Section 4). Clearly, this is supposed to be Tables 6 and 7. It seems like the authors moved the CIFAR-10 discussion to the appendix but did not update the references in the text.\\n\\nTables 2 and 4 are exactly the same.\\n\\nFigure 4 is not referenced in the text.\\n\\nIt would be helpful to put Figure 2 a bit closer to where it is discussed in the text.\\n\\nThe references are not consistently formatted.\\n\\n\\u201ccomponents of for each\\u201d -> \\u201ccomponents for each\\u201d\\n\\nPlease define acronyms like MAP at least once.\"}" ] }
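As a reading aid for the two mechanisms the reviewers discuss in this record, attention over multiple embeddings of a query/result and the listwise softmax over search results (the role equation (6) plays in the paper), here is a minimal, self-contained sketch. All shapes and the scoring inputs are hypothetical; this is not the authors' code.

```python
# Minimal sketch: softmax attention over several embeddings of one item,
# followed by a listwise softmax over candidate scores. Shapes are assumed.
import torch
import torch.nn.functional as F

n_emb, dim, n_results = 3, 16, 5
embeddings = torch.randn(n_emb, dim)   # e.g., CNN- and word2vec-based embeddings
attn_logits = torch.randn(n_emb)       # would come from a small scoring network

attn = F.softmax(attn_logits, dim=0)                    # attention weights
combined = (attn.unsqueeze(1) * embeddings).sum(dim=0)  # weighted mixture

scores = torch.randn(n_results)        # decoder's score per search result
rank_probs = F.softmax(scores, dim=0)  # listwise distribution over results;
                                       # sorting by rank_probs yields the ranking
print(combined.shape, rank_probs.sum().item())
```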
HkgeGeBYDB
RaPP: Novelty Detection with Reconstruction along Projection Pathway
[ "Ki Hyun Kim", "Sangwoo Shim", "Yongsub Lim", "Jongseob Jeon", "Jeongwoo Choi", "Byungchan Kim", "Andre S. Yoon" ]
We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance. Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks.
[ "Novelty Detection", "Anomaly Detection", "Outlier Detection", "Semi-supervised Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=HkgeGeBYDB
https://openreview.net/forum?id=HkgeGeBYDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "8IJSNB7ruS", "wuI6mcQBA", "BygE_gj3sr", "rklHQgohoH", "HyxKako2jS", "HJgBJ1i3jr", "rJgiUNqjjH", "BkeM2DIHoB", "BklJSBISjH", "r1gVRNLSjr", "S1lmbNX0cB", "HylFcgFRFB", "B1x9jGHjFH" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1581751661735, 1576798742065, 1573855340156, 1573855261157, 1573855169468, 1573854940644, 1573786707399, 1573377961780, 1573377334783, 1573377228444, 1572905978727, 1571881105451, 1571668642239 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/Authors" ], [ "ICLR.cc/2020/Conference/Paper2160/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2160/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2160/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Revisions in the camera-ready version\", \"comment\": \"Below, we summarize three main revisions made in the camera-ready version.\\n\\n1. Removal of the potential statistical bias in the experiments.\\n\\nWe calculated the means and standard deviations for standard scaling from the whole dataset during the preprocessing.\\nThus, we carried out the experiments again with the statistics only from the training set to measure the implications of the forward-looking bias.\\nThe updated experimental results indicate that the forward-looking bias does not affect the conclusion of the manuscript.\\nHence, we updated only experimental results and retained the conclusion in the camera-ready version.\\nMore specifically, we updated the following parts of the manuscript.\\n\\n- Table 2 and 3\\n- Section 5.4.1\\n- Appendix B, C and D\\n\\nTo obtain statistically more stable results, we increased the number of trials for the tabular datasets.\\nWe updated the corresponding explanations in Section 5.4.\\n\\n2. Notational change\\n\\nWe updated parts of notations for greater clarity.\\n\\n- In Figure 1 and its caption, we changed $h_i$ and $\\\\hat{h}$ to functional forms $h_i(x)$ $\\\\hat{h}(x)$, respectively, to increase consistency with the main text.\\n- In Section 4.1, we changed $h$ to $a$ in Equation (2) to indicate that the entity is an activation output.\\n\\n3. Fixing explanation of novelty ratios in test sets\\n\\nWe updated details of novelty ratios in our experimental setup for tabular datasets.\\nSee the last paragraph of Section 5.2\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes to extend the autoencoder loss in a deep generative model to include per-latent-layer loss terms. Two variants are proposed: SAP (simple aggregation along pathway) and NAP (normalized aggregation along pathway). SAP is simply the sum of the squared norm, while NAP performs decorrelation and normalization of the magnitude. 
This was viewed as novel by the reviewers, and the experiments supported the proposed approach.\\n\\nIn the post rebuttal phase, the inclusion of an ablation study has led to an upgrade in the reviewer recommendation. As a result, there was a unanimous opinion that the paper is suitable for publication at ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Moved\", \"comment\": \"The content is moved to the third reply\"}", "{\"title\": \"Additional Comments\", \"comment\": \"[(ia) Comparison to well-known baselines]\\n\\nThe table in Appendix C shows the result of comparing RaPP and baselines not based on deep approaches. With the baseline models, for some datasets best AUROC performance is altered, i.e., baseline models show higher AUROC performance. We\\u2019d like to point out, however, that there is room for improvement for the NAP result since hyperparameter tuning (e.g., depth of deep architecture, size of bottleneck, training epochs and etc) was not performed while the baseline models used tuned hyperparameters as provided in the DSVDD paper [1]. The evaluation of the baselines for SNSR, MNIST, and F-MNIST are still on-going. We will include the results in the final revision.\\n\\n[1] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Mu \\u0308ller, and Marius Kloft. Deep one-class classification. In ICML, 2018.\\n\\n\\n[(ib) Experiments on complex dataset: MVTec AD]\\n\\nWe evaluated RaPP (NAP) on MVTec AD dataset as suggested. For the preprocessing of the dataset, the following procedures were applied. First, each image is grayscaled and the resolution was lowered. Second, the image was segmented to $32\\\\times32$ patches. \\nWe used VAE with 8 layers for each fully-connected encoder and fully-connected decoder, and trained 200 epochs. Below is the result of our evaluations. \\n\\n**Result**\\n\\n+------------+-----------------+----------------+-----------+\\n| Category | Recon AUROC | RaPP AUROC | AE(L2)[1] |\\n+------------+-----------------+----------------+-----------+\\n| Carpet | 0.5846+-0.0022 | 0.5612+-0.0105 | 0.59 |\\n| Grid | 0.5444+-0.0039 | 0.7050+-0.0121 | 0.90 |\\n| Leather | 0.5603+-0.0099 | 0.8269+-0.0227 | 0.75 |\\n| Tile | 0.6055+-0.0007 | 0.5387+-0.0032 | 0.51 |\\n| Wood | 0.6794+-0.0008 | 0.7030+-0.0090 | 0.73 |\\n| Bottle | 0.6744+-0.0111 | 0.7602+-0.0025 | 0.86 |\\n| Cable | 0.6711+-0.0142 | 0.6939+-0.0048 | 0.86 |\\n| Capsule | 0.6781+-0.0092 | 0.8192+-0.0231 | 0.88 |\\n| Hazelnut | 0.7524+-0.0011 | 0.7491+-0.0075 | 0.95 |\\n| Metal Nut | 0.4692+-0.0030 | 0.5889+-0.0026 | 0.86 |\\n| Pill | 0.6275+-0.0065 | 0.6860+-0.0102 | 0.85 |\\n| Screw | 0.8140+-0.0036 | 0.7592+-0.0002 | 0.96 |\\n| Toothbrush | 0.7286+-0.0099 | 0.8559+-0.0300 | 0.93 |\\n| Transistor | 0.5983+-0.0014 | 0.6668+-0.0121 | 0.86 |\\n| Zipper | 0.5318+-0.0334 | 0.6356+-0.0011 | 0.77 |\\n+------------+-----------------+----------------+-----------+\\n| Average | 0.6346+-0.0894 | 0.7033+-0.0930 | 0.8173 |\\n+------------+-----------------+----------------+-----------+\\n\\nThe results show that AUROC obtained from RaPP in general is higher than AUROC obtained from reconstruction only. Yet it is noted that the overall performance is still lower than the quoted performance in the cited paper [1] except Leather and Tile. In order to make an apple-to-apple comparison, more work is needed to include 1) Incorporating with CNN architecture and 2) Data Augmentation, which is missing in our evaluation. 
In our evaluation, the resolution was intentionally lowered due to the shortage of time for the evaluation, but this should also be recovered as well to make a fair comparison. We are currently looking into the possibility of extending RaPP to CNN-based models, which will allow rigorous comparisons of RaPP approach with the other existing approaches on more complex image datasets such as CIFAR-10, MVTec and so on. \\n\\n[1] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger. Mvtec ad\\u2013a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9592\\u20139600, 2019.\"}", "{\"title\": \"Additional Comments\", \"comment\": \"[ About datasets for comparison to other state-of-the-art methods ]\\n\\nWe did experiments to compare RaPP to the state-of-the-art methods for more datasets.\", \"please_refer_to_the_additional_evaluations_in_the_following_responses\": [\"[Extending Table3 to include other datasets] in the last reply for the first reviewer, and\", \"[(ib) Experiments on complex dataset: MVTec AD] in the last reply for the third reviewers\"]}", "{\"title\": \"Additional Comments\", \"comment\": \"[Extending Table3 to include other datasets]\\n\\nOnce again, thank you for your suggestion. In order to make evaluations of the existing methods on the datasets used in Table 1, the following architectural modifications were made for the models in the recent approaches: (1) CNN components were removed and replaced by fully connected (FC) layers. (2) Number of layers and bottleneck size were modified to match that of AE, VAE, AAE in Table 2\\n\\nThe modifications were necessary for the following reasons. First, numeric data has no explicit relation between its features while CNN explicitly utilizes the grid structure of pixels in an image. As such, CNN architecture does not naturally extend to 1D numeric data (not 2D image data). Second, to make the comparison as close as possible, we keep the same number of layers and bottleneck size. Below is the result of the evaluations. We used the same settings for the training as in the cases shown in Table 2. Due to the shortage of time, the evaluation was done only for OCNN and DSVDD in uni- and multi- modal normality cases. GT was excluded since it relies on image-specific data transformations like rotation and flipping.\\n\\n**Result**\\n+---------+------------+-----------+----------+-----------+----------+\\n| Dataset | Type | OCNN MEAN | OCNN STD | DSVDD MEAN | DSVDD STD |\\n+---------+------------+-----------+----------+-----------+----------+\\n| STL | Multimodal | 0.755 | 0.146 | 0.518 | 0.162 |\\n| OTTO | Multimodal | 0.496 | 0.149 | 0.552 | 0.101 |\\n| SNSR | Multimodal | 0.487 | 0.029 | 0.49 | 0.048 |\\n| | | | | | |\\n| MI-F | Unimodal | 0.358 | 0.107 | 0.523 | 0.103 |\\n| MI-V | Unimodal | 0.43 | 0.061 | 0.46 | 0.042 |\\n| EOPT | Unimodal | 0.53 | 0.014 | - | - |\\n| NASA | Unimodal | 0.526 | 0.043 | 0.549 | 0.062 |\\n| RARM | Unimodal | 0.43 | 0.116 | 0.604 | 0.048 |\\n| STL | Unimodal | 0.467 | 0.094 | 0.566 | 0.124 |\\n| OTTO | Unimodal | 0.572 | 0.139 | 0.663 | 0.03 |\\n| SNSR | Unimodal | 0.578 | 0.069 | 0.586 | 0.058 |\\n+---------+------------+-----------+----------+-----------+----------+\\n(The experiment for DSVDD on EOPT is still on-going)\\n\\nThe result shows AUROC is in general lower than that of AE, VAE and AAE with the reconstruction error. 
The only exception is OCNN on STL in the multimodal normality case, where AUROC reaches 0.755. In all cases, AUROC is lower than the highest AUROC among AE, VAE, and AAE results. \\n\\nWe\\u2019d like to point out that more study is needed to make the comparison rigorous. In this regard, it would be interesting to see how CNN-based models could extend to incorporate 1D numeric data (e.g., sensors or time-series data). We are currently looking into the possibility of extending RaPP to CNN-based models, which will allow the evaluations of RaPP on more complex image datasets such as CIFAR-10, MVTec and so on.\"}", "{\"title\": \"Additional comments\", \"comment\": \"[(ic) Experimental results with subsequently adding hidden layers]\\n\\nWe obtained the results for the MNIST and STL datasets, and will add them in the appendix. In general, performance tends to get higher as more layers are used.\\n\\n\\n[Cost of SVD]\\n\\nWe conducted experiments to evaluate the scalability of SVD with the PyTorch implementation and Facebook\\u2019s implementation of a fast randomized algorithm (fbpca) [2]. Note that PyTorch SVD [3] partially utilizes GPU, but fbpca only runs on CPU.\\n\\n** Setup **\\nThe experiments were carried out for MNIST, SNSR, and OTTO, which have a large number of samples and high dimensionality, as can be seen in Table 1. Since the time complexity of SVD is linear in the number of data samples (because n_samples > data_dim in most of the cases), we mainly tested the performance of SVD across various depths and bottleneck sizes of networks, which is directly related to the number of columns of the matrix fed to SVD.\\n\\n** Result **\\nWe observed that PyTorch SVD (used in our paper) is faster than fbpca, and takes much less time than autoencoder training. In our result, PyTorch SVD and fbpca are at least 47x and 6.5x faster than training an autoencoder, respectively.\\nWe will add the result in the following revision.\\n\\n** Conclusion **\\nThe impact of the SVD computation (required only at a training phase) in NAP is relatively small compared to training an autoencoder in practical setups.\\n\\n[1] Halko, N. and Martinsson, P. G. and Tropp, J. A. \\u201cFinding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions.\\u201d SIAM Rev., 53(2), 217\\u2013288, 2011\\n[2] https://fbpca.readthedocs.io/en/latest/\\n[3] https://pytorch.org/docs/stable/torch.html?highlight=svd#torch.svd\\n\\n\\n[About other feedback]\\n\\n** Minor Comment 15 **\\nWe used \\u201czero-knowledge\\u201d as a non-technical term to indicate when no prior knowledge exists for the selection of layers to derive a novelty metric. Our approach treats all the layers equally to calculate the metrics. We will revise the statement to avoid confusion in the next revision. \\n\\n\\n** Minor Comment 18 **\\n\\nWe used as many data samples as possible for every setup, except for MNIST and FMNIST with the unimodality setup, for which we followed the GPND setup [4]. Since AUROC is invariant in expectation regardless of the proportion of novelty samples, we took the maximum proportion of 50% used in [4].\\n\\n[4] Stanislav Pidhorskyi, Ranya Almohsen, and Gianfranco Doretto. Generative probabilistic novelty detection with adversarial autoencoders. In NeurIPS, pp. 
6823\\u20136834, 2018.\\n\\n\\n** Other Minor Comments** \\nWe will revise our paper as suggested by the reviewer.\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for the detailed feedback and ideas for improvement.\\nAt this time, we are sharing what we've done so far on your ideas.\\n\\n[(ia) Comparison to well-known baselines]\\n\\nThank you for the suggestion. We are evaluating the baselines you suggested, and will add the result in our revision.\\n\\u200b\\n\\n[(ib) Comparison on more complex datasets]\\u200b\\n\\nWe are also trying to do additional experiments on a complex image dataset. As soon as we get the result, we will report it as a reply.\\n\\u200b\\n\\n[(ic) Experimental results with subsequently adding hidden layers]\\n\\nWe indeed have experimental results about the comment. We will add the result to our revision.\\n\\n\\n[Cost of SVD]\\n\\nAs you pointed out, SVD takes quite a bit of computation resources: e.g. $nm^2$ for a full SVD in our case. However, since SVD computation is required only at a training phase in RaPP, we are more flexible in utilizing computational resources. To be more efficient, we can also employ probabilistic approximation algorithms [1]. \\nWe will add an explanation about this in our revised paper. Also, we are carrying out experiments to check computational advantage of [1] with the implementation provided in [2], and we will share the results in the next reply.\\n\\n[1] Halko, N. and Martinsson, P. G. and Tropp, J. A. \\u201cFinding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions.\\u201d SIAM Rev., 53(2), 217\\u2013288, 2011\\n[2] https://fbpca.readthedocs.io/en/latest/\\n\\n\\n[(iii) Revision of Section 4]\\n\\u200b\\nWe would like to clarify the motivation in Section 4.\\n\\n** Background **\\nLet us consider a symmetric autoencoder $A$. In general, its pair of the corresponding encoding and decoding layers is not guaranteed to express the same space: an obvious example is permuted dimensions. This is because training $A$ does not care about activations from intermediate hidden layers. As a result, directly comparing $g_{:i}(x)$ and $f_{\\\\ell:i+1}(g(x))$ does not make sense, except for $i=0$ with which the comparison becomes the same as computing ordinary reconstruction error. This makes the concept of \\u201chidden reconstruction\\u201d not defined as done with $A$ for the ordinary reconstruction, though it sounds reasonable.\\n\\n** What we showed **\\nNevertheless, we show that activation vectors in an encoding hidden layer obtained by feeding the original input $x$ and its reconstruction $\\\\hat{x}=A(x)$ to the same network $A$ have the relation of input and reconstruction for the corresponding hidden space. That is, $g_{:i}(A(x))$ is equivalent to a reconstruction for $g_{:i}(x)$ in the $i$-th hidden space of $A$.\\n\\n** Assumption **\\nFor the conclusion above, we only assumed that given a trained autoencoder $A$, $x = A(x)$ for $x\\\\in M_0$ where $M_0$ is a low dimensional manifold in the paper. With this assumption, $g$ restricted on $M_0$ and $f$ restricted on $M_\\\\ell$ must become one-to-one functions. 
Here, we note that we did not make the statement on training data but on the manifold $M_0$ that the trained autoencoder describes.\\n\\nIn Section 4, we tried to explain what the quantity $\\\\hat{h}_i(x)=g_{:i}(A(x))$ means, and why it is meaningful in connection to the well-known reconstruction concept.\\nReviewing Section 4 by ourselves, we think readers can be confused about this point. We will make it clearer in our revision.\\n\\n[Reporting standard deviations]\\n\\nWe will add the result in the appendix of our revised paper.\\n\\n[About other feedback]\\n\\nWe are now working on a revision and will include your feedback.\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for the feedback and the suggestion.\\n\\n[About Mahalanobis distance]\\n\\nThey are indeed equivalent. Thank you for pointing this out. If we let the covariance matrix be $S$, in our terminology, $S = \\\\overline{D}^T\\\\overline{D} = V\\\\Sigma \\\\Sigma V^T$. \\nTherefore, the Mahalanobis distance $(d - \\\\mu)^T S^{-1} (d - \\\\mu) = (d - \\\\mu)^T V\\\\Sigma^{-1} \\\\Sigma^{-1}V^T (d - \\\\mu) = ||\\\\overline{d}^T V\\\\Sigma^{-1}||_2^{2}$, which is the same as $S_{NAP}$.\\n\\nThere was a typo in the manuscript ($d$ must be $\\\\overline{d}$), and we will fix it in our revision. Since our code was correctly written, the numbers in the current manuscript were obtained with the corrected expression above.\\n\\n\\n[About VAE inference]\\n\\nAs you pointed out, we carried out experiments with the mean component (\\u2018mu\\u2019) given by the encoder during the inference phase and found the results are still consistent within the quoted standard deviation (and potentially better). We also found that we included the results with K = 1 (K: number of latent samples for the reparameterization trick) in the paper instead of K = 10. Thus, we additionally carried out experiments with K = 10, as originally intended, and found the results are also within the quoted standard deviation. See the table below. Evaluations were repeated 5 times to estimate the mean and standard deviation. We will update the paper as well as the code with the results obtained from VAE using the mean (\\u2018mu\\u2019) for the inference. 
\\n\\n\\n+------------+-------------+---------------+---------------------+----------------------+---------------------+\\n| Dataset | Training | Inference | recon | SAP | NAP |\\n+------------+-------------+---------------+----------------------+----------------------+--------------------+\\n| MNIST | k=1 | k=1 | 0.8636+-0.1789 | 0.9070+-0.0779 | 0.9270+-0.0666 |\\n| +-------------+---------------+----------------------+----------------------+--------------------+\\n| | k=10 | k=10 | 0.8965+-0.1687 | 0.9603+-0.0411 | 0.9613+-0.0388 |\\n| +-------------+--------------+-----------------------+----------------------+--------------------+\\n| | k=10 | mu | 0.9038+-0.1619 | 0.9621+-0.0433 | 0.9654+-0.0348 |\\n+------------+-------------+--------------+-----------------------+----------------------+--------------------+\\n| FMNIST | k=1 | k=1 | 0.7101+-0.1379 | 0.6714+-0.1068 | 0.7365+-0.1275 |\\n| +-------------+--------------+-----------------------+----------------------+--------------------+\\n| | k=10 | k=10 | 0.7203+-0.1440 | 0.7320+-0.1258 | 0.7614+-0.1208 |\\n| +-------------+--------------+-----------------------+----------------------+--------------------+\\n| | k=10 | mu | 0.7244+-0.1439 | 0.7483+-0.1229 | 0.7678+-0.1199 |\\n+------------+-------------+---------------+---------------------+----------------------+----------------------+\\n\\n\\n[ About datasets for comparison to other state-of-the-art methods ]\\n\\u200b\\nWe are trying to do additional experiments on a complex image dataset. As soon as we get the result, we will report it as a reply.\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for the feedback.\\n\\n[Extending Table3 to include other datasets]\\nWe will answer to your suggestion as soon as ready. We are now examining the applicability of your suggestion.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a novelty detection method by utilizing latent variables in auto-encoder. Based on this, this paper proposes two metrics to quantifying the novelty of the input. Their main contribution is the NAP metric based on SVD. Their method is empirically demonstrated on several benchmark datasets, and they compare their proposed metrics with other competing methods using AUROC and experiments results are encouraging.\\n\\nThe metrics proposed in this paper are intuitive and interesting. The experiments shown in Table2 is very convincing, and it could be better to extend Table3 to include other datasets (STL,OTTO, etc. )\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I have read the reviews and the comments.\\n\\nI appreciate the effort of the authors. I feel positive about the paper and I think it should be accepted.\\n\\nI confirm my rating.\\n\\n=================\\nThe paper proposes a new method for novelty detection that is based on measuring the reconstruction error in latent space between layer of the encoder. 
\\nThe reconstructed sample is fed back to the encoder and activations of the hidden layers of the encoder are compared with the activations that occurred when the original sample was fed into it.\\n\\nTo aggregate the reconstruction error from all layers of the encoder, two methods are proposed SAP (simple aggregation along pathway) and NAP (normalized aggregation along pathway). SAP is simply the sum of the squared norm, while NAP performs decorrelation and normalization of the magnitude.\\n\\nThe idea is novel, well motivated and explained.\\n\\nIt is said in the paper that NAP performs distance normalization by doing orthogonalization and scaling. The way it is described seems to be equivalent to PCA whitening. Thus, the computed distance should be a Mahalanobis distance.\\n\\nIt is not clear why for the VAE case 10 samples are averaged, instead of just using the mean component given by the encoder and passing it to decoder. It is typical to use reparametrization only during training.\\n\\nComparison with other state of the art methods is somewhat weak, since only two similar datasets are used (MNIST and F-MNIST).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"UPDATE:\\nI acknowledge that I\\u2018ve read the author responses as well as the other reviews. \\n\\nI appreciate the improvements and clarifications the authors have made, especially adding an ablation study to see the benefits of adding additional layers. I updated my score to Weak Accept (6). \\n\\n####################\\n\\nThis paper considers deep autoencoders (AEs) for the unsupervised novelty/anomaly detection task and proposes to extend the standard AE anomaly score, given by the reconstruction error between the input and output in the original data space, to also utilize the reconstruction errors of the hidden activations in the AE network. The proposed method, Reconstruction along Projection Pathway (RaPP), specifically compares the hidden activations of all encoder layers given by the original input $x$ with the activations of the same units given by feeding the reconstruction $\\\\hat{x}$ back into the AE. Thus RaPP compares the activation statistics of the original input $x$ and its reconstruction $\\\\hat{x}$ along the encoder projection pathway from original data space to latent code space. Two ways for aggregating those reconstruction errors to a final anomaly score are presented: (1) Simple Aggregation Along Pathway (SAP) which simply computes the sum of reconstruction errors, and (2) Normalized Aggregation Along Pathway (NAP) which computes the sum of reconstruction errors after normalization via Singular Value Decomposition (SVD). 
The paper then presents experiments on eight datasets from various domains, in which SAP and NAP are compared to the reconstruction error baseline for vanilla AE, VAE, and AAE, as well as experiments on MNIST and Fashion-MNIST in which NAP is compared to state-of-the-art deep anomaly detectors.\\n\\nThough this work is well presented and indicates promising results, I think the paper should not yet be accepted due to the following main reasons: \\n(i) The experimental evaluation indicates promising, but not yet convincing results; \\n(ii) The computational complexity of NAP seems to be a major limitation of RaPP which is not addressed in the text; \\n(iii) The added value/insights from the theoretical Section 4 (Motivation of RaPP) are not clear.\\n\\n(i) I think the experimental section shows promising, but not yet convincing results. To judge the significance of results, I think the paper should address the following:\\n(ia) The experiments on the eight non-image datasets should include other baselines (e.g. OC-SVM, Isolation Forest) besides the standard AE reconstruction error. One should expect SAP and NAP to improve over the standard AE since both methods include the original data space reconstruction errors as well. Moreover, the advantage of deep approaches on such non-image datasets is less clear [7], which is why a comparison to well-known baselines should be given.\\n(ib) The main motivation for deep approaches to anomaly detection is large and complex datasets [6, 5, 4, 2]. I think the comparison to recent, state-of-the-art deep competitors should at least include another dataset more complex than MNIST or Fashion-MNIST, e.g. CIFAR-10 as reported in the previous works or MVTec [1]. \\n(ic) I think the proposed method begs for an ablation study of subsequently adding the reconstruction errors of additional layers. This would clearly demonstrate the potential benefits of adding the hidden reconstructions.\\n\\n(ii) The experiments indicate that a proper normalization of the hidden activation reconstruction errors is crucial for improving detection performance. NAP shows consistent improvements, whereas SAP often performs similarly to the AE baseline. However, the current SVD normalization procedure on a matrix with dimensions number of samples \\u00d7 number of hidden encoder units seems extremely costly to me and appears to be a major limitation for larger datasets or networks. Could you comment on this, since it is not yet addressed in the manuscript? Have you tried using Batch Normalization (after activation) together with per-layer averaging? To me, this seems the natural first choice to normalize unit scores and to account for different layer widths. Do you apply SVD on mini-batches?\\n\\n(iii) The additional insights from the theoretical Section 4 are not clear to me. I think the presented reconstruction property for the hidden layers follows somewhat directly per definition for symmetrically constructed deep autoencoders (specifically if the weights were shared in addition). For a theoretical contribution, on the other hand, the proof and proposition should be fully rigorous in my mind, i.e. stating all the necessary assumptions on the function class (e.g. you implicitly assume invertibility and thus some smoothness of the $g_i$'s, which Conv+ReLU modules, for instance, do not satisfy). As of now, I think this section does not add to intuition, but on the other hand is not completely rigorous. 
Maybe I am missing something?\\n\\nThe overall presentation of the paper is good (clear writing and structure, polished Figures and Tables). The work is well motivated and properly placed in the literature. Maybe since the approach is rather simple (which I don\\u2019t find negative), the authors felt the need to add some rigor to the paper, which I think would not be necessary for a significant contribution if the experimental results hold up against the additional baselines and more complex datasets as described in (i).\\n\\n\\n####################\\n*Additional Feedback*\\n\\n*Positive Highlights*\\n1. A simple idea that requires no autoencoder modification or retraining and that indicates improved anomaly detection results.\\n2. The work is well placed in the literature. The related work includes all relevant and recent major works on the subject matter.\\n3. I appreciate the evaluation on both anomaly/novelty detection setups, unimodal and multimodal.\\n4. Comparison to recent OC-NN [3], GPND [5], Deep SVDD [6], and GT [4].\\n5. The writing, structure and overall presentation are good.\\n\\n*Ideas for Improvement*\\n6. Include additional baselines and more complex datasets as described in (i).\\n7. Address the computational complexity of RaPP as described in (ii).\\n8. Maybe cut the methodical/theoretical parts in Section 3.2 and Section 4 a bit. I think they are rather straightforward. Maybe combine Figures 1+2 as well. Extend the experimental evaluation instead.\\n9. Report the AUROC standard deviations over the trials as well to better infer statistical significance of the results (defer to appendix if space is a constraint).\\n\\n*Minor comments*\\n10. Section 2: \\u201cUnsupervised and semi-supervised learnings\\u201d \\u00bb \\u201cUnsupervised and semi-supervised learning approaches\\u201d.\\n11. Section 2: \\u201cVariational Autoencoders (VAE) was reported ...\\u201d \\u00bb \\u201cVariational Autoencoders (VAE) were reported ...\\u201d\\n12. Section 3.1: \\u201cDue to this representation learning property, the autoencoder has been widely used for novelty detection.\\u201d \\u00bb emphasis on unsupervised learning property, specifically.\\n13. Section 3.1: \\u201cAlthough this approach has shown a promising result in novelty detection ...\\u201d \\u00bb \\u201cAlthough this approach has shown promising results in novelty detection ...\\u201d\\n14. Section 3.1, last sentence: \\u201c... in more details.\\u201d \\u00bb \\u201c... in more detail.\\u201d\\n15. Section 3.2: \\u201cThose are especially suited for the case of zero-knowledge to interpret identified hidden spaces, which commonly happens when modeling with deep neural networks.\\u201d Zero-knowledge case? Reference?\\n16. In Section 5.1: \\u201cFurther setups are described in Section 5.1\\u201d?\\n17. Section 5.4.1: \\u201cAlso, we showed the best score ...\\u201d \\u00bb \\u201cAlso, we show the best score ...\\u201d.\\n18. Section 5.2: \\u201c... maintaining novelty ratios to 35% for the multimodal and 50% for the unimodal normality setups, respectively.\\u201d Why use different ratios?\\n\\n\\n####################\\n*References*\\n[1] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger. Mvtec ad\\u2013a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9592\\u20139600, 2019.\\n[2] R. Chalapathy and S. Chawla. Deep learning for anomaly detection: A survey. arXiv preprint arXiv:1901.03407, 2019.\\n[3] R. 
Chalapathy, A. K. Menon, and S. Chawla. Anomaly detection using one-class neural networks. arXiv preprint arXiv:1802.06360, 2018.\\n[4] I. Golan and R. El-Yaniv. Deep anomaly detection using geometric transformations. In NIPS, 2018.\\n[5] S. Pidhorskyi, R. Almohsen, and G. Doretto. Generative probabilistic novelty detection with adversarial autoencoders. In NeurIPS, pages 6822\\u20136833, 2018.\\n[6] L. Ruff, R. A. Vandermeulen, N. G\\u00f6rnitz, L. Deecke, S. A. Siddiqui, A. Binder, E. M\\u00fcller, and M. Kloft. Deep one-class classification. In International Conference on Machine Learning, pages 4393\\u20134402, 2018.\\n[7] L. Ruff, R. A. Vandermeulen, N. G\\u00f6rnitz, A. Binder, E. M\\u00fcller, K.-R. M\\u00fcller, and M. Kloft. Deep semi-supervised anomaly detection. arXiv preprint arXiv:1906.02694, 2019.\"}" ] }
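To ground the SAP/NAP discussion in the record above, the following is a small self-contained numpy sketch. The two-layer linear "autoencoder" is a toy stand-in for the paper's model; only the aggregation logic (plain squared errors for SAP, SVD-whitened squared errors for NAP) reflects the described metrics. The last line also illustrates the point settled in the author response: the NAP score coincides, up to a constant factor, with a Mahalanobis distance under the covariance of the hidden-space differences.

```python
# Toy sketch of RaPP-style aggregation: compare per-layer encoder activations
# of x and of its reconstruction A(x), then aggregate as SAP or NAP.
# The linear encoder/decoder here is a stand-in, not the paper's network.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(20, 10))
W2 = rng.normal(size=(10, 5))

def hidden_acts(x):
    h1 = x @ W1              # first encoder layer, g_{:1}(x)
    h2 = h1 @ W2             # second encoder layer, g_{:2}(x)
    return [h1, h2]

def reconstruct(x):
    # Least-squares "decoder" via pseudo-inverses, so A(x) ~ x on the range.
    h1, h2 = hidden_acts(x)
    return h2 @ np.linalg.pinv(W2) @ np.linalg.pinv(W1)

X = rng.normal(size=(200, 20))
diffs = [h - hr for h, hr in zip(hidden_acts(X), hidden_acts(reconstruct(X)))]
d = np.hstack(diffs)                       # concatenated per-layer differences

sap = (d ** 2).sum(axis=1)                 # SAP: plain sum of squared errors

mu = d.mean(axis=0)
D = d - mu
U, s, Vt = np.linalg.svd(D, full_matrices=False)
s = np.maximum(s, 1e-12)                   # guard against tiny singular values
nap = (((D @ Vt.T) / s) ** 2).sum(axis=1)  # NAP: whitened squared norm, i.e., a
                                           # Mahalanobis distance w.r.t. S = D^T D
print(sap.shape, nap.shape)
```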
BJg1fgBYwH
SAFE-DNN: A Deep Neural Network with Spike Assisted Feature Extraction for Noise Robust Inference
[ "Xueyuan She", "Priyabrata Saha", "Daehyun Kim", "Yun Long", "Saibal Mukhopadhyay" ]
We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs. The proposed network augments a DNN with unsupervised learning of low-level features using spiking neuron network (SNN) with Spike-Time-Dependent-Plasticity (STDP). The complete network learns to ignore local perturbation while performing global feature detection and classification. The experimental results on CIFAR-10 and ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images.
[ "Noise robust", "deep learning", "DNN", "image classification" ]
Reject
https://openreview.net/pdf?id=BJg1fgBYwH
https://openreview.net/forum?id=BJg1fgBYwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "A2F7gqAPho", "rylXm642oH", "Bkef03NniS", "Byehu3VnsS", "Hygo5iVniS", "ryew4lks5S", "rygtIBsQ5S", "rkgGEGy-5r", "rkxVKVvAwB", "SklLEylRDr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798742036, 1573829915385, 1573829834513, 1573829748181, 1573829522953, 1572692014810, 1572218193509, 1572037161727, 1569776763772, 1569746734242 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2159/Authors" ], [ "ICLR.cc/2020/Conference/Paper2159/Authors" ], [ "ICLR.cc/2020/Conference/Paper2159/Authors" ], [ "ICLR.cc/2020/Conference/Paper2159/Authors" ], [ "ICLR.cc/2020/Conference/Paper2159/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2159/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2159/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2159/Authors" ], [ "~Daiheng_Gao2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes to improve noise robustness of the network learned features, by augmenting deep networks with Spike-Time-Dependent-Plasticity (STDP). The new network show improved noise robustness with better classification accuracy on Cifar10 and ImageNet subset when input data have noise. While this paper is well written, a number of concerns are raised by the reviewers. They include that the proposed method would not be favored from computer vision perspective, it is not convincing why spiking nets are more robust to random noises, and the method fails to address works in adversarial perturbations and adversarial training. Also, Reviewer #2 pointed out the low level of methodological novelty. The authors provided response to the questions, but did not change the rating of the reviewers. Given the various concerns raised, the ACs recommend reject.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply to Review #2\", \"comment\": \"Thank you for your review.\\n\\n>>Key contribution of the paper\\n\\nAs mentioned in response to the reviewer #3, the key novelty of the paper is the hybrid architecture that couple features learned via STDP and trained via SGD to create a single classification network. During inference, the hybrid network acts as a DNN and shows better classification accuracy for clean images, as well as improved robustness against input perturbation (noise). To the best of our knowledge, this is first ever demonstration of a DNN, where hybridization of STDP and SGD is performed during training. \\n\\n>>Comparison to De-noising\\n\\nPlease see response to reviewer #2 on the qualitative comparison with de-noising network as well as noise-trained DNNs. \\n\\n>>Scope of the Input Noise Type\", \"we_conducted_more_experiments_on_other_noise_distributions_including\": \"Wald, Poisson, and Salt-and-pepper. Moreover, based on the reviewer\\u2019s suggestion we have also included adversarial noise generated via black-box attack. The results are included in the revised paper, and showed that SAFE-DNN, trained only on clean images, show appreciable robustness to different types of noise structures and magnitude.\\n\\n>>Relation to adversarial noise and \\u201cattacks\\u201d\\n\\nThe paper is primarily focused on noise or perturbation in the input image that can occur naturally, for example, due to sensor hardware or capturing environment. 
We acknowledge that there also exist targeted perturbations such as adversarial attacks. We have included new test results considering adversarial noise generated via black-box attack methods and demonstrated that SAFE-DNN can improve robustness to adversarial noise as well. \n\nWe would like to stress that the paper does not claim to address the domain of adversarial attack. In particular, the current form of SAFE-DNN is vulnerable to white-box attacks, as it is possible to extract gradient information from the SAFE-DNN architecture. That being said, well-studied defense methods such as adversarial training can, in principle, be adapted for the STDP learning and deep network training stages, which can improve robustness against white-box attacks. However, designing a SAFE-DNN for adversarial attacks requires complementary efforts, including extensive studies and experimental results, and is outside the scope of the current paper.\"}", "{\"title\": \"Reply to Review #1\", \"comment\": \"Thank you for your review.\\n\\n>> Novelty in Combining STDP and CNN. \\n\\nWe agree (and like) that the concept is simple: STDP-learned and SGD-trained features are combined to provide improved classification performance under input perturbation. However, as mentioned in the previous response, this was achieved through innovation in (i) the STDP algorithm, to improve robustness against input perturbation; and (ii) the hybridization, to seamlessly integrate STDP-learned features into the DNN pipeline during training and inference. It is important to realize that the hybrid network behaves as a DNN during inference and there are no spiking neurons during inference. \\n\\n>> Explanation of the noise robustness of SNN learned features: \\n\\nWe have improved Section 3 and hope it provides a better reasoning for the robustness of the low-level features extracted by the SNN. \\n\\nWe are intrigued by the reviewer\\u2019s question on whether a normal CNN trained using methods other than back propagation will show similar noise robustness or not. In principle, we feel an alternative to back-propagation that does not distribute errors globally during training may achieve a similar objective; however, at this point we do not have any empirical evidence or quantitative results to prove (or disprove) this hypothesis. \\n\\n>> What kind of input noise are we considering in the analysis?\\n\\nThe initial paper considered pixel-level noise modeled as a Gaussian variable. \\n\\n>> Experiments are only tested under one kind of random perturbation with different strengths. I think it will be better if the algorithm can consistently improve over various kinds of noise distributions.\\n\\nWe conducted more experiments on other noise distributions, including Wald, Poisson, and salt-and-pepper. Moreover, based on the suggestion from Reviewer #2, we have also included adversarial noise generated via black-box attack. The results are included in the revised paper and show that SAFE-DNN, trained only on clean images, has appreciable robustness to different noise structures and magnitudes.\\n\\n>> It is mentioned in the introduction that some methods were proposed to filter out the input noise, but they are not compared in the experiments. \\n\\nWe have considered one popular approach for noise removal (model-based), namely, average filtering. As observed, the filtering helps for noisy images but significantly degrades the quality for clean images.
Please see the response to Reviewer #3 for the qualitative discussion of SAFE-DNN versus input denoising. \n\n>>What's the training time of the proposed method?\n\nWe have included training times for both the SNN and SAFE-DNN implementations in Section 4: \u201c\u2026 for CIFAR10, using a desktop machine with Intel Core i7-7700K and two NVIDIA GTX 1080 Ti GPUs, SNN simulation takes 265 minutes. Training time for SAFE-MobileNetV2 is 63 minutes, for SAFE-ResNet101, 412 minutes, and for SAFE-DenseNet121, 274 minutes.\u201d\"}", "{\"title\": \"Reply to Review #3\", \"comment\": \"Thank you for your review.\\n\\n>>Innovation with respect to SNN Literature\\n\\nSpiking neural networks (SNNs) are an attractive idea for realizing biologically plausible neural networks and have been widely studied. However, SNNs based on pure STDP learning have yet to show performance comparable to DNNs, in particular for complex datasets like ImageNet. More recently, there have been many efforts to realize supervised training in SNNs via back propagation, which can achieve performance comparable to DNNs. These prior works on SNN-DNN conversion focus on generating a deep network with a spiking activation function (primarily to save energy) but do not have unsupervised learning. \\n\\nTo the best of our knowledge, the proposed method is the first effort to create a hybrid network that successfully couples supervised training in a DNN with local unsupervised learning in an SNN in a single architecture. Note that the hybrid network presented here is not a simple cascade of SNN and DNN where the SNN acts as a pre-processor. Instead, we propose a tighter coupling of the two by integrating features learned via STDP and features trained using SGD into a single model. The unsupervised learning presented here goes beyond traditional STDP. We present an innovative frequency-dependent stochastic STDP formulation that improves the ability of the network to extract features that are robust to local perturbation. Additional results are added to illustrate this advantage. The local and cross-depth inhibition, a relatively new concept in STDP-based learning, has also been incorporated. \\n\\nAfter the STDP learning is completed for the SNN component, we present a new conversion approach, where the SNN architecture is converted to an equivalent DNN by (i) removing input spike generation, (ii) re-scaling of weights, and (iii) converting the spiking activation function to a special activation function. The conversion allows the hybrid network to be trained as a DNN, preserving the accuracy for baseline images. \\n\\nIn summary, in contrast to prior works on ANN-to-SNN conversion that essentially create an SGD-trained deep SNN with spiking activation, this paper creates a final hybrid network that behaves as a regular DNN during inference but hybridizes STDP and SGD during training to enhance the learning capability of the network.\\n\\n>>Method of choice for noise-robust computer vision\\n\\nNoise-robust classification can be achieved using two complementary approaches: (i) pre-processing the input via de-noising networks, and (ii) improving robustness of the classification network against input perturbation. SAFE-DNN is an approach for (ii) and can be integrated with techniques for (i), as SAFE-DNN does not degrade accuracy for clean images.\\n\\nWe acknowledge that using deep learning techniques for image de-noising is a well-studied area, and many such methods show very good performance for arbitrary noise structures.
However, de-noising adds a new stage to the processing pipeline, increasing the overall latency. Moreover, for light-weight de-noising networks, if there is a significant difference between the noise structure during training and inference, the quality degrades (Na et al., Xie et al.). Also, as de-noising changes the input structure, it can degrade accuracy for clean images. Advanced networks have been proposed to generalize well for different noise levels, but their complexity is high, increasing the complexity of the overall system. In contrast, SAFE-DNN introduces negligible overhead. For example, even for the lightweight de-noising network shown by Na et al., the overhead is 2.5 times more parameters (0.187M versus 0.07M).\n\nThe direct comparison of SAFE-DNN will be with other approaches for (ii), for example, networks trained with noisy images as presented in the paper. In that sense, SAFE-DNN learns to become robust without ever being trained on the noisy images, which ensures the network generalizes easily to very different noise structures. We have added additional results in the paper to clearly demonstrate the ability of the network to show robustness to noise of different magnitudes and structures. This is the most important advantage of SAFE-DNN over noise-trained DNNs. \n\nWe note that the proposed network is not in conflict with pre-processing techniques such as de-noising DNNs, meaning that SAFE-DNN can be considered as an addition to de-noising. We are currently implementing advanced de-noising networks in our pipeline to show how an integrated pre-processing + SAFE-DNN system will perform, and new results will be added to the final paper, if accepted. \n\nTaesik Na, et al. Noise-robust and resolution-invariant image classification with pixel-level regularization. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.\n\nJunyuan Xie, et al. Image denoising and inpainting with deep neural networks. Advances in Neural Information Processing Systems 25, pp. 341\u2013349. Curran Associates, Inc., 2012.\"}", "{\"title\": \"Revision of script\", \"comment\": \"1.\\tWe have modified the introduction to better articulate the contribution of the work in the context of prior works on noise-robust DNNs, as well as prior works on SNNs.\\n2.\\tWe have added additional details on prior works on noise-robustness, including de-noising, and explained the difference between the proposed approach and image de-noising. \\n3.\\tThe text in Section 3 is modified to better explain the impact of STDP on improving robustness to noise. \\n4.\\tWe have modified Section 4 to explain the contribution/novelty of the work compared to prior works on SNNs. \\n5.\\tBased on the reviewers\\u2019 suggestions, more experimental results are included in Section 5. Those results are: \\n a.\\tEmbedding space visualization comparison for FD stochastic STDP and deterministic STDP to illustrate the role of the novel STDP learning techniques proposed in this work.
\n b.\tTraining time analysis of SAFE-DNN.\n c.\tTest on additional noise types and structures.\n d.\tTest on adversarial noise generated using black-box attacks.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a hybrid network architecture that can integrate features extracted via supervised training and unsupervised neuro-inspired learning. The paper is well-written and the experimental results seem sensible. The experimental results mainly revolve around testing the networks over noise added to training images. The problem of image denoising is very well-studied and very good methods have been proposed for image denoising under arbitrary noise using deep learning (see the works in CVPR, ICCV, ECCV etc.). Unfortunately, I am not in a position to judge the novelty wrt the spiking neural network literature. Nevertheless, as far as computer vision or general applications are concerned, the proposed pipeline would not be among the methods of choice. Hence, I am recommending weak reject for now, waiting for a more informed opinion to see if I will change my opinion.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper shows that replacing feature extraction layers by a spiking convolutional network can improve the performance under random noise. The algorithm itself is simple since it's just a combination of STDP and a standard CNN. The results show improved performance under some random noise. Although the idea is cute, I feel the paper fails to convince why spiking nets are more robust to random noise; the explanation using backprop rules in section 3 sounds interesting but does not fully convince me; for example, if we train a CNN by another approach instead of back-propagation, can we also improve robustness to input noise? Also, what kind of input noise are we considering in the analysis?\\n\\nAlso, I have some questions on the experiments: \\n\\n1. Experiments are only tested under one kind of random perturbation with different strengths. I think it will be better if the algorithm can consistently improve over various kinds of noise distributions. \\n\\n2. It is mentioned in the introduction that some methods were proposed to filter out the input noise, but they are not compared in the experiments. \\n\\n3. What's the training time of the proposed method?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper develops a method to augment deep neural networks with Spike-Time-Dependent Plasticity (STDP), aiming at improving the noise robustness of the network-learned features. In the hybrid network, the learned feature is the concatenation of traditionally supervised-ly learned features and those from an auxiliary module trained locally and unsupervised-ly by STDP.
The new network demonstrates improved noise robustness via improved classification accuracy on Cifar10 and an ImageNet subset when the input data have noise, on different network architectures.\n\nThe paper, however, fails to address the many works in the literature on adversarial perturbations ('attack') and adversarial training ('defense'), starting with (Szegedy et al., 2013). The different types of attacks affect the efficiency of defense due to the game-theoretical nature of the adversarial perturbation problem. If the attack is blind to the classification model, e.g., a Gaussian attack adding Gaussian noise, then image restoration techniques like denoising could provide an effective 'defense'. Thus model-specific attacks are of more application interest than model-blind ones. The current manuscript did not address the specific noise type being used to perturb the image. It is unlikely that the local learning techniques proposed in the paper can work on many kinds of perturbations, especially the 'attacks', which are model-specific.\n\nThe proposed methodology is a feature concatenation of local (low-level) features of image data and deep features. Given the current state of the manuscript, the level of methodological novelty and the scope of input perturbations that the method can be made robust against both appear to be limited.\n\nReferences:\nChristian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.\"}", "{\"comment\": \"Yes, it should be STDP. Thank you for pointing that out. I apologize for the confusion.\", \"title\": \"Re: A spell error.\"}", "{\"comment\": \"Hi, when I read your paper, I occasionally found that on page 4 there is STPD instead of STDP, so it may be a little mistake or something?\", \"title\": \"A spell error.\"}" ] }
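The record above repeatedly refers to a frequency-dependent stochastic STDP rule without spelling it out. Below is a minimal sketch of what such an update could look like; the probability law, the decay constant `tau`, the reference rate `f_ref`, and the helper name `stochastic_stdp_update` are illustrative assumptions for this note, not the formulation from the paper under review.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_stdp_update(w, dt, pre_rate,
                           a_plus=0.01, a_minus=0.012,
                           tau=20.0, f_ref=20.0,
                           w_min=0.0, w_max=1.0):
    """One hypothetical frequency-dependent stochastic STDP step.

    w        : current synaptic weight
    dt       : t_post - t_pre in ms (positive => causal pair)
    pre_rate : pre-synaptic firing rate in Hz; higher input rates
               make the update more likely to be applied
    """
    # Probability of applying the update decays with |dt| and is
    # scaled by the (normalized) pre-synaptic frequency.
    p_apply = np.exp(-abs(dt) / tau) * min(pre_rate / f_ref, 1.0)
    if rng.random() >= p_apply:
        return w  # update skipped: this is the stochastic part
    # Deterministic STDP magnitude: potentiate causal pairs,
    # depress anti-causal ones.
    dw = a_plus if dt > 0 else -a_minus
    return float(np.clip(w + dw, w_min, w_max))

# Example: a causal pair (post fires 5 ms after pre) at a 30 Hz input.
w = 0.5
for _ in range(100):
    w = stochastic_stdp_update(w, dt=5.0, pre_rate=30.0)
print(w)  # the weight drifts upward on average
```

Making the update probabilistic in the spike-timing gap is one plausible way to keep weights from locking onto noisy, weakly correlated spike pairs, which is consistent with the robustness argument made in the authors' replies.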
B1x1MerYPB
Putting Machine Translation in Context with the Noisy Channel Model
[ "Lei Yu", "Laurent Sartran", "Wojciech Stokowiec", "Wang Ling", "Lingpeng Kong", "Phil Blunsom", "Chris Dyer" ]
We show that Bayes' rule provides a compelling mechanism for controlling unconditional document language models, using the long-standing challenge of effectively leveraging document context in machine translation. In our formulation, we estimate the probability of a candidate translation as the product of the unconditional probability of the candidate output document and the ``reverse translation probability'' of translating the candidate output back into the input source language document---the so-called ``noisy channel'' decomposition. A particular advantage of our model is that it requires only parallel sentences to train, rather than parallel documents, which are not always available. Using a new beam search reranking approximation to solve the decoding problem, we find that document language models outperform language models that assume independence between sentences, and that using either a document or sentence language model outperform comparable models that directly estimate the translation probability. We obtain the best-published results on the NIST Chinese--English translation task, a standard task for evaluating document translation. Our model also outperforms the benchmark Transformer model by approximately 2.5 BLEU on the WMT19 Chinese--English translation task.
[ "machine translation", "context-aware machine translation", "bayes rule" ]
Reject
https://openreview.net/pdf?id=B1x1MerYPB
https://openreview.net/forum?id=B1x1MerYPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "3cdMnyN0iP", "rye7MN6qjH", "BklPU1CfjH", "BJgcb06GsB", "Hyxf1RTfjB", "Skxn36afjr", "HJlp8ls19B", "Sk97bk0tS", "r1lfMoQ_YH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798742005, 1573733387359, 1573211983018, 1573211650102, 1573211610151, 1573211571905, 1571954772648, 1571840289517, 1571465994101 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2158/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2158/Authors" ], [ "ICLR.cc/2020/Conference/Paper2158/Authors" ], [ "ICLR.cc/2020/Conference/Paper2158/Authors" ], [ "ICLR.cc/2020/Conference/Paper2158/Authors" ], [ "ICLR.cc/2020/Conference/Paper2158/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2158/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2158/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors propose using a noisy channel formulation which allows them to combine a sentence level target-source translation model with a language model trained over target side document-level information. They use reranking of a 50-best list generated by a standard Transformer model for forward translation and show reasonably strong results. The reviewers were concerned about the efficiency of this approach and the limited novelty as compared to the sentence-level noisy channel research Yu et al. 2017. The authors responded in depth, adding results with another baseline which includes backtranslated data. I feel that although this paper is interesting, it is not compelling enough for inclusion in ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you\", \"comment\": \"Thanks for your clarifications.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review.\\n\\nRegarding the perceived lack of novelty to Yu et al., 2017, please see the response to Reviewer 2.\\n\\nRegarding baselines. We do have sentence-level LM results + sentence level proposal results (see Table 2). Regarding the Xia et al. (2017) method \\u2014 which we argue would be a benchmark, not a baseline \\u2014 we have provided some back translation results (see response to Reviewer 3), and we can add to the paper. Since back translation techniques are closely related, it does not seem important to add these techniques, especially since the Xia et al. method has not yet been established for document level translation (although it no doubt could be used for this), further it is not obvious that it would provide a convenient way to exploit the kind of data that we wish to exploit (monolingual documents, parallel sentences). The goal of this paper, and why readers should read it, is using a theoretically motivated approach to the document translation problem that solves a central data problem in MT, and characterizing its performance relative to some representative baselines.\\n\\nRegarding the difference in performance between document and sentence LMs. As we discuss in the paper, with citations to much prior work, the impact of fixing problems related to cross-sentence consistency has a minimal impact on BLEU, but the impact on human judgments can be much more significant. For this reason we also carried out a human evaluation, where the document reranker was favoured two-to-one by our evaluators.\\n\\nRegarding why the proposal model adds no value to the objective. 
The fact that the proposal model does not add new information to the objective is expected if Bayes' rule yields a better estimate of the translation probability than its direct estimation (i.e., the proposal model). Thus, since we believe our component models (channel and language model) to be well estimated, we expected this redundant component to add no value, and we see this result as a confirmation that Bayesian arguments are trustworthy in this domain (deviations could be expected for a variety of reasons: e.g., poorly calibrated probability distributions, or parameters chosen to maximize BLEU rather than to minimize the cross entropy under the posterior distribution). We will clarify this point in the paper.\n\nRegarding the effects of ensembling. We have compared ensembling 2 zh->en + 1 en->zh models:\n\n+---------------------------------------------------------------+-------+\n| Model                                                         | MT06  |\n+---------------------------------------------------------------+-------+\n| ensemble                                                      | 50.04 |\n+---------------------------------------------------------------+-------+\n| Sent-reranker (sent-level transformer as the proposal model)  | 50.29 |\n+---------------------------------------------------------------+-------+\n| Doc-reranker (sent-level transformer as the proposal model)   | 50.93 |\n+---------------------------------------------------------------+-------+\n\nThe noisy channel approach outperforms ensembling in BLEU, as has been discussed in previous work on noisy channel approaches showing this isn\u2019t just an effect of ensembling. Also, notably, ensembling sentence-level models will not address the document translation problem, nor will it enable us to use monolingual text data, which are two benefits that our technique has.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review.\\n\\nRegarding the differences from Yu et al. (2017). While both papers indeed use a noisy channel decomposition, the novelty in this paper is the theoretical justification for training a model using only parallel sentences and monolingual documents, and then using it to infer document translations (an important task!). This asymmetry in available data is exactly the situation that exists in the world today, and our model, which addresses it directly and elegantly, will undoubtedly be of general interest. Moreover, while the Yu et al. (2017) model could be used on documents by concatenating their sentences to form a single long sequence, this would not let us use the conditional sentence independence assumptions that give our model the flexibility to use just parallel sentences. Secondarily, the Yu et al. inference algorithm is specialized to their channel model, and it has a quadratic (in the length of the sentence) complexity, which would be prohibitive for sequences longer than a single sentence; in practice our inference technique is much faster. We will clarify these differences in the paper.\\n\\nRegarding whether our approach really needs parallel documents. First, there are two models in this paper: the joint translation model and the proposal model we use to do inference. The joint translation model is only ever trained using parallel sentences.
For inference, we use a proposal model that approximates the posterior, and we compare two variants: one that is trained using just parallel sentences (effectively, we assume independence between translations given the source document) and one that is trained with document context (see Table 2). As predicted, a proposal model that more closely matches the true posterior (i.e., the one with document context) is more effective than one that is less accurate (no document context), but the crucial result is that in both cases, document information has a positive impact on the performance of the system. The secondary result is that search is a hard problem, and while usable approximations exist, this is an important open question. We will clarify this.\"}", "{\"title\": \"Response-part 2\", \"comment\": \"Regarding speed. Search is indeed a hard problem in our model. We intend this paper to ask whether a well-motivated model performs well, and provide a reasonable (if imperfect) inference method. We show that it does work well. Now subsequent work can answer the question of how to make decoding fast. But search is a hard problem that has applications in many areas beyond translation, so this paper adds value to those who would work on this problem. We ourselves intend to work on this now that we know this model is effective, but we also argue that this is a good time to publish these results: others may be interested in knowing about yet another interesting search problem. We will clarify this.\"}", "{\"title\": \"Response-part 1\", \"comment\": \"Thank you for your review.\\n\\nRegarding the question about circumventing the data problem with back-translated documents. While this is a good idea, and there is evidence that it can work well (Junczys-Dowmunt, 2019), it is challenging to train such models well, whereas our model involves straightforward training procedures. Specifically, for back translation to succeed, monolingual data that will be back translated must be carefully selected, and, for good performance, you should filter likely bad translations; the ratio of back translated data and \\u201creal\\u201d data must be balanced, etc. While techniques for doing this are fairly well established for single sentence models, no such established techniques exist for documents. We do have several results that we can add to the paper which we discuss here to convince you that our results are both interesting and \\u201cgood\\u201d.\\n\\nFirst, we did attempt to replicate the technique of Junczys-Dowmunt (2019), but found that in Chinese-English, it was difficult to learn a model that reliably generates the correct number of sentences (contra his findings), which makes a fair comparison challenging. 
But, to give some calibration for the relative power of back translation vs. noisy channel modeling, we did generate a sentence-level proposal model using back translation and compare it to the performance of a sentence-level proposal model trained only on \u201creal\u201d parallel data:\n\n+---+------------------------------------------------------------------+-------+-------+-------+-------+-------+\n|   | Model                                                            | MT06  | MT03  | MT04  | MT05  | MT08  |\n+---+------------------------------------------------------------------+-------+-------+-------+-------+-------+\n| 1 | Transformer baseline (q)                                         | 49.40 | 49.42 | 50.11 | 48.76 | 41.58 |\n| 2 | Backtranslation (q')                                             | 51.11 | 52.12 | 51.82 | 51.10 | 43.15 |\n| 3 | Sent-reranker (using q as proposal)                              | 52.25 | 52.21 | 52.35 | 51.28 | 44.27 |\n| 4 | Doc-reranker (using q as proposal)                               | 52.70 | 52.47 | 52.52 | 51.49 | 44.43 |\n| 5 | Sent-reranker + back translated proposal (using q' as proposal)  | 52.95 | 53.93 | 53.69 | 53.61 | 45.18 |\n| 6 | Doc-reranker + back translated proposal (using q' as proposal)   | 53.56 | 54.80 | 53.94 | 53.86 | 45.85 |\n+---+------------------------------------------------------------------+-------+-------+-------+-------+-------+\n\nFrom these results, we see that while both techniques improve translation, i.e., both (2) and (3) are better than (1), sentence-level back translation (2) is less effective than a noisy channel model reranker (row 3), and, as we showed in the reviewed draft, the doc-reranker is better again (row 4). Since we have a new model q\u2019, we can use it as a proposal model for our noisy channel reranker, effectively using the monolingual data twice. Happily, this improves results even further (rows 5-6). Thus, in addition to the challenges of making back translation work at all, which we believe argues for the value of our model, we have evidence that (a) the noisy channel approach makes better use of monolingual data than back translation does; and (b) using our inference strategy based on reranking samples from a proposal model, samples from a backtranslation-trained proposal model (q\u2019) can be improved further still, providing further evidence that the noisy channel model is well calibrated across a variety of qualities and that it picks up different things than backtranslation does. These results will be broadly of interest to the community, even if we haven\u2019t explored all imaginable back translation configurations.
If you have a specific result that you think is particularly important to make this paper acceptable, please identify it, and we will run the comparison.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe paper describes a noisy channel approach for document-level translation which does not rely on parallel documents for training. The approach relies on a sentence-level translation model (from the target to the source language) and a document-level language model (on the target language), each of which is trained separately. For decoding, the paper relies on another proposal model (i.e., a sentence-level translation model from source to target) and performs beam search weighted by a linear combination of the scores of all three models. Experiments show strong results on two standard translation benchmarks.\\n\\nComments:\\n- The proposed approach is strongly based on the neural noisy channel model of Yu et al. 2017 but mainly extends it to context-aware translation. While the paper is referenced, I believe more emphasis should be put on the differences of the proposed approach.\\n- It seems that the Document Transformer uses parallel documents to train, so I am wondering if you can still claim that your approach does not require parallel documents.\\n- In general, I think the paper is well written and the results are compelling.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a simple approach for document-level machine translation. The idea is to use a language model on the target side and a reverse translation model to choose the best document-level translation. This is theoretically justified by Bayes\\u2019 rule and the assumption that the sentences are conditionally independent. The authors implement this idea using a reranking model that rescores 50 candidate translations generated by a standard Transformer model for forward translation.\\n\\nThis is interesting work and the experimental results demonstrate the effectiveness of the approach. However, I am concerned about the (missing) comparison between the proposed approach and an approach that combines backtranslation and a document-level translator (e.g. Doc-transformer). It seems to me that one could backtranslate a large monolingual corpus and use the resulting parallel documents as additional training data for a document-level translation model. How does the proposed approach compare to such a backtranslation approach?\\n\\nAnother concern is the speed of translation. It seems to me that the computational cost required for generating 50 candidates and reranking them is quite high. I would like to see some experimental results on the actual speed of translation.
The aforementioned backtranslation approach should not have this problem, which also makes me unsure about the usefulness of the proposed approach in practice.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"** Paper summary **\\nIn this paper, the authors propose a new re-ranking mechanism leveraging document-level information. Let X and Y denote two languages for ease of reference. The authors focus on X->Y translation, and Y->X is a model used for re-ranking. Specifically,\\n(1)\\tTwo translation models X->Y and Y->X are trained, where X->Y is a document Transformer and Y->X is a sentence Transformer.\\n(2)\\tTrain a language model P(Y) on a document-level corpus (rather than a sentence-level LM).\\n(3)\\tGiven a document with $I$ sentences (x^1, \\u2026, x^I), translate each source sentence $x^i$ into K candidates.\\n(4)\\tUse beam search guided by Eqn.(4) to find optimal translation paths, scored by a combination of the X->Y translation model, the Y->X translation model, the document-level language model, and the number of words.\\nThe authors work on NIST Chinese-to-English translation and WMT\\u201919 Zh->En translation to verify the proposed algorithm.\\n\\n** Novelty **\\nThe novelty is limited. Compared to the paper \\u201cThe Neural Noisy Channel\\u201d (Yu et al., 2017), the authors use a document Transformer and a document-level LM for re-ranking, which is of limited novelty. \\n\\n** Details **\\n1.\\tSome baselines are missing from this paper: (A) the dual inference baseline [ref1]; (B) X->Y is a sentence-level Transformer and the LM is a sentence-level LM, i.e., (Yu et al., 2017), where P(Y|X) and P(X|Y) are sentence-level translation models.\\n2.\\tIn Table 1, the improvement of the doc-reranker is not very significant compared to the sent-reranker, ranging from 0.21 to 0.66. \\n3.\\tIn Table 4, \\u201cChannel + LM\\u201d and \\u201cProposal + Channel + LM\\u201d achieved almost the same results. Does it mean that the \\u201cproposal\\u201d component does not work?\\n4.\\tMany models are used in this framework. I am not sure whether simple re-ranking or an ensemble can outperform this baseline, e.g., 2 Zh->En + 1 En->Zh.\\n\\n[ref1] Dual Inference for Machine Learning, Yingce Xia, Jiang Bian, Tao Qin, Nenghai Yu, Tie-Yan Liu, IJCAI\\u201917\"}" ] }
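Review #2 in the record above summarizes decoding as beam search weighted by a linear combination of three model scores, and Review #1 lists the same combination plus a word-count term in its step (4). A minimal sketch of the per-candidate reranking score is given below; the weights `lam_*` and the function name `rerank_score` are placeholders for illustration, assuming the three log-probabilities have already been computed by the respective models, and this is not the authors' code.

```python
def rerank_score(log_q, log_channel, log_lm, n_words,
                 lam_q=1.0, lam_ch=1.0, lam_lm=0.5, lam_len=0.2):
    """Linear combination of proposal, channel, and LM scores.

    log_q       : log q(y | x), direct (proposal) translation model
    log_channel : log p(x | y), reverse ("channel") translation model
    log_lm      : log p(y) under the (document) language model
    n_words     : word-count bonus to counteract the LM's bias
                  toward short outputs
    """
    return (lam_q * log_q + lam_ch * log_channel
            + lam_lm * log_lm + lam_len * n_words)

# Rerank K candidate translations of one source sentence.
candidates = [
    # (candidate text, log q(y|x), log p(x|y), log p(y))
    ("translation a", -4.1, -5.0, -12.3),
    ("translation b", -4.5, -4.2, -10.9),
]
best = max(candidates,
           key=lambda c: rerank_score(c[1], c[2], c[3],
                                      len(c[0].split())))
print(best[0])
```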
BJxyzxrYPH
Deep geometric matrix completion: Are we doing it right?
[ "Amit Boyarski", "Sanketh Vedula", "Alex Bronstein" ]
We address the problem of reconstructing a matrix from a subset of its entries. Current methods, branded as geometric matrix completion, augment classical rank regularization techniques by incorporating geometric information into the solution. This information is usually provided as graphs encoding relations between rows/columns. In this work we propose a simple spectral approach for solving the matrix completion problem, via the framework of functional maps. We introduce the zoomout loss, a multiresolution spectral geometric loss inspired by recent advances in shape correspondence, whose minimization leads to state-of-the-art results on various recommender systems datasets. Surprisingly, for some datasets we were able to achieve comparable results even without incorporating geometric information. This puts into question both the quality of such information and current methods' ability to use it in a meaningful and efficient way.
[ "Geometric Matrix Completion", "Spectral Graph Theory", "Functional Maps", "Deep Linear Networks" ]
Reject
https://openreview.net/pdf?id=BJxyzxrYPH
https://openreview.net/forum?id=BJxyzxrYPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hdnAi6jhjZ", "S1e0phdhoH", "HJxIzuSOjB", "Byg11OH_sH", "HkeYUUBdjr", "S1gT7LHdoH", "SkgW4BB_oB", "SyxRPVxfqB", "HJgg1pKRtB", "HJlk4jujYB", "HyxruY5QOH", "H1l2FMPGOB", "Hylfk4HzOB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment", "official_comment", "comment" ], "note_created": [ 1576798741975, 1573846213820, 1573570574205, 1573570519201, 1573570128789, 1573570084957, 1573569833414, 1572107365805, 1571884248510, 1571683110948, 1570117996792, 1570038403801, 1570030554471 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "ICLR.cc/2020/Conference/Paper2157/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2157/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2157/AnonReviewer2" ], [ "~Abhishek_Sharma1" ], [ "ICLR.cc/2020/Conference/Paper2157/Authors" ], [ "~Abhishek_Sharma1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a multiresolution spectral geometric loss called the zoomout loss to help with matrix completion, and show state-of-the-art results on several recommendation benchmarks, although experiments also show that the result improvements are not always dependent upon the geometric loss itself.\\nReviewers find the idea interesting and the results promising but also have important concerns about the experiments not establishing how the approach truly works. Authors have clarified their explanations in the revisions and provided requested experiments (e.g., on the importance of the initialization size), however important reservations re. why the approach works are still not sufficiently addressed, and would require more iterations to fulfill the potential of this paper.\\nTherefore, we recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Revised paper\", \"comment\": \"Dear reviewers & ACs,\\n\\nWe have uploaded a revised version of our paper according to the discussions below.\", \"the_following_updates_have_been_made\": \"(1) We improved the introduction, eliminated redundancies and added some motivation.\\n(2) We updated the paper\\u2019s main contributions.\\n(3) We added the missing definitions, i.e., product graph.\\n(4) We included a study on the effect of the scale of the initialization.\\n(5) We filled in the missing results for the FM method and updated the results obtained with DMF to account for the initialization.\\n(6) We included a study on the effect of the rank and the number of samples on the reconstruction error for both DMF and SGMC.\\n(7) We added a discussion summarizing the results of these studies. In particular, conjecturing that the superb performance of SGMC is related to the fact that we work in an extremely data-poor regime and therefore the implicit regularization of DMF is not enough.\\n(8) We included a link to a simplified version of the code in the form of a jupyter notebook.\\n\\nWe thank you for your time reviewing our paper and for the useful comments. 
It helped us improve our paper and reconfirm our reported results.\n\nRegards,\nThe authors.\"}", "{\"title\": \"Reply to reviewer #2 (1/2)\", \"comment\": \"Dear reviewer #2,\\n\\nThank you for your comments! In what follows, we will try to address in detail the issues you raised:\\n\\n(0) Regarding scalability of the spectral decomposition: This is an issue we acknowledged in the paper. Despite this shortcoming, our method is useful for a variety of mid-size problems, and we believe that the scalability issue does not take away from its theoretical and practical merits.\\n\\nRegarding the quality of the available geometric model: First, one of the main purposes of our paper is to reflect on the quality of geometric matrix completion (GMC) methods. We raise the question of whether the models used by GMC methods are truly useful, or whether their results are due to some underlying phenomena (e.g., an implicit regularization inherent in gradient descent). Indeed, our experimentation shows that results on par with state-of-the-art GMC can be obtained even with simple baselines such as deep matrix factorization (Arora et al.). We believe that these observations following from our extensive experimentation provide an important contribution to the community.\\n\\nSecond, it is not always difficult to obtain accurate geometric models. As we acknowledge in the paper, our intuition and inspiration come from shape analysis. In this field the geometric models are usually quite accurate. Focusing on such cases allowed us to develop the geometric intuition behind our method and attribute its success (or failure) to the quality of the available geometric model.\\n\\nThird, there is some merit in starting with a crude geometric model and refining it with our optimization. This crude geometric model can be obtained via pre-processing, e.g., a supervised learning approach, and refined on the actual data. In the following link we provide a toy example that allows one to investigate what happens when there is a mismatch between the graphs used to generate the data and the graphs used to estimate it. There is clear evidence that learning the graphs (i.e., by \\u201crotating\\u201d the Laplacian bases) improves the estimation (compared to using the graphs as is):\\nhttps://colab.research.google.com/drive/1OkNEiTHok14gcVf3NxFIbAFutDN6-Tx6\\n\\n(1) Our model tacitly assumes a particular form of an (approximately) low-rank matrix: a matrix that is composed of a linear combination of the first harmonic vectors of some product graph (i.e., those corresponding to low frequencies). We further simplified this assumption and assumed this matrix is smooth separately on the rows graph and the columns graph.\\nWhy is this a good model? We believe that information about a particular user can be shared across other similar users (as captured by the graph edges) via a process of smoothing (or diffusion). Following this process, similar users should give similar ratings. In the same way, similar items should have similar ratings. This similarity across neighbouring nodes is captured by the Dirichlet energies for the rows and columns graphs - a small Dirichlet energy corresponds to a smooth function. Please also refer to the explanation we provided to reviewer #1.\\nUsing this model is particularly helpful (empirically) in the data-poor regime in which most real data sets lie. In this regime it is not enough to assume just that the matrix is low rank, despite methods like DMF (Arora et
al.) exploiting complex rank regularizations. An additional assumption on the structure of this matrix turns out to be very helpful, provided that a geometric model (even a mildly crude one) is available. See our cold start analysis (Figure 1) that explores this regime, and our reply to reviewer #3. You are also welcome to play with the number of samples in the aforementioned link and witness the effectiveness of our method in the data-poor regime.\n\n(2) The fact that our model only provides a marginal improvement in the case of a poor geometric model definitely makes sense to us. When the geometric model is poor, there is a small difference (if any at all) between using the graphs and not using them altogether. In these cases there is no clear evidence as to what the contributing factor to the rank regularization is - be it an implicit regularization due to the gradient descent or some other factor. For the Douban dataset, for example (Table 1), we see that the DMF method (Arora et al.) is competitive with the other, more complicated methods. In our opinion, this is due to poor geometry. We further explored this phenomenon (i.e., by perturbing the graphs with increasing noise) in the toy example in the aforementioned link.\"}", "{\"title\": \"Reply to reviewer #2 (2/2)\", \"comment\": \"Experimentation:\\n\\n(1) The number of trainable parameters for our method is the number of elements in $\\\\mathbf{P},\\\\mathbf{C},\\\\mathbf{Q}$. This is chosen according to $p_{\\\\mathrm{max}},q_{\\\\mathrm{max}}$ - a hyperparameter in our setting, reported in Table 5. Overparameterization alone is not enough to produce the empirical improvements we reported. Without explicit regularization it would overfit the training data and perform poorly on the test data, specifically in the data-poor regime. Also see comment (3) of reviewer #3 and the answer we provided.\\n\\nRegarding the other methods: we omitted those details as it is not clear how to compare the number of parameters between methods of ostensibly different nature, and it wasn\\u2019t the focus of our paper.\\n\\n(2) To generate the Synthetic ML-100K dataset we did the following:\\n\\n- We computed the first $k=50$ eigenvectors $\\\\mathbf{\\\\Phi}_k,\\\\mathbf{\\\\Psi}_k$ of $\\\\mathbf{L}_\\\\mathrm{r}$ and $\\\\mathbf{L}_\\\\mathrm{c}$, the Laplacians of the row and column graphs for the ML-100K dataset;\\n\\n- We projected $\\\\mathbf{M}$ onto this subspace, i.e., \\n\\\\[\\n\\\\mathbf{M}_{\\\\mathrm{proj}} =\\\\mathbf{\\\\Phi}_k\\\\mathbf{\\\\Phi}_k^\\\\top\\\\mathbf{M}\\\\mathbf{\\\\Psi}_k\\\\mathbf{\\\\Psi}_k^\\\\top;\\n\\\\]\\n- We performed histogram matching between $\\\\mathbf{M}_\\\\mathrm{proj}$ and $\\\\mathbf{M}$ such that the histogram of entries in $\\\\mathbf{M}_\\\\mathrm{proj}$ is the same as in $\\\\mathbf{M}$. This nonlinear operation increases the rank of the matrix, so it is no longer $k$, but the perturbation to the singular values is small (verified empirically).\\n\\n(3) We do not have the training times for the other methods, as we took the results from the corresponding papers. Our method (SGMC) runs in just a few minutes, depending on the dataset. For example, on ML-100K with the parameters reported in Table 5 it takes about a minute, including the eigendecomposition and excluding the time taken to build the computational graph in TensorFlow.\\n\\n(4) The results of the FM method are poor for the other datasets, so we did not include them.
We will include them in the revised version.\n\nMinor comments:\n(2) Regarding equation (15): $\\odot S$ should appear twice, following from the computation of the gradient. If $S$ is binary (as in our case) then $S\\odot S = S$ and one of the $S$ factors disappears.\n\nWe again thank you for the suggestions and will incorporate your comments into the revised version of the paper.\n\nYours sincerely, \nThe authors.\"}", "{\"title\": \"Reply to reviewer #1 (1/2)\", \"comment\": \"Dear reviewer #1,\\n\\nThank you for your comments! In what follows, we will try to address in detail the issues you raised:\\n\\nOur method comes from geometric considerations rather than an ad-hoc construction. We advocate that focusing on the geometric interpretation can sometimes lead to simplified architectures, hence the title of our paper. In our case, it results in a fully linear network which, in our humble opinion, is a simpler architecture compared to some other competing geometric matrix completion methods. This motivates our subjective claims regarding \\u201ccumbersome and non-intuitive designs\\u201d. The message we were trying to convey was that architectural designs that originated in Euclidean deep learning, such as convolutional layers followed by pointwise non-linearities, might not be the best candidates for other domains. For example, Wu et al. 2019 showed that it is possible to simplify graph neural network architectures with a minor compromise to the end task. We will try to clarify these claims and address your major concerns below:\\n\\n(1) Our inspiration for the method came from problems in shape correspondence. A correspondence problem is a matrix completion problem with some constraints on the matrix. Although it served as an inspiration, it is unnecessary to understand the correspondence problem on shapes in order to understand our method. An intuitive explanation can be given along the following lines:\\n\\nWe are given a rating matrix, where each entry in the matrix is the rating given by a user i to an item j. Our model assumes that similar users should rate items similarly, and similar items should be rated similarly by different users. Similarity between users/items is encoded by some external graphs (constructed, respectively, on the row/column spaces of the matrix). \\nOn a graph, one can define a function, i.e., a vector $\\\\mathbf{x}$ whose entries are values on the nodes of the graph. With some abuse of proper mathematical terminology, we call the function \\u201csmooth\\u201d if its values on adjacent nodes are close. This kind of smooth behaviour is encoded by the projection of the function onto the first eigenvectors of the graph Laplacian $\\\\mathbf{L}$, in the same way that a \\u201csmooth\\u201d function in Euclidean space is composed only of harmonic functions with small frequencies. The eigenbasis of a graph Laplacian is the graph analogue of the Euclidean Fourier basis.\\nSince we have two graphs, we have two such Fourier bases, $\\\\mathbf{\\\\Phi},\\\\mathbf{\\\\Psi}$, and we can treat the rating matrix as an outer product of two functions: one defined on the users graph and one defined on the items graph. Each one of these functions is smooth in its own right.
We therefore write our matrix as $\\mathbf{X} = \\mathbf{\\Phi}\\mathbf{C}\\mathbf{\\Psi}^\\top$.\n\nThe Dirichlet energy,\n\\begin{equation}\n \\mathbf{x}^\\top \\mathbf{L}\\mathbf{x} = \\sum_{(a,b)\\in E}\\omega_{a,b}\\left(x(a)-x(b)\\right)^2,\n\\end{equation}\npenalizes the difference between the function values on adjacent nodes, and therefore minimizing it promotes such smooth functions. So, in principle, one can find smooth functions on a graph by minimizing some data term (e.g., the L2 norm) and regularizing it with some smoothness term such as the quadratic Dirichlet energy. This gives rise to a simple convex problem which can give great results if the graphs are accurate,\n\n(equation 1)\n\\begin{equation}\n\\min_{\\mathbf{C}} \\|\\left(\\mathbf{\\Phi}\\mathbf{C}\\mathbf{\\Psi}^\\top-\\mathbf{M}\\right)\\odot \\mathbf{S}\\|_F^2 + \\mu_rE_{Dirichlet}^r(\\mathbf{C})+\\mu_cE_{Dirichlet}^c(\\mathbf{C}).\n\\end{equation}\nUnfortunately, our graphs are inaccurate since it is hard to model the relationship between users and items. Nevertheless, we would like to enforce our matrix to follow the model described above due to its simplicity, despite the inaccuracies in the graphs. To do that, we assume that the graphs can be \\u201ccorrected\\u201d, and that on the \\u201ccorrected\\u201d graphs the matrix will still be smooth. Since correcting the graphs seems like a hard task, and anyway we are only interested in the representation of the function in the eigenbases of the \\u201ccorrected\\u201d graph Laplacians, we try to directly obtain these bases $\\mathbf{\\Phi}_{new},\\mathbf{\\Psi}_{new}$ by applying a linear transformation to the old bases: $\\mathbf{\\Phi}\\mathbf{P},\\mathbf{\\Psi}\\mathbf{Q}$.\nWe only need to make sure that the new bases are indeed Laplacian eigenbases, and this can be done by requiring them to diagonalize the new Laplacians. Since we don\\u2019t have the new Laplacians, we will use the old ones as proxies.\n\nIf all works according to plan, we will end up with new graphs (which remain latent), on which our unknown matrix $\\mathbf{X}=\\mathbf{\\Phi}\\mathbf{P}\\mathbf{C}\\mathbf{Q}^\\top\\mathbf{\\Psi}^\\top$\nis smooth, and therefore has a low Dirichlet energy. This entire story is captured by equation (10) in our paper, with some additional minor details.\"}", "{\"title\": \"Reply to Reviewer #1 (2/2)\", \"comment\": \"Another interesting observation we made is that our method is essentially an overparameterized deep matrix factorization (DMF) method with some additional structure. DMF has recently been proven (see Arora et al. 2019 and the discussion with reviewer #3) to promote a low rank via the implicit regularization of gradient descent. This is a contributing factor to the success of our method.\\n\\nWe put together a tutorial to allow experimenting with our method in the link below, and we hope you can find it useful to understand the method better:\\nhttps://colab.research.google.com/drive/1OkNEiTHok14gcVf3NxFIbAFutDN6-Tx6\\n\\n(2) The number of available ratings for each dataset is provided in Table 4. If by \\u201csample complexity\\u201d you mean how the test error changes with the size of the training set, we believe that our cold start analysis (Figure 1) provides an answer: we show that the SGMC-Z version of our method is particularly more effective in the data-poor regime than the SGMC and other competing algorithms.
Even after retaining only 5 ratings for more than half the users, we still get competitive results compared, for example, to RGCNN (compare Figure 1 and Table 1).\n\nRegarding the presentation issues: we thank you for the suggestions and will reformulate some parts of the paper according to your recommendations.\n\nYours sincerely, \nThe authors.\"}", "{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Dear reviewer #3,\\n\\nThank you for your comments! In what follows, we will try to address in detail the issues you raised:\\n\\n(1) We believe this is a misunderstanding of the hyperparams involved. While we stated in the paper the full scope of possible hyperparams in this general framework, we limited ourselves to only two settings:\\n\\n(a) SGMC - In this setting we set the weights $w_{ij}$ to 0 at all resolutions except the last one (full resolution). \\n\\n(b) SGMC-Z - In this setting we chose a priori some spectral skip parameters (p_skip, q_skip) and set $w_{ij}=1$ for (i=1+k*p_skip, j=1+k*q_skip), i.e., we sample the parameter space (p,q) on a grid with spacing (p_skip, q_skip), and set $w_{ij}=1$ only on the diagonal of this grid. We did not try to explore any other setting for $w_{ij}$.\\n\\nOverall, the number of hyperparams involved is between 8 and 10 (2 for each energy involved and the p_max/q_max, p_skip/q_skip params). Moreover, we usually define the same hyperparams for the rows and columns energies, so it is about half that number. In future work we will also make some of these parameters, such as p_skip/q_skip, learnable.\\n\\nAlso note that, following our ablation study, the dependence on the hyperparams is quite small (for some even negligible), and it is rather easy to tune them using a validation set.\\n\\n(2) As we noted, the data term of the SGMC is a special form of DMF from Arora et al. But we also introduced two important additional terms: \\n\\nA Dirichlet energy term - promoting smoothness on the (inferred) graphs.\\n\\nA diagonalization term - promoting the new (inferred) bases to be Laplacian eigenbases.\\n\\nThese three terms together provide the geometric interpretation: if we treat the two factors $P,Q$ as corrections to some harmonic bases, and approximately enforce those new bases to also be harmonic bases (i.e., approximately diagonalizing the corresponding graph Laplacians), then we can model our matrix as an approximately bandlimited signal on a new product graph, whose functional space can be spanned by the new bases.\\n\\nFollowing your remark, we acknowledge there is some lack of clarity in the way we presented our approach: we do not provide a geometric interpretation of DMF but rather embed it within a bigger geometric framework. Once the aforementioned two terms are included, the geometric interpretation emerges.\\n\\n(3) We thank you for raising this point. Our intention in including a comparison to DMF was to show how a simple method such as DMF can produce results on par with state-of-the-art geometric methods. This is one of the main messages of our paper - to show how badly the underlying geometry is being used (or how poor the geometry being used is) in geometric matrix completion methods.\\nFollowing your remark, we performed the experiments with DMF again, using a \\u201csmall\\u201d initialization, and indeed we got a large improvement on the synthetic datasets! On the real datasets, however, we did not observe any improvement.
The experiments we report in the following link measure the reconstruction error achieved by each method when the initialization is scaled by $10^{-\\\\alpha}$, $\\\\alpha>0$, as suggested in Li et al.: https://colab.research.google.com/drive/1OkNEiTHok14gcVf3NxFIbAFutDN6-Tx6\\n\\nThis regularization does not necessitate depth, as in DMF, but still allows us to enjoy the implicit regularization inherent to DMF with gradient descent methods.\\n\\nWe have a compelling explanation for the better performance of our method compared to DMF: as Arora et al. report (see Figure 2 in their paper), above a certain number of samples, DMF converges to the minimum norm solution. Below that number it induces a better regularization on the rank of the matrix, which still allows it to recover low-rank matrices. However, in the real datasets we tested on, the number of available samples is way below that threshold, as the rank of those matrices is not that low (see Table 4 in our paper). In this extremely data-poor regime, DMF performs poorly, and the extra information present in the graphs is crucial. This is consistent with our experimentation with the toy problem we shared in the link above, which you can test yourself by changing the number of training samples. For a rank-10 matrix, using more than 30% of the entries allows for a very low reconstruction error with DMF, which outperforms our method (by a small margin). However, when going below 20%, our method demonstrates a clear advantage. We will add a discussion along these lines with relevant plots to the revised version.\\n\\nYours sincerely, \\nThe authors.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims to solve the matrix completion problem by incorporating geometric information. The proposed approach involves using graphs encoding relations between rows (and columns), applying spectral decomposition to these graphs, and using a multi-resolution spectral geometric loss to reconstruct the functional map, which can then be used to directly recover the underlying matrix. The paper evaluates the proposed network on both synthetic and real datasets and shows improvements over the existing geometric methods and convex relaxations.\\n\\nWhile the geometric approach looks interesting and the experimental results seem promising, it is unclear why the proposed approach works, and the comparison with [Arora et al. (2019)] is not fair. Below are the specific comments.\\n\\n(1) The proposed approach (formulation (10)) involves too many parameters (including the weights w in (9)) that need to be tuned. The authors should discuss how to select the parameters after (10). This also raises the question of how practical the proposed approach is.\\n\\n(2) The authors claim the first contribution is to provide the geometric interpretation of deep matrix factorization via the functional maps framework. However, I did not clearly see the interpretation. If it refers to the parametrization of X by \\\\Phi P C Q^T \\\\Psi^T, then it is just a special case of deep matrix factorization, since both \\\\Phi and \\\\Psi are fixed, and P and Q are optimized to be approximately orthonormal.\\n\\n(3) Due to over-parameterization, in general deep matrix factorization would suffer from overfitting. 
That being said, [Gunasekar et al. (2017), Arora et al. (2019)] prove that gradient descent induces implicit regularization if the algorithm is initialized with factors that are very \\\"small\\\". However, in the experiments, both P and Q are initialized as the identity, which is not close to zero. Indeed, it was proved in the following paper that the generalization gap will be proportional to the energy of the initialization, even for matrix factorization.\\n\\nYuanzhi Li, Tengyu Ma, and Hongyang Zhang, Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations.\\n\\n(4) As a follow-up question, without such implicit regularization, it is unclear why the proposed approach does not suffer from overfitting. A discussion along this line is required. Though the authors include the connection to [Arora et al. (2019)], this is not convincing enough since, as explained above, the implicit regularization there depends on the smallness of the initialization.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new method for geometric matrix completion based on functional maps. The proposed algorithm is a simple shallow and fully linear network. Experimental results demonstrate the effectiveness of the proposed method.\\n\\nThe proposed method is new and has shown good empirical results. The paper also points out a new way to interpret matrix completion. On the other hand, the proposed method seems ad hoc and there is no clear evidence why it is better than other baselines except the empirical results. The paper also has some clarity issues, making it hard to understand. I vote for a weak reject of the paper in its current state and would like to increase my score if the following questions can be clearly answered.\\n\\n1.\\tWhy do we need to propose the algorithm? Is it because we have the functional maps technique motivated from shape correspondence, and we can see some connection of such a technique with matrix completion? If so, we surely can have a new algorithm based on such a new technique. But I still cannot understand why the method works, at least in an intuitive way.\\n2.\\tWhat is the sample complexity of the proposed matrix completion algorithm? \\nThe introduction of the paper is poorly written. The first paragraph and the third one both contain some introduction to matrix completion, which results in a lot of redundant information. The second paragraph and the fourth one are redundant in the same way, since they both focus on geometric matrix completion. I think besides introducing what matrix completion is and what geometric completion is, the introduction should focus more on the motivation for proposing the algorithm. However, I can only see some motivation information at the end of the second paragraph (some simple models need to be proposed) and in the fifth paragraph (\\u201cThe inspiration of our paper\\u201d). The introduction needs to be re-organized to provide more useful information about the paper rather than a literature review.\\n\\nThere are some unclear/inaccurate/subjective statements in the introduction. For example, \\u201cSelf-supervised learning\\u201d needs a reference. 
It is not clear why geometric matrix completion generalizes the standard deep learning approaches. What does it mean that \\u201ctheir design is \\u2026 cumbersome and non-intuitive\\u201d? The shape correspondence is never explained until much later in the paper. Also, there are some unclear issues beyond the Introduction. For example, what does \\u201cthe product graph\\u201d mean? All these issues need to be clarified before the paper can be accepted. \\n\\n---------------------------------------------------\\nThank you for the detailed rebuttal. For Q1, it clearly explains how the method works. However, it is still not clear why the method works. I also have another concern after reading the rebuttal: if the shape correspondence is not that important, why make it an important motivation in the paper? For Q2, it would be interesting to see some theoretical results on the sample complexity, rather than an experimental one. The paper would also be much better if the clarity issues were addressed. Even though I do not vote for an accept this time, I am looking forward to a revised version in the future.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel approach for the loss function of matrix completion when geometric information is available. The proposed method consists of two ideas: (1) spectral regularization (i.e., Dirichlet energy) with a re-parameterized basis and (2) a multiresolution spectral loss (i.e., zoomout loss). In addition, the zoomout loss is motivated by the approach for shape correspondence and can be a generalization of the recent matrix completion method (deep matrix factorization). Empirical results show the best performance compared to other recent methods under small-scale datasets. Moreover, the proposed method outperforms when the geometric model is accurate (verified on the synthetic setting), which reflects that the proposed method is a good choice when the graph structures are given.\\n\\nThis work can be a significant contribution as it is a simple linear model but practically performs better than other deep nonlinear networks (e.g., RGCNN). Additionally, the proposed loss functions utilize only the spectral information of the graph structure with novel approaches. However, there are some drawbacks to this work. First, it requires a good-quality geometric model, which is hard to obtain for practical datasets. Second, the proposed method has a scalability issue since it requires eigendecompositions of graph Laplacians (as discussed in the paper). This can be a problem for real and large-scale datasets.\\n\\nOverall, this paper presents a novel approach utilizing graph spectral information with empirical improvements. But, I vote for weak acceptance due to its drawbacks as mentioned above.\\n\\nMain concerns:\\n\\n1. It is not clear why minimizing Dirichlet energy can improve the performance of matrix completion. In the paper, the authors mention that it promotes smooth functions on the graph nodes, but it is not fully clear why smooth functions are good. And how much does the accuracy increase (or decrease) when using the Dirichlet regularization? \\n\\n2. The authors argue that the re-parameterizing of the basis (emerging P and Q) can find a better geometric model (section 2). 
So, it is expected that the proposed method shows a better result when the given geometric model is not accurate. However, the reported empirical results show poor improvements for inaccurate geometric models. Does this make sense?\\n\\nFor experiments:\\n\\n1. What is the number of trainable parameters for each method? Since the proposed method is overparameterized, it is not clear whether the empirical improvements come from the overparameterization or from the proposed loss function. It would be great to report the number of parameters of all the other methods and to set them to similar numbers.\\n\\n2. It is not clear how to generate the synthetic dataset, i.e., projecting a random matrix onto the first few eigenvectors of L_r and L_c. It would be better to give more details.\\n\\n3. What are the training times of the proposed method and other competitors?\\n\\n4. Why are results of FM not reported on the other datasets?\\n\\nMinor comments:\\n\\n1. On page 4, please edit \\u201cWe explore The\\u201d -> \\u201cWe explore the\\u201d.\\n\\n2. In equation (15), writing \\u201c\\\\odot S\\u201d twice seems to be unnecessary.\"}", "{\"comment\": \"Thank you for the quick reply. Looking forward to the code!\", \"title\": \"Re: Source code\"}", "{\"comment\": \"Thank you for your interest in our paper!\\n\\na). We ablated all but the coefficients for the orthogonalization terms. Figure 2 shows the ablation study for the Dirichlet (left) and Diagonalization (right) terms for SGMC, \\nand Figure 3 shows the ablation study for the Dirichlet (left) and Diagonalization (right) terms for SGMC-Z.\\n\\nWe can summarize the importance of the different terms as follows:\\n\\nDirichlet - seems to be rather important.\\nDiagonalization - seems to be moderately important.\\nOrthogonalization - hardly important (that's why we did not include it in our ablation study).\\n\\nIt should be emphasized that the Dirichlet energy is with respect to the new basis and not the old basis (please refer to equation 7). \\nYou can also look at the hyperparameters table (Table 5) and see that the Dirichlet energy was needed. \\n\\nb). The diagonalization energy proposed in Coupled quasi-harmonic bases (Kovnatsky et al.) is essentially the same as ours. The differences are due to notation:\\nWe define off(A) as the off-diagonal entries of A, whereas Kovnatsky et al. define off(A) as the sum of squares of the off-diagonal entries of A.\\nIn our notation, that would be ||off(A)||^2 = sum(i~=j) a^2_{ij}.\\nAlso, Kovnatsky et al. enforce orthogonality (w.r.t. the manifold inner product), whereas we only promote approximate orthogonality via a penalty function.\\nIt should be noted that we are not the first to propose functional map estimation via joint diagonalization. These ideas have been spinning around for some time in the shape analysis community, and we borrowed them for the problem of matrix completion. We acknowledge Litany et al.'s \\\"Fully Spectral Functional Maps\\\", the most up-to-date shape correspondence method based on joint diagonalization, as our source of inspiration.\\n\\nc). We shall provide a link to the source code soon. Stay tuned!\\n\\nThe authors\", \"title\": \"re: Ablation study and source code\"}", "{\"comment\": \"Dear Authors,\\n\\nThis is a very interesting work. Could you please comment on:\\n\\na) the missing ablation study of the various terms in Eq. 10. 
In particular, how critical are the Dirichlet energy terms to the overall performance?\\n\\nb) the difference between the diagonalization proposed here and that in Coupled quasi-harmonic bases (Kovnatsky et al.)\\n\\nc) Do you have plans to release the source code?\", \"title\": \"Ablation study and source code\"}" ] }
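As a brief aside for readers of the exchange above: the smoothness-regularized completion objective (equation 1 in the thread) is compact enough to prototype end to end. What follows is a minimal sketch, not the authors' code — the path-graph Laplacians, truncated basis size, regularization weights, and plain gradient descent are all illustrative assumptions of ours.

```python
import numpy as np

def path_laplacian(n):
    # Unnormalized graph Laplacian L = D - W of a path graph.
    W = np.zeros((n, n))
    i = np.arange(n - 1)
    W[i, i + 1] = W[i + 1, i] = 1.0
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(0)
m, n, k = 60, 40, 10                           # matrix size; truncated basis size
lam_r, Phi = np.linalg.eigh(path_laplacian(m))
lam_c, Psi = np.linalg.eigh(path_laplacian(n))
Phi, Psi, lam_r, lam_c = Phi[:, :k], Psi[:, :k], lam_r[:k], lam_c[:k]

M = Phi @ rng.normal(size=(k, k)) @ Psi.T      # ground truth, smooth on both graphs
S = rng.random((m, n)) < 0.3                   # mask of observed entries

C, mu_r, mu_c = np.zeros((k, k)), 1e-2, 1e-2
for _ in range(2000):                          # plain gradient descent on eq. (1)
    resid = (Phi @ C @ Psi.T - M) * S          # masked residual of the data term
    grad = 2 * (Phi.T @ resid @ Psi
                + mu_r * lam_r[:, None] * C    # row-graph Dirichlet term
                + mu_c * C * lam_c[None, :])   # column-graph Dirichlet term
    C -= 0.1 * grad

err = np.linalg.norm(Phi @ C @ Psi.T - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {err:.3f}")
```

The regularizer gradients use the fact that, with orthonormal truncated eigenbases, the Dirichlet energy of $X = \Phi C \Psi^\top$ reduces to eigenvalue-weighted sums of the squared coefficients of $C$.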
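Relatedly, the initialization-scale effect debated with Reviewer #3 (Gunasekar et al., 2017; Arora et al., 2019; Li et al.) can be reproduced in miniature with a depth-3 factorization. Again a hedged sketch: the $10^{-\alpha}$ scaling mirrors the experiment mentioned in the replies, while the learning rate and iteration count are untuned choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, lr = 30, 2.0, 2e-3                   # factors initialized at 10**-alpha
M = rng.normal(size=(n, 3)) @ rng.normal(size=(3, n))   # rank-3 target matrix
S = rng.random((n, n)) < 0.4                   # mask of observed entries

W1, W2, W3 = (10.0 ** -alpha * rng.normal(size=(n, n)) for _ in range(3))
for _ in range(30000):                         # gradient descent on the DMF data term
    X = W3 @ W2 @ W1
    G = 2 * (X - M) * S                        # gradient w.r.t. X of the masked loss
    g3, g2, g1 = G @ (W2 @ W1).T, W3.T @ G @ W1.T, (W3 @ W2).T @ G
    W3, W2, W1 = W3 - lr * g3, W2 - lr * g2, W1 - lr * g1

X = W3 @ W2 @ W1
print("held-out error:", np.linalg.norm((X - M)[~S]) / np.linalg.norm(M[~S]))
print("singular values > 0.1:", int((np.linalg.svd(X, compute_uv=False) > 0.1).sum()))
```

With the small initialization, gradient descent tends toward a low effective rank and a small held-out error; rerunning with `alpha = 0.0` typically degrades both, which is the implicit-regularization effect under discussion.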
S1e0ZlHYDB
Progressive Compressed Records: Taking a Byte Out of Deep Learning Data
[ "Michael Kuchnik", "George Amvrosiadis", "Virginia Smith" ]
Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size. Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model. We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression. Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.
[ "Deep Learning", "Storage", "Bandwidth", "Compression" ]
Reject
https://openreview.net/pdf?id=S1e0ZlHYDB
https://openreview.net/forum?id=S1e0ZlHYDB
ICLR.cc/2020/Conference
2020
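An aside before the review thread: the abstract's central mechanism — progressive compression arranged so that a byte prefix decodes to a complete but lower-fidelity image — can be emulated with stock Pillow. A rough sketch; the noise image and the byte fractions below are our stand-ins for training examples and scan groups, not the paper's actual record layout.

```python
import io, os
from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True        # allow decoding partial bytestreams

# Noise image as a stand-in for a training example; use a real photo in practice.
img = Image.frombytes("RGB", (256, 256), os.urandom(256 * 256 * 3))
buf = io.BytesIO()
img.save(buf, "JPEG", progressive=True, quality=90)
data = buf.getvalue()

for frac in (0.3, 0.6, 1.0):                  # crude proxies for PCR scan groups
    partial = Image.open(io.BytesIO(data[: int(len(data) * frac)]))
    partial.load()                            # forces the (possibly partial) decode
    print(f"{frac:.0%} of {len(data)} bytes decoded -> {partial.size}")
```

Because early scans of a progressive JPEG carry coarse information for the whole image, truncation yields a blurrier picture rather than a cropped one; PCRs group scans of similar fidelity across many records so that reading a prefix lowers fidelity uniformly.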
{ "note_id": [ "LI7Lo5ef3", "rygN9N_coB", "HkfjQucsr", "SylxdQO5or", "Bke1gmO5sB", "rkgdwz_qsB", "r1ehxrXZoB", "BkeXwO--sB", "SJluQZ2kor", "rkeMVzZJ5r" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741945, 1573713036075, 1573712793517, 1573712743606, 1573712614873, 1573712479595, 1573102836502, 1573095514835, 1573007648375, 1571914282313 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2156/Authors" ], [ "ICLR.cc/2020/Conference/Paper2156/Authors" ], [ "ICLR.cc/2020/Conference/Paper2156/Authors" ], [ "ICLR.cc/2020/Conference/Paper2156/Authors" ], [ "ICLR.cc/2020/Conference/Paper2156/Authors" ], [ "ICLR.cc/2020/Conference/Paper2156/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2156/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper2156/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2156/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Main content: Introduces Progressive Compressed Records (PCR), a new storage format for image datasets for machine learning training.\", \"discussion\": \"\", \"reviewer_4\": \"Interesting application of progressive compression to reduce the disk I/O overhead. Main concern is paper could be clearer about setting.\", \"reviewer_5\": \"(not knowledgable about area): well-written paper. concern is that related work could be better, including state of the art on the topic.\", \"reviewer_2\": \"likes the topic but discusses many areas for improvement (stronger exeriments, better metrics reported, etc.). this is probably the most experienced reviewer marking reject.\", \"reviewer_3\": \"paper is well written. Main issue is that exeriments are limited to image classification tasks, and it snot clear how the method works on larger scale.\", \"recommendation\": \"interesting idea but experiments could be stronger. I lean to Reject.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Revisions To Paper Uploaded (Summary)\", \"comment\": [\"We thank the reviewers for their time and helpful feedback. After carefully considering reviewers\\u2019 comments, we made revisions to our paper to address concerns, and have uploaded an updated version of the paper. A summary of major changes follows:\", \"We have run our experiments using the full 1000 class ImageNet dataset (see Section A.7 \\u201cImageNet-1000 Results\\u201d in our updated submission). Our results indicate that the PCR approach generalizes to larger datasets.\", \"We\\u2019ve added further motivation (see Section A.5 \\u201cRecord Format Conversion Times\\u201d) for this work, including encoding times as well as an updated Figure 1. PCRs offer a natural trade-off between speed and quality, and do so without sacrificing space or time per task.\", \"We\\u2019ve added microbenchmarks (see Section A.3 \\u201cExperiment Setup\\u201d) pertaining to various stages of the pipeline (e.g., computer speeds, decoding overheads). We include the rates of both training (in terms of images/second) and decoding to highlight the potential bottlenecks of the system. 
The results show that while image decoding does add overhead (up to 50% in terms of decoding time), the slowdown does not prevent achieving a training rate close to the maximum possible.\", \"We\\u2019ve added details pertaining to the datasets used in Section A.4 \\u201cDataset Details\\u201d. Since our approach is dependent on the JPEG quality of the dataset, we additionally include quality estimates for each dataset to highlight each dataset\\u2019s compressibility. We\\u2019ve further analyzed the Cars dataset to determine conditions when a dataset would be coarse-grained enough for PCRs to be helpful (see section A.6 \\u201cCoarse Grained vs. Fine Grained Cars Experiments\\u201d). We hope this analysis allows practitioners to reason about the conditions under which image degradation (from compression) is safe. To the best of our knowledge, this is the first work investigating how to calibrate compression parameters for training.\", \"We\\u2019ve added accuracy vs. epoch plots to disentangle the statistical effects of PCRs from the system speedups (see section A.1 \\u201cLoss, Space Savings, And Accuracy per Epoch\\u201d). Our results indicate that most of the improvements come from faster image rates.\", \"Once again, thank you for your valuable feedback!\"]}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"We thank the reviewer for their positive assessment of our work.\\n\\nIn terms of state-of-the-art, we are not aware of any other works that use progressive compression in the context of machine learning, and to the best of our knowledge, the intersection of storage and machine learning is also poorly explored.\\n\\nCurrent methods typically avoid disk I/O entirely by caching datasets in RAM. While this is a viable approach for small datasets, not all datasets can fit in RAM and thus we focus on large datasets. To the best of our knowledge, TFRecord and RecordIO can be considered state-of-the-art in the large dataset domain. Both TFRecord and RecordIO are implementations of the same idea, which is a record layout. For this reason, we compared against full-quality images stored in batch form (i.e., record layouts), which resembles state-of-the-art record formats (e.g., RecordIO, TFRecord) without relying on the implementation details of those highly-engineered formats, which may obscure the comparison.\"}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"Thank you for your detailed review and feedback. We have uploaded changes addressing the issues you raise, including updating Figure 1 to provide a more intuitive description of the setting of this work. Other changes are described below.\\n\\n[Small Datasets]\\nYes, our work targets large datasets that cannot fit in RAM. We used smaller datasets in order to rigorously evaluate our method within our computational budget. However, we understand the importance of validating the method at scale, and have therefore also run our experiments on the full ImageNet dataset and provided those results in Appendix A.7 \\u201cImageNet-1000 Results\\u201d. Thank you for this suggestion.\\n\\n[Sampling]\\nPrevious work uses data partitioning to allow each worker to hold a subset of the dataset. Data partitioning is an orthogonal optimization to ours; one can use PCRs with data partitioning. However, some users of data partitioning (e.g., Kurth et al., 2018) apply an additional optimization on top, which is to cache the partitions in memory or on a local SSD. 
In particular, they partition the 3.5TB dataset such that an 800 GB SSD can store each partition; this means each worker samples from a subset of the dataset (i.e., there is 0 probability of sampling certain images for any fixed worker). While sacrificing some sampling guarantees is common (e.g., record formats do it by correlating the samples drawn within records), static partitioning is one of the more extreme tradeoffs and relies on creating representative partitions. Additionally, static partitioning only works when the data is small enough to fit in a cluster\\u2019s aggregate memory or fast storage. In this case, 800 GB per node represents a hard limit after which a distributed file system would have to be used.\\n\\n[ImageNet Training]\\nIt is common practice to shard the dataset among each node, which allows the dataset to be collectively stored in a fast cache. For instance, Mikami et. al. 2018 state that data is partitioned between each worker (see Remark 4). Ying et al. 2018 do not report exactly how they do the data movement, but they have 409 TB of RAM among the utilized nodes, while ImageNet is only 150GB. If data is really in memory (or on a local fast SSD), then there are less concerns for I/O bottlenecks. However, as noted above, this approach is limited by the available space for storing the dataset.\\n\\n[Data Augmentation]\\nWe mentioned data augmentation in the paper to note that some data augmentation methods degrade image quality (e.g., with random noise or blur). If the image quality is already degraded, then it is intuitive that more compression may be tolerable during training (e.g., compression followed by blur would look similar to non-compressed followed by blur). Similarly, downscaling an image can reduce the artifacts introduced by compression (e.g., a low quality 4K image can be resized to a high-quality 256x256 image). Therefore, we are simply noting that some tasks may tolerate higher levels of compression given a particular set of data augmentations.\\n\\n[Number of Scans]\\nWe use 10 scans as that works for our experiments and is the default number used by the transcoder that we use. Adding more scans allows one to more finely trade off image quality vs. I/O bandwidth, but in practice we do not expect needing more than 10 scans (in fact, we only use 4 distinct settings).\\n\\n[Ceph]\\nWe use Ceph as it is an open source, widely deployed distributed store. Ceph\\u2019s metadata overhead is limited, as each node can determine what node the data is stored on. It\\u2019s also worth noting that by using record formats, we are only accessing a relatively small number of distinct files (rather than many individual images). Thus, the metadata overheads are less of a concern. We use a 40Gb Ethernet network to connect all nodes, which should be sufficient for training at this scale.\\n\\n[Additional Tasks and Modalities]\\nWe agree that additional tasks (e.g., segmentation) and modalities (e.g., audio, video, text) would be interesting directions for future work. For this work, we decided to focus on deepening our understanding of object classification by identifying dataset properties which favor compression, such as easy vs. hard tasks (Appendix A.4 \\u201cDataset Details\\u201d, A.6 \\u201cCoarse Grained vs. Fine Grained Cars Experiments\\u201d). Some of this understanding would hopefully transfer to the other tasks.\"}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"Thank you for your detailed review and feedback. 
We respond to each point in more detail below.\\n\\n[Dataset Limitations]\\nWe understand the concerns with generalization of our method to other datasets. To address this, we have run our experiments on the full ImageNet dataset and have provided those results in Appendix A.7 \\u201cImageNet-1000 Results\\u201d. We want to clarify that our work uses the full CelebAHQ dataset (without subsampling). We have also added coarse-grained vs. fine-grained classification results for the Cars dataset in Appendix A.6 \\u201cCoarse Grained vs. Fine Grained Cars Experiments\\u201d to address some of the concerns of compression on convergence. Finally, we\\u2019ve added dataset details in Appendix A.4 \\u201cDataset Details\\u201d to highlight differences in compression quality across datasets. Stanford Cars, for instance, is already highly compressed, and, thus, has reduced benefits from lower scan groups.\\n\\nIn general, our experiments point toward a dependence on the difficulty of the task, regardless of the dataset. Our results are stable across models, and combined with diagnostics like MSSIM, we believe practitioners will be able to properly gauge the appropriate scan group without much tuning. As we discuss in more detail below, the benefit of our approach is that when multiple levels of compression may be appropriate, PCRs obviate the need for storing multiple compressed copies of the dataset.\\n\\n[Alternatives to JPEG]\\nAny progressive image format can work with our method, such as neural network compression. This means, as you noted, that we can use progressive variants of lossless codecs, such as PNG, by subsampling pixels (an interlaced format). However, we chose not to as PNGs have larger file sizes than JPEG (roughly 10x larger, which would conflict with any subsampling gains).\\n\\n[Static Compression]\\nWhile it is true that part of our contribution is measuring the effect of compression on training, there are cases where dynamic compression is important. We\\u2019ve provided conversion times in Appendix A.5 \\u201cRecord Format Conversion Times\\u201d to highlight how costly conversions can be. With many tasks (and thus many distinct compression settings), these conversions can be costly in terms of time. Additionally, practitioners must store at least 2 copies of the dataset (and likely more): one full quality variant (to compress datasets from) and one \\u201caccelerated\\u201d variant that is compressed. PCRs can provide the same benefits without having multiple copies of the dataset. Further, some training tasks may benefit from varying compression needs at runtime (e.g., Progressive GAN training, which uses multiple distinct resolutions of the same dataset https://github.com/tkarras/progressive_growing_of_gans#preparing-datasets-for-training ); these tasks will necessarily require multiple copies per training session.\\n\\n[Convergence Diagnosis]\\nThe reviewer makes a good point that compression can potentially act as a regularizer and improve generalization. However, for our experiments, we do not find this to be the case: When measuring the convergence in terms of accuracy per epoch (rather than time), we observe that lower quality images reduce convergence speed. In contrast, we do observe speedups when measuring wall-clock time, suggesting that the improvements are coming from the reduction in access time and not from improved generalization. We include these accuracy vs. 
epoch results in Appendix A.1 \\u201cLoss, Space Savings, And Accuracy per Epoch\\u201d.\\n\\nOur method attempts to lower training expenses (specifically related to storage bandwidth), which we believe will allow practitioners to utilize the benefits of large-scale machine learning. While alternative techniques (e.g., dataset partitioning) do reduce the randomness in sampling (as discussed in a different reviewer comment (#4)), we view these methods as orthogonal to our technique, which saves bandwidth.\"}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"Thank you for your detailed review and feedback. We have updated the paper to address your feedback on understanding the hardware limits of the used GPUs, image decoding, experiment configuration (i.e., batch size and ImageNet subsampling), clarity of figures/data, and alternatives to JPEG. Below, we provide a detailed response for each item individually.\\n\\n[Hardware Limitations]\\nWe have added a section in the appendix (A.3 Experiment Setup) that shows the image rates achievable with in-memory training. Our loading rates approach these limits as the number of scans approaches 1, which suggests that the workload becomes bottlenecked by model updates (rather than the data pipeline) as the number of scans is reduced.\\n\\n[Image Decoding]\\nIt is correct that decoding progressive formats can be more expensive than non-progressive formats; please note that this overhead is already taken into account in our time-to-accuracy experiments. However, as per the reviewer\\u2019s request, we have added a section in the appendix (see Section A.3 \\u201cExperiment Setup\\u201d) to directly compare baseline JPEG decoding vs. progressive JPEG decoding rates. We observe less than 50% increase in decoding time using 10 scans; future work can look into how to reduce this overhead further. We keep the number of groups constant (the default 10 scans), but progressive compression does not drastically impact file size as the information content is merely re-arranged. A simple way to reduce overhead in the existing implementation is to reduce the number of scans from 10 to 4, as that is all we use for experiments. \\n\\n[Batch Size]\\nThe PCR format is not dependent on the batch size used for training because the same number of images is used for every batch. The number of bytes per batch, however, is reduced. This opens up opportunities, such as increasing the batch size while keeping the number of bytes per batch constant. In our experiments, we typically read multiple (e.g., 10) mini-batches from one record. Longer records are desirable because storage delivers longer, sequential reads faster.\\n\\n[Time to Accuracy and Image Rates]\\nThanks for these suggestions to further clarify our experiments. We have included a table indicating time to convergence (see Section A.2 \\u201cTime to Convergence Table\\u201d) as well as images per second of various scan groups (see Section A.3 \\u201cExperiment Setup\\u201d).\\n\\n[ImageNet]\\nWe have added a section in the Appendix (see Section A.4 \\u201cDataset Details\\u201d) explaining how the 100 class ImageNet was subsampled (it was done in alphabetical order). As noted above, we have also run our experiments on the full ImageNet dataset and provided those results in Appendix A.7 ImageNet-1000 Results. 
The 1000 class results mirror the 100 class results; we see a 2x speedup even with a larger dataset.\\n\\n[Alternatives to JPEG]\\nWe thank the reviewer for helping to point out the generality of our approach. Indeed, any progressive image format, including neural-network-based compression, can work with our method. While WebP is not currently progressive, PCRs could utilize WebP if that capability were added. The main issue with other formats is the lack of existing infrastructure to support them; JPEG is widely used and optimized, and thus the developer effort required to use it in this setting is reduced. As noted by Reviewer #3, we can use progressive variants of lossless codecs, such as PNG, by subsampling pixels (an interlaced format), but we chose not to, as PNGs have larger file sizes than JPEG.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper demonstrates an interesting application of progressive compression to reduce the disk I/O overhead of training deep neural networks. The format encodes the trade-off between data fidelity and I/O bandwidth demand naturally, which could be useful when I/O is the bottleneck.\\n\\nMy major concern is that the paper should be clearer about the setting.\\n* Does your work target the case where data cannot fit in RAM and should be fetched from local disk or through the network? However, the datasets used in the evaluation look small and could fit in RAM.\\n* How are mini-batches created? You mentioned in the related work that previous work (Kurth et al., 2018) lets each worker sample from a local subset instead of performing a true sampling of the whole dataset. Does your work perform true sampling? How much benefit does it give?\\n* Is disk I/O really a bottleneck in training? There is plenty of evidence [1][2][3] of almost linear scalability in training ResNet on *full* ImageNet across hundreds or even thousands of GPUs. These works focus heavily on network communication rather than disk I/O. Does your setting differ from theirs? How does your approach compare with their techniques for optimizing disk I/O?\\n\\nThat being said, I think this approach should be appealing when the I/O bandwidth is limited and dynamic. Examples include training on edge devices, or federated training where data needs to be fetched via an ad-hoc network.\\n\\nOther detailed comments:\\n* Figure 1 is not very informative and quite puzzling. There is no definition of quality at that point.\\n* Sec 2 paragraph 3. What is the issue of data augmentation with the standard JPEG compression? Does your compression ease data augmentation?\\n* Sec 3.1 paragraph 1. \\\"This is turn enables ...\\\" -> \\\"This in turn enables ...\\\"\\n* How to decide the number of scans? Does it have an impact on the I/O efficiency?\\n\\nEvaluation:\\n* I'm not familiar with Ceph. Why choose this particular environment? Does it bring in extra overhead (e.g., communicating with a metadata server)? What does the network topology look like? 
Is the data loading stall (Figure 7) due to network congestion?\\n* It is worth evaluating more tasks such as detection and segmentation to measure the impact of compression.\\n\\n[1] Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, Goyal et al.\\n[2] Massively Distributed SGD: ImageNet/ResNet-50 Training in a Flash, Mikami et al.\\n[3] Image Classification at Supercomputer Scale, Ying et al.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper introduces Progressive Compressed Records (PCR), an on-disk format for fetching and transporting training data, in an attempt to reduce the storage bandwidth overhead of training large-scale deep neural networks. This is a well-written paper that includes all the required background and related work, as well as an easy-to-understand example that runs through the manuscript, explaining what the reader needs to know in order to appreciate the work. The empirical results of several experiments show that PCRs require up to two times less storage bandwidth while retaining model accuracy.\\n\\nMy only concern is that although the related work section provides a thorough survey of the current methods in the literature, the authors did not demonstrate the performance of the state of the art and compare against it. I believe this is necessary to truly validate the superiority of their method over the state of the art.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes using progressive encoding of images and a re-arrangement of the data blocks in images to improve reading speed and therefore training speed.\\n\\nTo fully analyze the maximum possible speed of training, it would be great to measure the upper bound of images/sec when avoiding reading from disk and just using images from memory. \\n\\nDecoding a typical progressive JPEG image usually takes about 2-3 times as much time as decoding a non-progressive JPEG at full resolution; analyzing the time to read vs. the time to decode the images would be great. It is not clear how changing the total number of groups would affect the image size and the reading speed.\\n\\nBased on the current experiments, it is not clear what the impact of the batch size is when creating PCRs and when reading the image blocks, or what the impact of the batch size is on the training speed.\\n\\nFigure 3 is really hard to read and to compare times to convergence in; the authors should provide a table with times to X% accuracy. Although time to convergence is the key metric, it would be great to know the difference in images/sec across the different settings.\\n\\nUsing ImageNet with 100 classes (it is not clear how the 100 classes were chosen) instead of the usual 1000 classes can distort the results, since it is not clear whether higher resolution would be needed to distinguish more classes or not.\\n\\nHave the authors considered other image compression formats like WebP? 
How tied is the proposed record encoding to the image compression?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This paper introduces a new storage format for image datasets for machine learning training. The core idea is to use progressive JPEG to create sequential scans of the input image, from lower resolution to higher resolution. The authors found that on some datasets, using half of the scans is already enough to reach similar accuracy, while speeding up convergence by a factor of 2.\\n\\nDetailed feedback:\\n* The paper presents a simple idea that directly uses the nature of JPEG compression. The paper shows that it can work well and can potentially be integrated into real machine learning dataset storage applications.\\n* The related work section is thorough.\\n* The experiments are limited to image classification, and some of the datasets are subsampled (e.g. ImageNet and CelebA). This may not well represent real machine learning tasks, and practitioners may be unsure about the reliability of the compression. The \\u201cCars\\u201d dataset contains fine-grained classification, in which the proposed method is\\n* It is not very clear from Figure 1 what the key advantage of the proposed method is, and what the different mechanisms are.\\n* Alternatively, one can subsample the pixels and store incremental subsets of those pixels. It would be good if the paper could discuss this baseline.\\n* The data storage format is only loosely related to the main goal of the paper, which is to show that the network can still train very well, and even faster, when receiving partial input data. Once they figured out the number of scans needed for this application, they don\\u2019t necessarily need to keep a full lossless version and can just go for a lossy version. In other words, the experiment section could be replaced by any other lossy compression by varying the compression ratio.\\n* In my opinion, there could be two reasons for faster convergence: 1) lowered image quality makes the data easier to learn, and 2) the smaller data size allows faster reading of data from disk. The paper only shows wall-clock speed-up, but it is unclear which factor is bigger. 2) can potentially be addressed by faster disk reading such as SSDs or in-memory datasets. One of the motivations is to help parallel training, and it is also mentioned how non-random sampling of data can hurt training performance. It would be good to showcase how the proposed method can help in those parallel training settings.\\n\\nConclusion: This paper presents a simple and effective idea and can be potentially beneficial. However, my main concern is whether the experiments can be representative enough for large-scale settings (e.g. using the non-subsampled ImageNet dataset with parallel training using SSD storage). Therefore, my overall rating is weak accept.\"}" ] }
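On Reviewer 2's decode-cost point above (progressive JPEGs take longer to decode than baseline ones), the overhead is easy to measure for oneself. A small timing harness, under the assumption that only the relative baseline-vs-progressive numbers matter:

```python
import io, os, time
from PIL import Image

img = Image.frombytes("RGB", (256, 256), os.urandom(256 * 256 * 3))

def encode(progressive):
    buf = io.BytesIO()
    img.save(buf, "JPEG", progressive=progressive, quality=90)
    return buf.getvalue()

for name, data in (("baseline", encode(False)), ("progressive", encode(True))):
    start = time.perf_counter()
    for _ in range(200):
        Image.open(io.BytesIO(data)).load()   # full decode each iteration
    elapsed = time.perf_counter() - start
    print(f"{name:11s}: {200 / elapsed:8.1f} images/s ({len(data)} bytes/image)")
```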
SJxTZeHFPH
The Intriguing Effects of Focal Loss on the Calibration of Deep Neural Networks
[ "Jishnu Mukhoti", "Viveka Kulharia", "Amartya Sanyal", "Stuart Golodetz", "Philip Torr", "Puneet Dokania" ]
Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard for downstream components to trust. Ideally, we want networks to be accurate, calibrated and confident. Temperature scaling, the most popular calibration approach, will calibrate a DNN without affecting its accuracy, but it will also make its correct predictions under-confident. In this paper, we show that replacing the widely used cross-entropy loss with focal loss allows us to learn models that are already very well calibrated. When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to theoretically justify the empirically excellent performance of focal loss. We perform extensive experiments on a variety of computer vision (CIFAR-10/100) and NLP (SST, 20 Newsgroup) datasets, and with a wide variety of different network architectures, and show that our approach achieves state-of-the-art accuracy and calibration in almost all cases.
[ "focal loss", "calibration", "accuracy", "intriguing effects", "deep neural networks", "model", "confidence", "temperature scaling", "correct predictions" ]
Reject
https://openreview.net/pdf?id=SJxTZeHFPH
https://openreview.net/forum?id=SJxTZeHFPH
ICLR.cc/2020/Conference
2020
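Since the reviews and responses below lean heavily on ECE and temperature scaling, a compact reference sketch may help readers from outside the calibration literature. The 15 equal-width bins and the NLL grid search are conventional assumptions of ours, not details taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    # ECE: |mean confidence - accuracy| per bin, weighted by bin occupancy.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

def temperature_scale(logits, labels, temps=np.linspace(0.5, 5.0, 91)):
    # Grid-search the temperature that minimizes NLL on a validation set.
    best_t, best_nll = 1.0, np.inf
    for t in temps:
        z = logits / t
        m = z.max(1, keepdims=True)
        logp = z - m - np.log(np.exp(z - m).sum(1, keepdims=True))
        nll = -logp[np.arange(len(labels)), labels].mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

Here `confidences` would be the per-sample maximum softmax probability and `correct` a boolean array marking whether the argmax prediction equals the label; temperature scaling is fit on held-out logits and leaves accuracy unchanged.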
{ "note_id": [ "ka03bK43qM", "BylPp3Khir", "HJx9FhKnsr", "rylRbCI3sS", "Bkx_ohZijr", "rkgcyqZooH", "S1ejArljsB", "HJgsDVgojr", "r1gc7GgiiB", "ryekOgxijS", "HygboAyjjS", "S1exU3JjiS", "H1xq5ikijH", "H1xIA9yosr", "SkePAgQ9KH", "r1gDtRmDFS", "SJlkRDV6ur" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741917, 1573850303034, 1573850241999, 1573838342088, 1573751968459, 1573751266168, 1573746131284, 1573745763304, 1573745186496, 1573744742639, 1573744280908, 1573743688491, 1573743505769, 1573743310128, 1571594447360, 1571401343203, 1570748358548 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/Authors" ], [ "ICLR.cc/2020/Conference/Paper2154/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2154/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2154/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper investigates the effect of focal loss on calibration of neural nets.\\n\\nOn one hand, the reviewers agree that this paper is well-written and the empirical results are interesting. On the other hand, the reviewers felt that there could be better evaluation of the effect of calibration on downstream tasks, and better justification for the choice of optimal gamma (e.g. on a simpler problem setup).\\n\\nI encourage the others to revise the draft and resubmit to a different venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to R1's response (Part 1)\", \"comment\": \"Thank you very much for your quick response, below we provide our replies.\\n\\n>> First, sorry, I used the term bregman divergence in my review when I meant to refer to proper loss (which is a loss where the minimizer is the true probability).\\n\\n-- Thank you for the clarification. We agree that focal loss is not a proper loss function which can also be inferred from the Figure 7 in the appendix. However, as we have mentioned in our response to your comment 1, in our classification experimental setup there is no label noise, and we train on one-hot encodings. Hence, the expected minimizer of focal loss comes out to be $x = p$. We will make it clear in our final submission.\\n\\n>>The point about focal loss being cross entropy + regularization on the entropy of the predictive distribution is interesting, thanks for including it. 
\\n\\n-- Thank you.\\n\\n\\n>> It still feels weird to me that the paper claims that focal loss improves calibration when in fact focal loss + perfect optimization in the scenario where we have label noise will in fact calibrate predictions incorrectly (and this is true regardless of whether the model is linear or deep). It seems clear to me that there is something good happening here because of the focal loss, as the experimental results do show improved calibration, but I still don't feel like I understand why.\\n\\n-- Thanks for pointing this out. We agree that if we have label noise, perfect optimization leads to an optimal fractional predicted probability depending on the level of noise, even in the case of a classification problem, and because focal loss is not a proper loss, it won\\u2019t be calibrated in the case of noisy labels. However, there is no label noise in our classification experimental setup. Also, the standard optimization methods normally used in modern deep neural networks do not lead to perfect optimization, so empirically it may be tough to compare focal loss with cross entropy in the case of noisy labels. We will make it clear in the main paper. In our experimental setup we consider standard classification datasets without any noisy labels. This, along with the other justifications made in the paper, indicates why focal loss performs better.\\n\\n\\n>> In a way what seems to happen is that models are likely to end up in a regime where the entropy of the predictive distribution is lower than the entropy of the actual target distribution, and so regularizing this entropy by some amount will counter this tendency and lead to better-calibrated models. But why do models tend to have sharper-than-real predictive distributions? Is this overfitting? Is this an artifact of SGD optimization? Understanding why this happens would be helpful in understanding why this regularization is needed.\\n\\n-- Training a model on separable data with loss functions like NLL (which contains an exponential map) will give a minimizer where the norms of its parameters are infinity (for a proof with linearly separable data one could look at Lemma 1 in [Soudry et al.]; in neural networks, one can consider the learnt representations before the last linear layer to be linearly separable, as neural networks often achieve near perfect training accuracy). Thus, by optimizing this loss, gradient descent increases the parameter norms.\\n\\nWhen the parameters blow up, the logits will blow up and thus the predictive distribution will be very sharp. For focal loss, this happens much more slowly than it does for cross entropy (or NLL), as we show that the gradient norms are smaller and thus the weights blow up more slowly. We argue that this combination of an exponential loss (minimizer at infinity) and a high-capacity model (hence separable data) causes this low entropy of the predictive distribution. On a similar note, it was also observed in Guo et al. that neural networks can overfit to NLL without overfitting to the 0/1 loss (refer to Entropy before calibration in Fig. S1 in the supplementary of Guo et al.).\\n\\n\\nSoudry, Daniel, et al. \\\"The implicit bias of gradient descent on separable data.\\\" The Journal of Machine Learning Research 19.1 (2018): 2822-2878.\\nGuo, Chuan, et al. \\\"On calibration of modern neural networks.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR.org, 2017.\"}", "{\"title\": \"Response to R1's response (Part 2)\", \"comment\": \">> Still, despite the explanations in the paper and the comment above, I still don't see how you can go from \\\"we're regularizing the predictive distribution's entropy\\\" to \\\"this is the optimal regularization value for this sample\\\"; the assumptions behind that optimization used to pick the best gamma are not clearly stated. Please clarify.\\n\\n-- We are really sorry for the confusion, but we don\\u2019t claim in the paper or in our responses that the value of $\\\\gamma$ we use for the focal loss sample-wise $\\\\gamma$ approach (or for any of the focal loss approaches, like constant $\\\\gamma$ or scheduled $\\\\gamma$) is \\u201cthe optimal regularization value\\u201d. We design the policies for sample-wise $\\\\gamma$ using certain observations (we have listed these in points 1, 2 and 3 in our response to your comment 4) and we clarify our rationale behind the policies in our response to your comment 4. \\n\\nEmpirically, we find the designed policies to perform very well across all datasets and models we trained on, and hence we propose these policies in the paper. Having said that, these policies are heuristics and we cannot claim that the values of $\\\\gamma$ we use are optimal. In fact, as we mentioned in our responses, we are actually interested in finding a more principled approach/algorithm to design these policies in future work.\\n\\nFinally, focal loss is an upper bound on the KL divergence minus $\\\\gamma$ times the entropy (not an equality), so $\\\\gamma$ is the regularization coefficient of the entropy of the predicted distribution. On the other hand, we design our strategy for choosing sample-wise $\\\\gamma$ using Propositions 1 and 2 to minimize the gradient norm. These are two different interpretations of $\\\\gamma$, and we don\\u2019t try to make any leap from one to the other.\"}", "{\"title\": \"response\", \"comment\": \"(there are way too many author responses here, so trying to consolidate my response-to-the-response)\\n\\nFirst, sorry, I used the term bregman divergence in my review when I meant to refer to proper loss (which is a loss where the minimizer is the true probability).\\n\\nThe point about focal loss being cross entropy + regularization on the entropy of the predictive distribution is interesting, thanks for including it. \\n\\nIt still feels weird to me that the paper claims that focal loss improves calibration when in fact focal loss + perfect optimization in the scenario where we have label noise will in fact calibrate predictions incorrectly (and this is true regardless of whether the model is linear or deep). It seems clear to me that there is something good happening here because of the focal loss, as the experimental results do show improved calibration, but I still don't feel like I understand why.\\n\\nIn a way what seems to happen is that models are likely to end up in a regime where the entropy of the predictive distribution is lower than the entropy of the actual target distribution, and so regularizing this entropy by some amount will counter this tendency and lead to better-calibrated models. But why do models tend to have sharper-than-real predictive distributions? Is this overfitting? Is this an artifact of SGD optimization? 
Understanding why this happens would be helpful in understanding why this regularization is needed.\\n\\nStill, despite the explanations in the paper and the comment above, I still don't see how you can go from \\\"we're regularizing the predictive distribution's entropy\\\" to \\\"this is the optimal regularization value for this sample\\\"; the assumptions behind that optimization used to pick the best gamma are not clearly stated. Please clarify.\"}", "{\"title\": \"Response to R2 comment 2: Policies to choose gamma\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nIn response to R1\\u2019s comment 2 and comment 4, we have mentioned the intuitive ways in which we designed the simple policies for sample-wise gamma. We agree that the policies were hand-made but they were theoretically motivated using certain observations which we state in our response (to R1). As part of our future work, we want to develop a more principled algorithm to design these policies.\"}", "{\"title\": \"Response to R1 Comment 4: Choosing per-example gammas\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nWe design the policies for sample-dependent $\\\\gamma$ based on three observations:\\n1. Using Proposition 2 in the paper, given a prediction confidence $p_0$, we can compute a value of $\\\\gamma$ (say $\\\\gamma^*$) such that $g(p_0, \\\\gamma^*) = 1$. \\n\\n2. From Proposition 2 (and visually from Figure 4(a)), we know that for the same $\\\\gamma^*$, if we choose a $p$ such that $p < p_0$, we get $g(p, \\\\gamma^*) > 1$ and if we choose a $p > p_0$, we get $g(p, \\\\gamma^*) < 1$. \\n\\n3. Finally, from Proposition 1, we know that if $g(p, \\\\gamma) < 1$, the gradient norm of focal loss is lower than that of cross entropy and if $g(p, \\\\gamma) > 1$, the gradient norm of focal loss is higher than that of cross entropy. Hence, the weight regularisation effect of focal loss is present only when $g(p, \\\\gamma) < 1$.\\n\\n\\nFor a given training sample, $(x_i, y_i)$, let us say that $p_i$ is the confidence of the model on the correct class. If $p_i > 0.5$, we can say for certain that the sample has been correctly predicted. Hence, we want to accelerate the rise in the network\\u2019s confidence as long as $p_i < 0.5$. However, at the same time, we also want to regularise the weights of the network so that they don\\u2019t blow up leading to the network becoming overconfident and consequently, miscalibrated. \\n\\n\\nKeeping these requirements in mind, we design the policy for $\\\\gamma$ such that when $p_i \\\\in [0, 0.25]$, $g(p_i, \\\\gamma) > 1$ and when $p_i \\\\in (0.25, 1]$, $g(p_i, \\\\gamma) < 1$. This ensures that when the network has a very low confidence on the correct class (confidence being lower than 0.25), we are making the network more confident by ensuring that the gradient norms are higher (as compared to cross entropy) as we have $g(p_i, \\\\gamma) > 1$ (see Proposition 1). However, when the network reaches a confidence of 0.25 or more on the sample, we start regularising the weights of the network by having $g(p_i, \\\\gamma) < 1$. \\n\\n\\nIt should be noted that in the interval $p_i \\\\in [0, 0.25]$, we want $g(p_i, \\\\gamma)$ to be observably higher than 1 (i.e., we don\\u2019t want $g(p_i, \\\\gamma) \\\\approx 1$). As can be seen from Figure 4(a), a higher value of $\\\\gamma$ in this interval provides a higher value of $g(p_i, \\\\gamma)$. 
Hence, we set $\\\\gamma$ to 5 for low confidence values $p_i \\\\in [0, 0.19]$ as $g(p_i, 5) \\\\approx 1$ at $p_i = 0.19$. For $p_i > 0.19$, we change the $\\\\gamma$ from 5 to 3 so that $g(p_i, \\\\gamma)$ stays above 1 for $p_i < 0.25$ (as $g(0.25, 3) \\\\approx 1$, also refer Figure 4(a)).\\n\\n\\nFinally, when $p_i \\\\in (0.25, 1]$, we don\\u2019t want to set a high value of $\\\\gamma$ as such a value can lead to a steep drop in $g(p_i, \\\\gamma)$ thereby causing the gradients to vanish. Thus, we either stick with 3 (Focal Loss (sample-wise $\\\\gamma$ 5,3) policy) or change $\\\\gamma$ to 2 for $p_i > 0.5$ (Focal Loss (sample-wise $\\\\gamma$ 5,3,2) policy).\\n\\n\\nWe found the above mentioned policies (especially the Focal Loss (sample-wise $\\\\gamma$ 5,3) policy) to consistently perform the best across all the network architectures we tried and across all the datasets we trained on. In fact, it also performed really well on our Tiny ImageNet experiment (results for which can be found in our response to R2\\u2019s comment 1). Having said that, we think that an optimal policy which can be easily computed for any model and dataset combination would be ideal and would remove the need to hand-design policies. Developing an algorithm to compute such an optimal policy for any dataset-model pair is something we are interested in pursuing as future work.\\n\\n\\nWe will clarify the above mentioned points in the main paper.\"}", "{\"title\": \"Response to R1 Comment 3: Analysing calibration properties of focal loss on a simpler setup\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nThe behaviour of deep neural networks is generally quite different from linear models and the problem of calibration is more pronounced in the case of deep neural networks. Hence, we focus on analysing the calibration of deep networks in the paper. We have argued in the paper that the increase in magnitude of weights in the network during training is one of the main reasons for miscalibration. This increase in magnitude of weights leads to an increase in the norm of the logits of the network on all points irrespective of whether the point is correctly or incorrectly classified. In case of focal loss, the increase in the magnitude of weights is much lower as compared to cross entropy and hence, models trained using focal loss are more calibrated.\\n\\nWe analyse this setup using a linear model trained on linearly separable data with some noise using both cross entropy and focal loss in Appendix G. We observe that the norm of the logits and in turn, the magnitude of weights increases during training on a linear model, which is not very surprising as exponential losses like cross entropy on linearly separable data have their only minimizer at infinity. However, as our experiments show, this magnification of norms (of weight and logits) is significantly higher for the cross entropy loss as compared to the focal loss though both the losses produce the same decision boundary (shown in Figures 5, 6(b) and 6(c) in the Appendix). Also consistent with our argument, this leads to higher confidence for misclassified points in the case of cross entropy as compared to focal loss (Figure 6(a) in the appendix). 
In short, the linear model shows that, for our data (which is separable for the linear model), training with cross entropy loss leads to higher logit norms and weight norms thereby producing higher confidence for misclassified points as compared to focal loss.\"}", "{\"title\": \"Response to R1 Comment 2: Value of p0 when using the rule to automatically select gamma for focal loss\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nWe use $p_0$ as a notation to indicate the probability for which we find the $\\\\gamma$ such that $g(p_0, \\\\gamma) = 1$ (refer Proposition 2). Hence, in the context of automatically selecting $\\\\gamma$ for focal loss, when we say $p_0$, we are assuming that the reviewer means the probability values at which we change $\\\\gamma$ in the policies for the sample-wise $\\\\gamma$ approach. For instance, in the Focal Loss (sample-wise $\\\\gamma$ 5,3) policy, we have $p_0 = 0.19$ where we change $\\\\gamma$ from 5 to 3 and for the Focal Loss (sample-wise $\\\\gamma$ 5,3,2) policy we have two $p_0$s, one at 0.19 where we change $\\\\gamma$ from 5 to 3 and the other one at 0.5 where we change $\\\\gamma$ from 3 to 2. In our experiments, we use the following general rules for selecting $p_0$ values for the policies:\\n\\n1. One of the changing points to consider should be $p = 0.5$. For any $p > 0.5$, we want $g(p, \\\\gamma) < 1$.\\n\\n2. In the interval $[0, 0.5]$, we do not want $g(p, \\\\gamma)$ to be much lower than 1. In fact, for lower probability values in the interval $[0, 0.5]$, we want $g(p, \\\\gamma)$ to be higher than 1. Therefore, we divide the interval $[0, 0.5]$ into sub-intervals of the form $[a, b]$ and choose a $\\\\gamma$ for that sub-interval such that $g(b, \\\\gamma) = 1$. Hence, for all probability values $p$ lying in the interval $[a, b]$, $g(p, \\\\gamma) > 1$.\\n\\nWe designed a few policies based on the above two rules and chose the ones which consistently performed the best on validation sets. However, as part of our future work in this direction, we\\u2019re interested in designing an algorithm which can provide an optimal policy for any dataset, network pair.\"}", "{\"title\": \"Response to R1 Comment 1: Focal loss not a Bregman Divergence\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nWe deal with a classification problem where $p=0$ or $p=1$ (with one-hot encodings) and expected minimizer of focal loss for it comes out to be $x=p$. However, we believe that the following question needs to be addressed:\\n\\nQ. Focal loss is not a bregman divergence, thus the minimizer of focal loss is not the original label when the original label is fractional. So, what exactly is it minimizing ?\", \"ans\": \"Yes focal loss is not a bregman divergence. However, in Appendix H we show that it is a regularized bregman divergence in the sense that while $\\\\mathrm{CrossEntropy}(p,q)\\n = \\\\mathrm{KL}(p||q) + \\\\mathrm{Entropy}(p)$, we have $\\\\mathrm{FocalLoss}(p,q) > \\\\mathrm{KL}(p||q) + \\\\mathrm{Entropy}(p) - \\\\gamma * \\\\mathrm{Entropy}(q)$. Thus it is minimizing the KL-divergence between the target and predicted label distribution while ensuring that the entropy of the predicted distribution is large and $\\\\gamma$ is essentially the regularization coefficient. Having higher entropy on the predicted distribution can help avoid overconfident predictions observed in modern neural networks, thus leading to better calibration. 
We provide the related proof in Appendix H.\\n\\nPlease let us know if this answers your question or if not, it would be very helpful if you could give us some more clarity about what you are looking for in the theoretical part.\"}", "{\"title\": \"Response to R2 Minor Comments\", \"comment\": \"We sincerely thank the reviewer for these comments which we address as follows:\\n\\n1. \\\"It'd be nice to illustrate the confidence improvements on a few qualitative examples, maybe in appendix\\\": We have added some qualitative results to show the confidence improvements of focal loss in the Appendix F (Figure 8). We took ResNet-50 networks trained on CIFAR-10 using cross entropy, MMCE, Brier score and focal loss (with sample-wise gamma 5,3) and presented the prediction confidence of these networks on correctly and incorrectly classified test samples. We have reported the confidence estimates obtained both before and after temperature scaling. The observations we make in Appendix F support the claim that models trained using focal loss are well calibrated and confident on their correct predictions. \\n\\n2. \\\"10 pages is too much (given that were were given instructions to be more severe with long paper) table 6 and 3 could be merged for instance\\\": We have merged Tables 3 and 6 in the paper now.\\n\\n3. \\\"The focal loss column results of table 1 should be the same as Table 5 (sample wise)?\\\":\\nBriefly, Table 1 compares the loss functions over ECE% while Table 5 compares them over Adaptive ECE%, hence the results are not necessarily the same.\\n\\nIn Table 1, we report ECE(%) values of all the baselines (Cross Entropy, Brier Loss, MMCE) along with the best approach we found among all the focal loss approaches (i.e., sample-wise focal loss). In Table 2, we report the Adaptive ECE(%) values for the same frameworks. Tables 4 and 5, on the other hand, are meant to compare the different focal loss approaches over ECE and Adaptive ECE respectively. Hence, the focal loss column results of Table 1 are the same as the Focal loss sample-wise column in Table 4. Similarly, the focal loss column results in Table 2 are the same as the Focal Loss sample-wise column in Table 5. Since the purposes of Tables 1, 2 and Tables 4, 5 are different, we have kept both sets of tables in the paper.\\n\\n4. \\\"could specify what MMCE means\\\": We have mentioned the meaning of MMCE in Section 1 (Introduction) of the paper (it can be found in the second paragraph of Page 2). Furthermore, in the updated version of the paper, we have included a two-sentence description of MMCE in Section 5 (Experiments) in the Baselines (Cross Entropy, MMCE and Brier Score) paragraph.\\n\\n5. \\\"clean the bibliography\\\": We have updated the references, which previously had arXiv links for some papers that have actually been accepted and published. The updated references now provide information about the conferences/journals in which the papers were published. Please let us know if there are any other changes you\\u2019d like us to make to the bibliography.\"}", "{\"title\": \"Response to R2 comment 1: Experiments done on tiny images\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows:\\n\\nTo compare the performance on a bigger image dataset, we trained the ResNet-50 network using cross entropy, focal loss with a fixed gamma value of 3 and focal loss with the sample-wise gamma policy of 5,3 on Tiny ImageNet. 
The Tiny ImageNet dataset is a subset of ImageNet with 64 x 64 dimensional images, 200 classes and 500 images per class in the training set and 50 images per class in the validation set. The image dimensions of Tiny ImageNet are twice the size of the CIFAR-10/100 dataset images.\\n\\nWe use SGD with a momentum of 0.9 as our optimiser and train the networks for 100 epochs with a learning rate of 0.1 for the first 40 epochs, 0.01 for the next 20 epochs and 0.001 for the last 40 epochs. We use a training batch size of 64. We also augment the training images with random crops and random horizontal flips. It should be noted that we saved 50 samples per class (i.e., a total of 10000 samples) from the training set as our own validation set to fine-tune the temperature parameter on (hence, we trained on 90000 images) and we use the Tiny ImageNet validation set as our test set. We report the Tiny ImageNet validation set error%, ECE% both before and after temperature scaling and Ada-ECE% both before and after temperature scaling in the table below.\\n\\n+------------------------------------+----------+----------------+-----------------+--------------------+---------------------+\\n| Loss Function | Error(%) | ECE(%) (Pre T) | ECE(%) (Post T) | Ada-ECE(%) (Pre T) | Ada-ECE(%) (Post T) |\\n+------------------------------------+----------+----------------+-----------------+--------------------+---------------------+\\n| Cross Entropy | 49.88 | 14.98 | 5.05(1.4) | 14.98 | 5.05(1.4) |\\n| Focal Loss (gamma = 3) | 48.37 | 2.08 | 2.08(1.0) | 1.71 | 1.71(1.0) |\\n| Focal Loss (Sample-wise gamma-5,3) | 48.43 | 1.80 | 1.80(1.0) | 2.06 | 2.06(1.0) |\\n+------------------------------------+----------+----------------+-----------------+--------------------+---------------------+\\n\\nFirstly, we observe that models trained on focal loss have a lower error than the model trained using cross entropy (thus better accuracy).\\n\\nSecondly, we also observe a significant improvement in ECE and Ada-ECE values both before and after temperature scaling for models trained on focal loss indicating that these models are not only more accurate but also much more calibrated.\\n\\nFinally, we note that the optimal temperature for both models trained using focal loss is 1 which indicates that temperature scaling could not make these models any more calibrated.\\n\\nWe will add all these baselines to the paper. We report these preliminary results here (and not in the paper) because we have to train quite a few other networks (like ResNet-110, DenseNet, Wide ResNet etc.) and also we have to train other baselines (like MMCE, Brier Score and other versions of focal loss) to obtain the complete set of results which we can then add to the paper.\"}", "{\"title\": \"Response to R3 Comment 2: Data Augmentation and Calibration\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nA calibrated model without good test set accuracy isn\\u2019t useful. Hence, in all our experiments, we set the training parameters in a way so that we get models that generalise well, obtaining state-of-the-art test set accuracies for each model on each dataset. In order to achieve this, we had to apply data augmentation on both the CIFAR-10/100 training sets for all the loss functions, as without that we don\\u2019t get state-of-the-art test set accuracies on CIFAR-10/100. The data augmentation involved standard techniques like random crops and random horizontal flips. 
However, we didn\\u2019t have to use any data augmentation for the NLP datasets, 20 Newsgroups and SST Binary to achieve state-of-the-art results on them. This shows that our adaptation of focal loss performed well irrespective of data augmentation.\\n\\nIn order to make these points clear, we have added them in Appendix D, where we describe the training and implementation details for our experiments. Having said that, numerically investigating the importance of data augmentation in the context of model calibration is, in itself, an interesting idea, which we think would make for interesting future work.\"}", "{\"title\": \"Response to R3 Comment 1: Comparison with cross entropy applied to smoothed labels (Part 2)\", \"comment\": \"Table 2\\n+--------+-----------+-----------+------------+----------+----------------+-----------------+--------------------+---------------------+\\n| Loss | Smoothing | Dataset | Model | Error(%) | ECE(%) (Pre T) | ECE(%) (Post T) | Ada-ECE(%) (Pre T) | Ada-ECE(%) (Post T) |\\n+--------+-----------+-----------+------------+----------+----------------+-----------------+--------------------+---------------------+\\n| CE | 0.0 | CIFAR-10 | ResNet-50 | 4.95 | 4.35 | 1.35(2.5) | 4.33 | 2.14(2.5) |\\n| CE | 0.0 | CIFAR-10 | ResNet-110| 4.89 | 4.41 | 1.09(2.8) | 4.40 | 1.84(2.7) |\\n| CE | 0.0 | CIFAR-100| ResNet-50 | 23.30 | 17.52 | 3.42(2.1) | 17.52 | 3.67(2.1) |\\n| CE | 0.0 | CIFAR-100| ResNet-110| 22.73 | 19.05 | 4.43(2.3) | 19.05 | 5.50(2.4) |\\n| Focal | 0.0 | CIFAR-10 | ResNet-50 | 4.98 | 1.55 | 0.95(1.1) | 1.56 | 1.26(1.1) |\\n| Focal | 0.0 | CIFAR-10 | ResNet-110| 5.42 | 1.87 | 1.07(1.1) | 2.07 | 1.67(1.1) |\\n| Focal | 0.0 | CIFAR-100| ResNet-50 | 23.22 | 4.50 | 2.00(1.1) | 4.39 | 2.33(1.1) |\\n| Focal | 0.0 | CIFAR-100| ResNet-110| 22.51 | 8.56 | 4.12(1.2) | 8.55 | 3.96(1.2) |\\n+--------+-----------+-----------+------------+----------+----------------+-----------------+--------------------+---------------------+\"}", "{\"title\": \"Response to R3 Comment 1: Comparison with cross entropy applied to smoothed labels (Part 1)\", \"comment\": \"We sincerely thank the reviewer for this comment which we address as follows.\\n\\nTo empirically observe the effects of training networks using cross entropy loss with smoothed labels, we trained ResNet-50 and ResNet-110 on both CIFAR-10 and CIFAR-100 using cross entropy loss with smoothing factors of 0.05 and 0.1. In simple terms, if the smoothing factor is $\\\\alpha$ and if for a sample we have a one-hot label vector $Y$, then its smoothed label vector $S$ will be such that $S_i = (1 - \\\\alpha ) * Y_i + \\\\alpha * (1 - Y_i) / (K-1)$ where $K$ is the number of classes. In Table 1, we present the test set error %, ECE (%) both pre and post temperature scaling and Ada-ECE(%) both pre and post temperature scaling for each of these configurations. \\n\\nWe also provide the same metrics for ResNet-50 and ResNet-110 trained using cross entropy and focal loss with one-hot labels (these numbers were taken from Tables 1,2 and 3 in the paper) for ease of comparison in Table 2. We provide Table 2 in a separate comment due to lack of space. 
The focal loss numbers in Table 2 are for focal loss with the sample-wise gamma approach reported in Tables 1 and 2 of the paper.\\n\\nTable 1\\n+-------+-----------+-----------+------------+----------+----------------+-----------------+--------------------+---------------------+\\n| Loss | Smoothing | Dataset | Model | Error(%) | ECE(%) (Pre T) | ECE(%) (Post T) | Ada-ECE(%) (Pre T) | Ada-ECE(%) (Post T) |\\n+-------+-----------+-----------+------------+----------+----------------+-----------------+--------------------+---------------------+\\n| CE | 0.05 | CIFAR-10 | ResNet-50 | 4.99 | 3.08 | 1.33(0.9) | 3.74 | 2.89(0.8) |\\n| CE | 0.05 | CIFAR-10 | ResNet-110 | 5.11 | 1.56 | 1.82(0.9) | 3.23 | 2.52(0.9) |\\n| CE | 0.05 | CIFAR-100 | ResNet-50 | 22.10 | 7.61 | 4.19(1.1) | 7.80 | 6.32(1.1) |\\n| CE | 0.05 | CIFAR-100 | ResNet-110 | 23.45 | 10.89 | 5.97(1.1) | 10.71 | 7.77(1.1) |\\n| CE | 0.1 | CIFAR-10 | ResNet-50 | 5.04 | 7.30 | 1.02(0.8) | 7.90 | 2.95(0.8) |\\n| CE | 0.1 | CIFAR-10 | ResNet-110 | 5.26 | 6.55 | 1.27(0.8) | 8.09 | 3.20(0.8) |\\n| CE | 0.1 | CIFAR-100 | ResNet-50 | 22.82 | 5.27 | 5.27(1.0) | 5.93 | 5.93(1.0) |\\n| CE | 0.1 | CIFAR-100 | ResNet-110 | 22.51 | 4.55 | 4.55(1.0) | 8.29 | 8.29(1.0) |\\n+--------+-----------+-----------+------------+----------+----------------+-----------------+--------------------+---------------------+\\n\\n* It is quite clear that models trained using focal loss outperform those trained using cross entropy loss with smoothed labels both before and after temperature scaling.\\n\\n* All the networks trained on smoothed labels are able to achieve test set accuracies which are in the state-of-the-art ballpark. Moreover, we observe a significant improvement in both ECE and Ada-ECE values before temperature scaling for models trained using cross entropy loss with smoothed labels as compared to models trained using cross entropy loss with one-hot labels. These improvements however, are not reflected in the ECE and Ada-ECE numbers obtained after temperature scaling.\\n\\n* It is quite interesting to note that training on smoothed labels causes the models to become less confident on their predictions in general as we often obtain optimal temperatures which are lower than 1. This means that temperature scaling for these models is increasing their confidence. On the other hand, optimal temperatures for models trained using cross entropy with one-hot labels are much greater than 1 and hence, temperature scaling is lowering the confidence of these models.\\n \\nWe will add the numbers obtained from models trained using cross entropy with smoothed labels (both with smoothing factors 0.1 and 0.05) to the paper. We present the preliminary set of results here (and not in the paper) because we need to train other networks (Wide ResNet, DenseNet, etc.) as well to have a complete set of results which can then be included in the paper.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper describes how the use of the now-standard focal loss can lead to improved calibration results when used to fit deep-models. When fitting a large capacity model with NLL, the model can often try to drive its predictions close to 1 (i.e. 
infinity pre-softmax) on the training set, ultimately leading to poorly calibrated models and overfitting behaviour. The focal loss appears to mitigate this issue.\\n\\nThe approach is extremely simple to implement, the theoretical justifications are believable, and the calibration/accuracy performances seem to be good -- for this reasons, I think that the paper should be accepted.\\n\\n(1) it would be interesting to compare the approach to using the standard cross-entropy applied to smoothed labels (i.e. (1-eps,eps) instead of (1,0) in binary classification and obvious generalisation in multi-class setting).\\n\\n(2) data-augmentation often greatly helps with calibration -- the paper did not describe in details what has been done on that front for the numerical investigations.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper studies the effect of the focal loss, proposed by Lin et al. in 2017 on network miscalibration, which appears when the network's confidence in its prediction does not match its correctness. The authors provide a theoretical explanation\\u00a0to the superior results of the focal loss for calibration.\\u00a0The temperature scaling technique of Guo et al. 2017 is applied (dividing the network's logits by a scalar learnt on a val set prior to softmax) to networks trained using the focal loss, with different options for the focal parameter, as well as the standard multi-class cross entropy and a few others.\\u00a0The experiments\\u00a0on CIFAR10/100 as well as two text dataset (20 Newsgroups, Stanford Sentiment Treebank) reach lower expected calibration error compared to the cross entropy (75% of relative improvement on cifar100 for instance).\\n\\nThe importance of the contribution will probably be discussed here. At first glance, it seems that the works build mainly on advances from Lin et al & Guo et al, but the authors do a promising job in combining the two.\", \"positive_aspects\": [\"The paper is well written.\", \"Experiments\\u00a0on both image and text dataset demonstrate the superiority of the focal loss on several calibration metrics.\", \"The theoretical explanation is convincing.\"], \"negative_points\": [\"The importance of the problem is motivated by future assessments by downstream tasks but do not address this aspect in the experiments. In particular, as the images experiments are conducted on tiny images, an experiment on a real size image dataset would strengthen the paper.\", \"The policy that works best for defining the sample wise tuning of the focal parameter was hand-made but ultimately uses only 3 parameters so finally it is not so bad.\"], \"minor\": [\"It'd be nice to illustrate the confidence improvements on a few qualitative examples, maybe in appendix.\", \"10 pages is too much (given that were were given instructions to be more severe with long paper) table 6 and 3 could be merged for instance.\", \"The focal loss column results of table 1 should be the same as Table 5 (sample wise)?\", \"could specify what MMCE means\", \"clean the bibliography\", \"I've read the other reviews and authors' responses. Experiences on Tiny ImageNet are better than CIFAR but still a little far from what I'd call real images but I understand it can be difficult to run experiments on ImageNet. 
Since the choice of gamma seems to be leading consistent results also on tinyIN, I find it less concerning.\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper explores how focal loss can be used to improve calibration for classifiers. Focal loss extends the cross-entropy loss, which is -log(p_label), with a multiplicative factor equal to (1 - p_label)^gamma. Intuitively, this downweights the loss for elements where the probability of the correct label p_label is close to 1, relatively increasing the weight of the misclassified examples.\\n\\nSomewhat surprisingly, this tends to improve the calibration of the model. I say surprisingly because the focal loss is not a bregman divergence for all values of alpha so in general the expected minimizer of the focal loss for a fractional label is not the fractional label (i.e. the minimizer wrt x of - p (1-x)^gamma log(x) - (1-p) x^gamma log (1 -x) is not in general p).\\n\\nThe paper shows somewhat thorough experiments on many datasets justifying this observation, but the theoretical part is rather weak since it doesn't seem to address this issue with the focal loss.\\n\\nIt's also not very clear from reading the paper what the p0 should be when using the rule to automatically select the gamma of the focal loss.\\n\\nI'd support accepting the paper if the calibration properties of the focal loss itself was better analyzed on a simpler setup (linear models, or single parameter models) so it's easier to understand how it's helping calibration in the deep network setup and if the algorithm for choosing per-example gammas was more clearly stated out.\"}" ] }
ryx6WgStPB
Hypermodels for Exploration
[ "Vikranth Dwaracherla", "Xiuyuan Lu", "Morteza Ibrahimi", "Ian Osband", "Zheng Wen", "Benjamin Van Roy" ]
We study the use of hypermodels to represent epistemic uncertainty and guide exploration. This generalizes and extends the use of ensembles to approximate Thompson sampling. The computational cost of training an ensemble grows with its size, and as such, prior work has typically been limited to ensembles with tens of elements. We show that alternative hypermodels can enjoy dramatic efficiency gains, enabling behavior that would otherwise require hundreds or thousands of elements, and even succeed in situations where ensemble methods fail to learn regardless of size. This allows more accurate approximation of Thompson sampling as well as use of more sophisticated exploration schemes. In particular, we consider an approximate form of information-directed sampling and demonstrate performance gains relative to Thompson sampling. As alternatives to ensembles, we consider linear and neural network hypermodels, also known as hypernetworks. We prove that, with neural network base models, a linear hypermodel can represent essentially any distribution over functions, and as such, hypernetworks do not extend what can be represented.
[ "exploration", "hypermodel", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=ryx6WgStPB
https://openreview.net/forum?id=ryx6WgStPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "VrdXmrpI5n", "BylA9vPnsH", "BJgUqGcdir", "S1xIag9usB", "rJlvT-lGiB", "rkxXO-lGsB", "B1lsm-ezor", "SJlvWZxziS", "r1gZ_23X5H", "B1eEz8qRKH", "HJgv4jb5YH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741886, 1573840790360, 1573589645682, 1573589182157, 1573155263116, 1573155178709, 1573155106873, 1573155071446, 1572224105295, 1571886604177, 1571588911140 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2153/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2153/Authors" ], [ "ICLR.cc/2020/Conference/Paper2153/Authors" ], [ "ICLR.cc/2020/Conference/Paper2153/Authors" ], [ "ICLR.cc/2020/Conference/Paper2153/Authors" ], [ "ICLR.cc/2020/Conference/Paper2153/Authors" ], [ "ICLR.cc/2020/Conference/Paper2153/Authors" ], [ "ICLR.cc/2020/Conference/Paper2153/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2153/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2153/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers ensemble of deep learning models in order to quantify their epistemic uncertainty and use this for exploration in RL. The authors first show that limiting the ensemble to a small number of models, which is typically done for computational reasons, can severely limit the approximation of the posterior, which can translate into poor learning behaviours (e.g. over-exploitation). Instead, they propose a general approach based on hypermodels which can achieve the benefits of a large ensemble of models without the computational issues. They perform experiments in the bandit setting supporting their claim. They also provide a theoretical contribution, proving that an arbitrary distribution over functions can be represented by a linear hypermodel.\\n\\nThe decision boundary for this paper is unclear given the confidence of reviewers and their scores. However, the tackled problem is important, and the proposed approach is sound and backed up by experiments. Most of reviewers concerns seemed to be addressed by the rebuttal, with the exception of few missing references which the authors should really consider adding. I would therefore recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"The authors have responded to most questions\", \"comment\": \"I got the appropriate answers to most of my questions and will increase my score to 7.\\n\\nI would have given a higher score if there were experiments supporting your theorem.\"}", "{\"title\": \"final comment\", \"comment\": \"We were surprised by this low score and haven't gained further insight into the reviewer's reasoning after responding to the initial review.\\n\\nIt seems to us that the most significant concern raised by this reviewer was that our experiments focused on simulated data. We would like to emphasize that this was an important choice as our intention was to carefully design controlled experiments to isolate issues and decisively answer questions we posed. 
Working with real data would have diffused focus, requiring us to simultaneously address a variety of issues that would arise from working with a real data set.\"}", "{\"title\": \"the authors' perspective on the paper and the reviews\", \"comment\": \"We believe we have clarified items the reviewers asked about and added results requested by the reviewers. We have not subsequently heard back from reviewers. Before the rebuttal period closes, we'd like to offer our perspective on the paper, and in particular, why we are surprised by the current low scores. We hope our points are clear and can justify higher scores.\\n\\nIn our view, the paper makes a few clear and striking points:\\n\\n1) Alternative hypermodels can offer dramatic gains over ensemble hypermodels -- our experiments point to cases where the speedup is 100x or greater!\\n\\n2) Alternative hypermodels can enable more intelligent exploration than is done by Thompson sampling. We carry out experiments with information-directed sampling to illustrate this, and provide computational results for an example where regret is reduced by over 25x!\\n\\n3) We prove that linear hypermodels and a sufficiently complex neural network can encode essentially any distribution over functions.\\n\\nWe should mention that we were so surprised by result (1) that we spent a lot of time checking and verifying. As such, in our view this is quite significant. Result (2) is also quite striking. We believe result (3) represents a substantial theoretical contribution.\\n\\nIn retrospect, we probably could have written a paper on any one of these three results individually. That would have allowed us to emphasize its significance. Perhaps putting all three results in one paper was too much, possibly wore down reviewers, and effectively diminished the attention or appreciation that could be afforded to any one.\"}", "{\"title\": \"ABOUT THEOREM 1\", \"comment\": \"We notice that none of the reviewers have commented on Theorem 1. Our understanding is that this novel result is a significant contribution of this paper. Specifically, it states that with neural network base models, linear hypermodels can represent essentially any probability distribution over functions with finite domain. In other words, without any additional constraints (e.g. constraints on the depth/width of the base model), linear hypermodels are essentially sufficient and hypernetworks do not extend the range of probability distributions that can be represented.\"}", "{\"title\": \"response to comments of reviewer #2\", \"comment\": \"Thank you very much for your review.\\n \\n0) On the performance gap between linear and ensemble hypermodels. We are working now on an experiment to see how index dimension affects performance and will add results.\\n\\n1) On larger ensembles. We observed in the neural network bandit experiments of Section 4.2 that, surprisingly, increasing the ensemble size beyond 100 does not seem to improve performance by much, if at all. Further, computational requirements become prohibitive as we try to increase the ensemble size beyond what we have reported. The fact that we can attain such performance with reasonable compute is the major source of advantage for linear hypermodels.\\n\\n4) The statement in the paper is correct as is. This setup makes the perturbations Gaussian.\\n\\n5) Clarifying the assumptions about random variables. As mentioned in (4) we\\u2019ve set things up so that a^\\\\top z is Gaussian. 
Hence, we are perturbing each response by Gaussian noise leading the hypermodel to approximate a posterior distribution. Past literature on ensemble sampling and bootstrapped DQN has used similar Gaussian perturbations. We have added a clarifying comment.\\n\\n6) On regularization. Yes, $\\\\nu_0$ parameterizes the additive prior. The idea is to regularize toward the prior network. This allows the initial model to represent prior uncertainty and guide early exploration. In the context of Thompson sampling, this induces initial randomization.\\n\\n7) Why multiply by $|D|$? This is done to calibrate the weight of the error term against that of the regularization term so that the hypermodel parameters will adapt to approximate a posterior distribution.\\n\\n8) Yes, the cardinality of the index set is independent of minibatch size. For each training data point, there are multiple models realized by multiple indices.\\n\\n9) We use the decomposition of Section 2.5, page 4, because the hypermodel needs to reflect uncertainty associated with the prior distribution. If we simply initialize to small values, we won\\u2019t get this, and for example, Thompson samples won\\u2019t adequately vary.\\n\\n10) Your notation looks good. The original sentence was technically correct too but perhaps that was confusing since the first set mentioned only identified a generic partition, which is then defined by the remainder of the sentence. We have clarified this.\\n\\nThanks for pointing out typos and citations that we will add.\"}", "{\"title\": \"response to comments of reviewer #1\", \"comment\": \"Thank you very much for your review.\\n \\n1) Regarding what we mean by intelligent exploration: our assessment is in terms of regret and computational requirements, though you could also assess in terms of the number of samples needed and computational requirements to draw the same qualitative conclusions.\\n\\n2) Regarding simulated data: We intentionally focussed on simulated data in order to carry out controlled experiments that allow for definitive conclusions.\\n\\n3) On Section 5 benefiting from linearity of the model. Section 5 does not use a linear model.\\n\\n4) On assessing performance based on regret in bandits. It is not immediately clear what metric to use to compare approximate posterior distributions and some choices may not be easy to assess in a computationally efficient manner. Different metrics may be more or less appropriate depending on how the approximate posterior will be used. The motivation of our research is to provide tools for efficient exploration, so exploration performance seemed like a natural metric for us.\"}", "{\"title\": \"response to comments of reviewer #4\", \"comment\": \"Thank you very much for your review.\\n \\n1) With regards to numerical experiments, as you suggest, we will include results in the appendix on sensitivity analysis with respect to hyperparameters.\\ni) Trying larger neural networks would entail a lot of computational work and would be difficult for us to get done within the rebuttal time frame. Also, it is not clear to us that this would add insight given that the current network is already of nontrivial size.\\nii) We have added results in the appendix showing that ensembles without additive priors perform terribly.\\niii) We are running an experiment without noise and will add the results when they are available.\\n \\n2) The idea here is to regularize towards the prior network. 
This is essential as it induces exploration algorithms to resolve the prior uncertainty. In the case of Thompson sampling, this is what leads to randomization of initial samples.\\n \\n3) The point here is that it is easy in our framework to introduce structure to reduce the number of parameters. For example, graphical structure associated with conditional independencies can be imposed through linear constraints.\\n \\n4) This section is theoretical and offers a new fundamental and possibly surprising result on the representation power of linear hypermodels.\\n \\nBy perturbing data, we mean adding noise to response variables (adding $\\\\sigma_2 a^\\\\top z$ to $y$ in the loss functions of Section 2.1). Sorry that we did not state this clearly. We\\u2019ve edited the wording.\\n \\nThe notation for $a$ in Sections 2.1 and 2.3 is already consistent, so we are not sure what to change.\\n \\nThank you for pointing out typos.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper investigates the possibility of using hypermodels in improving the exploration of bandit problems. By using SGD for training the hypermodel parameters, this paper introduces a computationally efficient alternative to ensemble methods. The idea of the paper is novel and interesting; however, I do have several concerns, mainly regarding the numerical experiments, which I would like the authors to address in the rebuttal.\\n\\n1) My first and the most important concern is that the numerical experiments do not evaluate different aspects of the method. There are numerous ways to check the sensitivity of your method to the choice of hyperparameters that I think could be added to the appendix. In addition to testing various values for $\\\\sigma_p$, $\\\\sigma_w$, and $\\\\nu_0$, I think that multiple experiments are missing:\\n i) larger neural network\\n ii) I was expecting to see what would happen without additive prior. It could be one of the baselines in Figure 3. Even though (Osband et al., 2018) discuss the effect of this extension, the use of this model is not numerically justified. \\n iii) How sensitive are the experiments to the noise of the output variable? What will happen if you do not add noise?\\n\\nThere are also other experiments possible such as testing on a real scenario that would significantly improve the presentation of the work. This is not a requirement though.\\n\\n2) I didn't get what is the purpose of the last term in the loss function defined in Section 2.1. Why are you preferring $\\\\nu$ to be close to $\\\\nu_0$? \\n\\n3) P4, \\\"it is natural to consider linear hypermodels in which parameters a and B are linearly constrained.\\\" This sentence needs to be clarified. I didn't comprehend how you are dealing with large neural network issues.\\n\\n4) In Section 6, I was expecting to see a simulation showing a comparison of linear hypermodel with hypernetworks.\", \"minor\": [\"On P2, \\\"informatino-directed\\\" -> \\\"information-directed\\\"\", \"In the second paragraph of Section 2.1, it is mentioned that a hypermodel involves perturbing data. My understanding is that what is meant here by perturbing data is to add some noise to X. However, in the later formulae, there is no such thing as perturbing data. 
You could say that since our numerical experiments didn't show any improvement using data perturbation, we didn't include it in our notations. Please remove the confusion.\", \"very minor, but I would suggest using a different notation for $a$ in Sections 2.1 and 2.3 to remove any possible confusion.\", \"I think that the summation in computing the variance of IDS should be over $\\\\tilde{Z}_{x^*}$.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper builds on a classical idea of sampling model parameters apart from learning them. Specifically, it combines hierarchical sampling with neural networks and proposes models that can help explore the parameter space efficiently. The proposal is evaluated appropriately.\\n\\nWhat exactly do we mean by intelligent exploration? Is this quantified via the #samples needed or variance of sampled parameters? Or is it via regret? \\n\\nThe paper is clearly written and the idea makes sense. However the experiments are essentially based on simulated data. It is not entirely clear as to how this would translate to real setups. \\n\\nIs it possible that the linear hypermodel is performing well because the data was generated according to a linear model in section 5?\\n\\nIf the baseline is a classical ensembling setup, then why not use classical performance measures to evaluate the benefit of hypermodeling? like accuracy etc. Why are we specifically talking about bandits? In other words, does the proposed hyper sampling allow for better weak learners in general as well?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors demonstrate advantages of a linear hypermodel over an ensemble method in exploration guided by epistemic uncertainty. They perform an empirical study in the bandit setting and claim that their approach both outperforms the ensemble method and offers a significant increase in computational efficiency. The theoretical contribution is that they prove universality in the sense that an arbitrary distribution over functions can be represented by a linear hypermodel. The experiments support their claims. Some of the explanations, however, are confusing, and relations to prior work should be clarified.\\n\\nFigure 3 shows a surprisingly large performance gap between the hypermodel and the ensemble method as the number of actions increases. But how about comparing linear hypermodels with different index sizes? Do we also expect asymptotic improvement as we increase the index size?\", \"imprecise_or_confusing_explanations_in_the_paper\": \"1) Page 2, first Q: In theory, the effectiveness of the ensemble method should converge to that of the hypermodel as the ensemble size increases. They only tried ensemble size [10, 30, 100, 300] and then concluded that linear hypermodel can be effective regardless of the size of ensembles. Why?\\n\\n3) Page 3, Section 2.1, second paragraph, first sentence: Please clarify a bit more what do you mean by perturbing data? Random shuffling of the dataset in each training epoch? 
What does \\u2018response variables\\u2019 mean?\\n\\n4) Page 3, Section 2.1, second paragraph, last 2 sentences about $A_t$: we guess it should be $A_t ~ N(0, I)$ if $p_z$ is unit Gaussian according to the description in this paragraph. The current text claims it is the other way around, perhaps a typo?\\n\\n5) Page 3, Section 2.1, first equation: Why take the inner product between $a$ and $z$ ? How does this reflect the randomized computation (the motivation for augmented random vector $A_t$)? The objective is to maximize the log-likelihood of the prediction under the Gaussian assumption. Please clarify the assumptions about random variables $Y_t$ at the beginning of this paragraph. \\n\\n6) Same place as in 5): Why regularize hypermodel parameters such that they are not too far from the initial vector? Is $\\\\nu_0$ actually the additive prior model described in Section 2.5?\\n\\n7) Page 3, Section 2.1, second equation: why multiply $|D|$ in the first term within the parentheses? Why not just $1/|D_tilde|$ to average the prediction error over the mini-batch?\\n\\n8) Page 3, Section 2.1, second equation: is the cardinality of the index set $|Z_tilde|$ independent of mini-batch size? I.e. for each training data point there could be multiple models realized by multiple indices $z$\\n\\n9) Page 4, Section 2.5: Why use this decomposition for training the hypermodel? If the intuition is to keep the initial weight small, what if we just simply initialize small values for $f_\\\\theta(x)$ without decomposition?\\n\\n10) Page 5, last second sentence: The notation of partition (the set notation after \\u2018Here,\\u2026.\\u2019) is supposed to be $\\\\hat{\\\\mathcal{Z}}_{x^*} = \\\\{ z\\\\in \\\\hat{\\\\mathcal{Z}} | x^* in \\\\argmax_{x} f_{g_{\\\\nu}(z)}(x) \\\\}$\", \"minor_typos\": [\"Page 2, third paragraph: \\u2018\\u2026we compare their [efficacy] when used...\\u2019 -> [efficiency] ?\", \"Page 2, the last paragraph before Section 2, first sentence: \\u2018Approaches to approximating TS and [informatino]-directed sampling...\\u2019 -> [information]\"], \"relations_to_prior_work\": \"1. Page 2: Hypernetworks (where one neural net learns to generate the weights of another net) are much older than this recent reference of 2016. One should relate this work to the original references since 1991 [FAST0-3a][FAST5][FASTMETA1-3][CO2] in section 8 of the overview http://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html \\n\\n2. Intro 2nd par: dropout was first published much earlier in 1990 as the stochastic delta rule: \\nHanson, S. J.(1990). A Stochastic Version of the Delta Rule, PHYSICA D,42, 265-272. See also arXiv:1808.03578, 2018. \\n\\nWe might improve our rating provided the comments above were addressed in a satisfactory way in the rebuttal.\", \"edit_after_rebuttal\": \"The authors replied: \\\"Thanks for pointing out typos and citations that we will add.\\\" But apparently in the revised PDF this did not happen.\"}" ] }
ryl3blSFPr
Denoising Improves Latent Space Geometry in Text Autoencoders
[ "Tianxiao Shen", "Jonas Mueller", "Regina Barzilay", "Tommi Jaakkola" ]
Neural language models have recently shown impressive gains in unconditional text generation, but controllable generation and manipulation of text remain challenging. In particular, controlling text via latent space operations in autoencoders has been difficult, in part due to chaotic latent space geometry. We propose to employ adversarial autoencoders together with denoising (referred to as DAAE) to drive the latent space to organize itself. Theoretically, we prove that input sentence perturbations in the denoising approach encourage similar sentences to map to similar latent representations. Empirically, we illustrate the trade-off between text-generation and autoencoder-reconstruction capabilities, and our model significantly improves over other autoencoder variants. Even from completely unsupervised training, DAAE can successfully alter the tense/sentiment of sentences via simple latent vector arithmetic.
[ "controllable text generation", "autoencoders", "denoising", "latent space geometry" ]
Reject
https://openreview.net/pdf?id=ryl3blSFPr
https://openreview.net/forum?id=ryl3blSFPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "GM-O2ojRNJ", "H1eQz0q3sB", "r1ewQqdsjr", "SkeCx_djsr", "SklWUyn9ir", "rklXkKAFiH", "HygStORKiB", "H1lNCGiSor", "Hkg7iGjSiB", "HylyQjeGsH", "Hkl4scxzsS", "BylI49ChFr", "H1x_OYsitH", "r1gBOZnutS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741857, 1573854731340, 1573779998597, 1573779445842, 1573728073473, 1573673179147, 1573673084870, 1573397195749, 1573397147102, 1573157654653, 1573157531749, 1571772974504, 1571694959629, 1571500396864 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/Authors" ], [ "ICLR.cc/2020/Conference/Paper2152/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2152/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2152/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This work presents a simple technique for improving the latent space geometry of text autoencoders. The strengths of the paper lie in the simplicity of the method, and results show that the technique improves over the considered baselines. However, some reviewers expressed concerns over the presented theory for why input noise helps, and did not address concerns that the theory was useful. The paper should be improved if Section 4 were instead rewritten to focus on providing intuition, either with empirical analysis, results on a toy task, or clear but high level discussion of why the method helps. The current theorem statements seem either unnecessary or make strong assumptions that don't hold in practice. As a result, Section 4 in its current form is not in service to the reader's understanding why the simple method works.\\nFinally, further improvements to the paper could be made with comparisons to additional baselines from prior work as suggested by reviewers.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Revision is uploaded\", \"comment\": \"We uploaded a revision that has addressed the reviewers' comments. We thank all reviewers for their useful feedback, and we believe that the clarity and completeness of our paper has improved through discussion.\"}", "{\"title\": \"Theorem 1\", \"comment\": \"As Reviewer#2 also pointed out, in Theorem 1, we actually meant \\u201cFor any *one-to-one* encoder mapping E from {x1 ... xn} to {z1 \\u2026 zn}, the optimal value of objective \\u2026 is the same\\u201d (we\\u2019re revising our paper to correct this). If the encoder maps different x to the same z, the reconstruction loss will be strictly worse and an optimal AAE will not learn such mappings. The remaining question is which one-to-one mapping AAE will learn, which is what Theorem 1 studies.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"We thank the reviewer for the feedback. 
While the results hold, we realize that the current writing of the theorems may be unclear/misleading. We clarify below and will revise our paper accordingly:\\n\\n- Theorem 1\\nWe meant \\u201cFor any *one-to-one* encoder mapping E from {x1 ... xn} to {z1 \\u2026 zn}, the optimal value of objective \\u2026 is the same\\u201d. Here we analyze which x-z mapping will the encoder/decoder learn under global optimality. If the encoder maps different x to the same z, the reconstruction loss will be strictly worse and an optimal AAE will not learn such mappings. The remaining question is which one-to-one mapping AAE will learn, which is what Theorem 1 studies.\\n\\nIn the reviewer\\u2019s example, the second encoder E2 would always be favored over E1 since E2 enables better reconstruction. (Also note that we have not assumed the encoder/decoder are Lipschitz on discrete input x, we only assume that the decoder is Lipschitz on its continuous input z.)\\n\\n- Theorem 2\", \"the_context_in_which_the_theorem_applies_is_described_in_the_previous_paragraph\": \"\\u201cthere are two pairs of x closer together and also two pairs of z closer together\\u201d. Here, the perturbation probability can be p_C(xi | xj) = 1/2 if d(xi, xj) < eps and = 0 otherwise. For improved clarity, we\\u2019ll move this setup context inside of the statement of Theorem 2.\\n\\n- \\u201cthere may be much more fundamental results in information theory that basically say what you are trying to convey, that when the encoder is constrained to maintain some kind of coherence between neighboring inputs, its choices of outputs are more limited.\\u201d\\n\\nWe agree it would be nice to have a broader information theoretic analysis of the encodings of discrete sequences. However, we are not aware of any specific references that would address the issue we are studying, especially how the use of input-denoising can help impose a particular geometry on the resulting encodings.\"}", "{\"title\": \"idea is fine, writing it as a theorem seems a stretch.\", \"comment\": \"I think I better understand what you are trying to say in theorem 1 and 2 but I still do not believe they are mathematically rigorous.\\n\\nSay my set X = {1,2} and Z = {-1,1}. I can construct an encoder E1(1) = E1(2) = -1 and another encoder E2(1)=1, E2(2) = -1. Then your statement is false and both are valid lipschitz encoders (for a large enough lipschitz constant.)\", \"theorem_2\": \"the assumption statement doesn't make sense. What if I have 4 points xk all within epsilon of xi; then they can't all have probability 1/2 (assuming discrete X).\\n\\nIn fact I think there may be much more fundamental results in information theory that basically say what you are trying to convey, that when the encoder is constrained to maintain some kind of coherence between neighboring inputs, its choices of outputs are more limited.\"}", "{\"title\": \"(continued)\", \"comment\": \"6) \\u201cTheorem 1 - What if the set of zs isn\\u2019t unique and there is some sort of encoder collapse? Does this theorem still hold? (i.e.) there exists some set of points x_1, x_2 .. x_i \\\\in x, that all map to z_k (and even potentially in the limit that all points in x map to the same point in z space).\\u201d\\n\\nTheorem 1 analyzes which type of x-z mapping the AAE model will learn when it has achieved global optimality of its training objective. 
When global optimality is achieved, the discriminator will prevent all points in x from being mapped to the same point in z space because they must be indistinguishable from Gaussian. Moreover, different x will be encoded to different z for the decoder to best reconstruct them. (In practice, we have never observed encoder collapses in AAE that map different x to the same z, in contrast to VAE-variants).\\n\\n7) \\u201cthe model presented in this work is far from SOTA on sentiment style transfer benchmarks like Yelp.\\u201d\\n\\nOur model is trained in a fully unsupervised manner (no sentiment labels are provided during training), and at test time it can perform various style transfers by adding simple fixed offset vectors. We agree that our model is less powerful than SOTA sentiment transfer models that are specifically trained with labeled data. We will add a note for this, also highlighting that the unsupervised task our model is used for is more challenging. Note also that the term \\u201cunsupervised\\u201d has different senses in the literature pertaining to sentiment transfer. E.g., the method of Yang et al. view the task as \\u201cunsupervised\\u201d even though sentiment-information is used during training.\", \"references\": \"C\\u00edfka et al. (2018). \\u201cEval all, trust a few, do wrong to none: Comparing sentence generation models\\u201d. https://arxiv.org/pdf/1804.07972.pdf\\n\\nShao et al. (2017). \\u201cThe Riemannian Geometry of Deep Generative Models\\u201d. https://arxiv.org/pdf/1711.08014.pdf \\n\\nSubramanian et al. (2018). \\u201cTowards Text Generation with Adversarially Learned Neural Outlines\\u201d. https://papers.nips.cc/paper/7983-towards-text-generation-with-adversarially-learned-neural-outlines.pdf \\n\\nYang et al. (2019). \\u201cUnsupervised Text Style Transfer using Language Models as Discriminators\\u201d. https://arxiv.org/pdf/1805.11749.pdf\"}", "{\"title\": \"Response to Reviewer#3\", \"comment\": \"We thank the reviewer for the feedback and comments. We address each of them in turn and will make the corresponding clarifications in the paper.\\n\\n1) \\u201cwhy input space noise is better than latent space noise? Poole et al 2014 [1] showed that additive latent space gaussian noise in autoencoders is equivalent to a contractive autoencoder penalty and contractive autoencoders have an *explicit* penalty to encourage minimal change in z when changing x (i.e.) penalizing the norm of ||dz/dx||.\\u201d\\n\\nPoole et al. (2014) studied continuous x, their autoencoder was a simple one-layer network with tied weight matrix, and their loss was squared reconstruction error. Namely, their encoder was h=f(Wx) with a single element-wise non-linearity, and the decoder was linear x_hat = W\\u2019h. In this setting, they showed that adding noise to h with a variance according to the encoder Jacobian can recover the contractive Jacobian norm regularization penalty. This relies heavily on the linearity of reconstruction, squared error and continuity of x, and does not really apply to our case where the encoder/decoder are complex models and x is discrete.\\n\\nWhen x is discrete and the encoder is complex, adding Gaussian noise to latent encodings no longer connects back to simple changes in the input text. Instead, denoising can directly control the latent encodings of perturbed versions of x by asking them to decode back to the original x. 
As our theorems show, it is advantageous for these latent vectors to geometrically concentrate so as to help a z-continuous decoder map these sets to common targets. This also encourages the decoder to treat the perturbations as related and share some of the generative process.\n\n“Was the LAAE implemented in the same framework as your DAAE?”\nYes, we implemented all models in the same framework, just with different objectives.\n\n2) “Forward / Reverse PPL results on bigger datasets like the BookCorpus or WMT”\n\nWe conducted extensive experiments on the benchmark datasets of text autoencoders to thoroughly investigate their generation-reconstruction trade-off. We agree with the reviewer that better performance for text manipulation can be achieved by training larger models on larger datasets. We would like to investigate this in future work.\n\n3) “You may be able to get similar reconstruction vs sample quality trade-offs with ARAEs by varying the variance of the gaussian noise, similar to LAAEs.”\n\nThank you for the suggestion. We tried injecting latent noise into ARAE, but its generation-reconstruction trade-off curve is strictly worse than LAAE's, so we only included the original ARAE as in their paper. Cífka et al. (2018) also reported similar findings that AAE is superior to ARAE.\n\n4) “In Figure 3... How well would a vanilla autoencoder or vanilla DAE perform?”\n\nAE's recall rate is slightly lower than DAAE's, and DAE's recall rate is the highest among all models. We found that an untrained RNN encoder from random initialization has a good recall rate (the ranking is DAE > untrained encoder > DAAE > AE > the rest), and we suspect that SGD training of a vanilla AE towards only the reconstruction loss will not overturn this initial bias. Note that denoising still improves neighborhood preservation in this case.\n\nThat said, we believe that considering primarily generative models is a fair comparison and is most consistent with the main text. When a latent prior is included/enforced for generative purposes, the mappings learned by the model have different properties. We'll nevertheless include the figure that includes non-generative AE, DAE and the untrained encoder in the appendix.\n\n5) “gradient-based latent space walks”\n\nWe agree with the reviewer that using gradient-based latent space walks is an interesting setup. It is, however, a bit different from our goals. Our paper studies whether it is possible to learn simple latent space geometry that maps similar x to similar z and allows manipulating x via linear latent vector arithmetic. While taking gradient steps with respect to the decoder would enable complex non-linear interpolating trajectories (imposing implicit decoder geometry on the latent space), it wouldn't quantify whether a simple encoded latent space geometry was achieved. Even models with complex latent geometries may still be able to move from one sentence to another within a few gradient steps. The linked work of Shao et al. specifically uses these gradient steps to gauge the (manifold) geometry of the inputs themselves, whereas our analysis aims to gauge how well the latent space of different models reflects structure in the data space. Similarly, the cited work of Subramanian et al. 

uses these gradient steps as a way to manage poorly-structured latent spaces, whereas our goal is to analyze the geometry of the latent spaces, regardless of how poorly-structured they may be for certain models.\"}", "{\"title\": \"(continued)\", \"comment\": \"References:\\n\\nKim et al. (2018) Semi-Amortized Variational Autoencoders\\n\\nDevlin et al. (2018) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\\n\\nZhang et al. (2017) Understanding deep learning requires rethinking generalization\\n\\nNeyshabur et al. (2019) Towards understanding the role of over-parametrization in generalization of neural networks\\n\\nBelkin et al. (2018) Reconciling modern machine learning practice and the bias-variance trade-off\\n\\nVincent et al. (2008) Extracting and Composing Robust Features with Denoising Autoencoders\\n\\nBengio et al. (2013) Generalized Denoising Auto-Encoders as Generative Models\"}", "{\"title\": \"Response to Reviewer#2\", \"comment\": \"We thank the reviewer for the feedback. We would like to clarify that the focus of this paper is *controllable text generation*, and we study autoencoder based text generative models as a tool that can manipulate text via latent vector operations. Therefore, we conduct experiments on various text generation tasks, and we compare our method to latent variable generative models but not to DAEs which cannot be employed generatively. We will update our paper to make this point clear.\\n\\nThe reviewer raised concerns about the use of over-parameterized autoencoders for representation learning. We would like to note that for controllable text generation, we actually need flexible models that have high content fidelity after encoding-decoding, while enforcing the latent codes to be Gaussian. In our experiments, the dimension of the latent variable z is not large (128 for Yelp dataset and 256 for Yahoo dataset), but the encoder and decoder networks are large in order to achieve good performance (we used 1024 LSTM hidden dimension and 512 word embedding dimension, the same network size is used in Kim et al. (2018)). For all models, we observed that smaller networks produced worse empirical results. In fact, the top performing text representation models are heavily over-parameterized (Devlin et al., 2018). Extensive empirical observations on a wide range of tasks have shown that over-parameterization of deep neural networks does not lead to overfitting but rather improves generalization, a phenomenon that has attracted a lot of research interest (Zhang et al., 2017; Neyshabur et al., 2019). Furthermore, recent research suggests that what matters is not the raw capacity of the model, but rather the \\\"simplicity\\\" of the learned function, and that higher-capacity models may (counterintuitively) learn simpler functions in practice (Belkin et al., 2018).\\n\\nThis is why we theoretically analyze powerful encoder/decoder networks, as these are the text models used in practice. Note that our theory is *not* stating that the 1-1 mapping / memorization of AAE is problematic (we will update our paper to clarify this). In fact, we require an (approximately) 1-1 mapping in order to handle fine-grained controllable generation. Since there exist many such x-z mappings, the question we address is which type of mapping (with what geometric properties) will the autoencoder learn. 
Here, we are able to prove that in terms of global optimality, DAAE will specifically learn only the x-z mapping in which x-neighborhoods are preserved in the z-space, whereas AAE has no such guarantees and may learn an arbitrary x-z mapping while still achieving optimality of its training objective. The theoretical insights provided by our analysis are also reflected in practice: Sec 5.2 verified that, consistent with our theory, DAAE has the best neighborhood preservation property in the real-world case.\\n\\nWe\\u2019d like to emphasize that our main contribution is not only the proposed model, but also the theoretical & empirical analysis that this simple use of denoising is truly superior. While denoising is a well-accepted practice, there is little understanding of its impact. Previous analyses of denoising only drew intuitive connections to manifold learning and more robust representations (Vincent et al., 2008; Bengio et al., 2013). We first present rigorous mathematical explanations of how denoising induces geometric latent space organization in text autoencoders and why it is superior to no denoising. Our theory is verified empirically. Thanks to the improved latent space geometry by denoising, we have successfully achieved not only good perplexity results, but also respectable performance on controllable text generation tasks for the first time from completely unsupervised data, including style transfer via vector arithmetic and sentence interpolation via latent space traversal. These completely unsupervised applications in our paper are novel as well.\\n\\nFinally, we\\u2019d like to note that for downstream classification tasks, BERT-style models have an overwhelming advantage over approaches that encode sentences into a single vector. In contrast, for generative modeling, it is easy to impose a latent prior on single-vector representations and manipulate them, but it is much more difficult to impose a prior on and manipulate variable-length vector sequences. That\\u2019s why we didn\\u2019t use text autoencoders to compete in classification tasks. Our experiments focus on generation, and many important applications from summarization to style transfer are generation tasks.\\n\\n- add the last experiment into main text\\nWe will include this last experiment into the main text in the revised version of the paper that will be uploaded shortly.\\n\\nWe hope through the above discussions the reviewer can reassess this work. We will improve clarity in the paper, and we are always available for further discussions.\"}", "{\"title\": \"(continued)\", \"comment\": \"- Difference between word removal and word masking\\n\\nWord removal will remove words from the sentence. The resulting sentence has no placeholder for the removed words and is shorter in length. In contrast, word masking will replace words with a <mask> token and preserve sentence length. For example, for the sentence \\u201cwe had a very nice experience\\u201d, after removing \\u201cvery\\u201d it becomes \\u201cwe had a nice experience\\u201d, and after masking \\u201cvery\\u201d it becomes \\u201cwe had a <mask> nice experience\\u201d. \\n\\n- What is the language model used for forward and reverse ppl ?\\n\\nWe used a LSTM language model which has one layer, 1024 hidden dimension and 512 word embedding dimension.\", \"references\": \"Sch\\u00e4fer and Zimmermann (2006). Recurrent Neural Networks Are Universal Approximators. https://link.springer.com/chapter/10.1007/11840817_66 \\n\\nRadford et al. (2019). 
Language Models are Unsupervised Multitask Learners. https://openai.com/blog/better-language-models/\\n\\nChen et al. (2017). Variational Lossy Autoencoder. https://arxiv.org/abs/1611.02731\\n\\nvan den Oord et al. (2018). Neural Discrete Representation Learning. https://arxiv.org/abs/1711.00937\\n\\nDieng et al. (2018). Avoiding Latent Variable Collapse with Generative Skip Models. https://arxiv.org/abs/1807.04863\\n\\nRazavi et al. (2019). Preventing Posterior Collapse with delta-VAEs. https://openreview.net/pdf?id=BJe0Gn0cY7\\n\\nMueller et al. (2017). Sequence to better sequence: continuous revision of combinatorial structures. http://proceedings.mlr.press/v70/mueller17a.html\\n\\nZhang et al. (2017). Understanding deep learning requires rethinking generalization. https://arxiv.org/abs/1611.03530\\n \\nVincent et al. (2008). Extracting and Composing Robust Features with Denoising Autoencoders. https://www.cs.toronto.edu/~larocheh/publications/icml-2008-denoising-autoencoders.pdf \\n\\nBengio et al. (2013). Generalized Denoising Auto-Encoders as Generative Models. https://arxiv.org/pdf/1305.6663.pdf\"}", "{\"title\": \"Response to Reviewer#1\", \"comment\": \"We thank the reviewer for the feedback. We address each question in turn and will add the clarifications in the revision:\\n\\n- VAE with word dropout on the decoder side (Bowman et al., 2016)\\n\\nBowman et al. proposed to weaken VAE\\u2019s decoder by masking words on the decoder side to help alleviate its collapse issue. However, as the authors pointed out in their paper: \\u201cEven with the techniques described in the previous section, including the inputless decoder, we were unable to train models for which the kl divergence term of the cost function dominates the reconstruction term\\u201d. From Table 2 in their paper, we can see that the VAE with inputless decoder has a small KL term (15) and a large reconstruction loss (120-15=105), which indicates that the latent z encodes little information about x and it cannot do reconstruction well. We also tried it in our experiments. On the Yelp dataset, the best reconstruction BLEU it can achieve is 12.8 with word dropout rate=0.7 (our model has BLEU 84.3). Therefore, it is not suitable for text manipulations that require high content fidelity.\\n\\n- Theorem 3 gives an upper bound, but what does it show?\\n\\nWe will clarify that the goal of our analysis is not to compare the objective values of AAE vs DAAE. Instead, we want to analyze which x-z mapping the model will learn under the AAE and DAAE objective. With powerful encoder/decoder networks, AAE has no preference over different x-z mappings because they can all achieve the same optimal objective value (Thm 1). In contrast, DAAE prefers organized x-z mappings (that preserve local neighborhoods in the x-space) over disorganized ones, since organized mappings can achieve better objective values (Thm 2, 3). In conclusion, a well-trained DAAE is guaranteed to learn neighborhood-preserving latent representations, whereas even a perfectly-trained AAE model may learn latent representations whose geometry fails to reflect similarity in the x space. \\n\\nWe agree with the reviewer that we\\u2019d ideally like to derive both upper and lower bounds on the achievable DAAE objective value. 
Nevertheless, the upper bound provided by Thm 3 implies that an organized x-z mapping has a better achievable limit than a disorganized mapping, thus supporting our argument that the denoising criterion will encourage a better geometrically-organized latent space (Sec 5.2 empirically verified this theoretical conclusion).\\n\\n- Assumptions made by our theoretical analysis\", \"the_assumptions_we_made_are\": \"(1) an effectively trained discriminator ensures that the latent codes resemble samples from the prior;\\n(2) the encoder and decoder are high-capacity models that are universal approximators, with the constraint that\\n(3) the decoder is Lipschitz continuous in its continuous input z. \\n\\nFor (1): In all the experiments we did, training was very stable and the adversarial loss was kept at around -log 0.5, indicating that the latent codes were indistinguishable from the prior and this assumption holds empirically.\\nFor (2): Sch\\u00e4fer and Zimmermann (2006) have shown the universal approximation ability of RNNs, and nowadays most state of the art sequence models employ high-capacity autoregressive neural networks with tons of parameters (Radford et al., 2019). In fact, the empirical ability of neural decoders approximate arbitrary distributions is widely cited as a reason for posterior collapse in recurrent VAEs (Chen et al., 2017; van den Oord et al., 2018; Dieng et al., 2018; Razavi et al., 2019), and thus this assumption is supported by a wealth of recent literature.\\nFor (3): This is a very weak assumption (also made in Mueller et al., 2017), and it holds as long as the recurrent or attention weight matrices in RNN/Transformer have bounded norm, which is naturally encouraged by SGD training with early stopping and L2 regularization (Zhang et al., 2017).\\n\\n- To theoretically analyze other baselines\\n\\nOur analysis applies to AAE-based models where the latent prior is imposed by an adversarial discriminator. To analyze the latent space geometry of (beta-)VAE would require different techniques, which is not the focus of this paper.\\n\\nThere are many interesting open questions in this area, and we would like to emphasize that this is the first theoretical analysis of how denoising induces latent space organization in text autoencoders. Existing analyses of denoising autoencoders (Vincent et al., 2008; Bengio et al., 2013) informally argued that denoising can help learn data manifolds and extract more robust representations, but did not notice that a major benefit of denoising is that it encourages the preservation of data structure in the latent space (regardless if the data live on a manifold or not).\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper argues that adding noise to the inputs of an adversarial autoencoder for text improves the geometry of the learned latent space (in terms of mapping similar input sentences to nearby points in the latent space). 
The authors present a mathematical argument for why adding noise to the inputs would enforce latent space structure while a vanilla autoencoder would have no preference over x-z mappings.\\n\\nOverall, the paper addresses an important problem of improving autoencoder based generative models of text, presents a simple solution to do so and mathematically and empirically demonstrates its effectiveness. While I think that the benchmarks are somewhat artificial with small sentences and vocabulary sizes, I think the improvements demonstrated are substantial enough.\\n\\nI have a few questions & comments\\n\\n1) I\\u2019m curious about whether the authors have an intuition for why input space noise is better than latent space noise? Poole et al 2014 [1] showed that additive latent space gaussian noise in autoencoders is equivalent to a contractive autoencoder penalty and contractive autoencoders have an *explicit* penalty to encourage minimal change in z when changing x (i.e.) penalizing the norm of ||dz/dx||. Additive latent space noise appears to be a key ingredient to getting the ARAE and similar work like in Subramanian et al 2018 [2] to work. Was the LAAE implemented in the same framework as your DAAE?\\n2) It would be great to see Forward / Reverse PPL results on bigger datasets like the BookCorpus or WMT similar to [2].\\n3) You may be able to get similar reconstruction vs sample quality trade-offs with ARAEs by varying the variance of the gaussian noise, similar to LAAEs.\\n4) In Figure 3, I would really like to see how an autoencoder that isn\\u2019t a generative model performs. How well would a vanilla autoencoder or vanilla DAE perform? This is a cool setup to evaluate latent space representation quality - you could even consider running some of the SentEval probing tasks on these representations.\\n5) Could you use something like gradient-based latent space walks like in [2] to characterize the latent space geometry? https://arxiv.org/abs/1711.08014 also use similar gradient-based walks to characterize latent space smoothness in deep generative models. For example, if it takes 10 latent space gradient steps with a fixed learning rate for model \\u201ca\\u201d to turn sentence \\u201cx\\u201d into a *similar* sentence \\u201cy\\u201d but 20 steps for model \\u201cb\\u201d, then maybe \\u201ca\\u201d has smoother latent space geometry.\\n6) Theorem 1 - What if the set of zs isn\\u2019t unique and there is some sort of encoder collapse? Does this theorem still hold? (i.e.) there exists some set of points x_1, x_2 .. 
x_i \\\\in x, that all map to z_k (and even potentially in the limit that all points in x map to the same point in z space).\\n7) It would be good to point out that the model presented in this work is far from SOTA on sentiment style transfer benchmarks like Yelp.\\n\\n[1] Analyzing noise in autoencoders and deep networks - https://arxiv.org/pdf/1406.1831.pdf\\n[2] Towards Text Generation with Adversarially Learned Neural Outlines - https://papers.nips.cc/paper/7983-towards-text-generation-with-adversarially-learned-neural-outlines.pdf\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper \\\"Denoising Improves Latent Space Geometry in Text Autoencoders\\\" tackles the problem of text autoencoding in a space which respects text similarities. It is an interesting problem for which various attempts have been proposed, while still facing difficulties for encoding in smooth spaces. The paper proposes a simple (rather straightforward) approach based on adversarial learning, with some theoretical guarantees, which obtains good performances for reconstruction and neighborhood preservation.\\n\\nMy main concern is about the missing of comparison with word dropout with variational encoding [Bowman et al., 2016], which also considers perturbations of the input texts to enforce the decoder to use the latent space. While the authors cite this work, I cannot understand why they did not include it in their experiments. \\n\\nAlso, theorem 3 gives an upperbound of the achievable log-likelihood, which is \\\"substantially better when examples in the same cluster are mapped to to points in the latent space in a manner that is well-separated from encodings of other\\nclusters\\\". Ok but what does it show for the approach. If it was a lower-bound of the DAAE likelihood it would be interesting. But an upperbound ? In which sense does it indicate that it will be better than AAE ? Wouldn't it be possible to theoretically analyze other baselines? Also, all the theoretical analysis is made based on strong assumptions. Are these verified on considered datasets?\", \"minor_questions\": [\"In introduction of the experiments section, authors mention that they tried word removal and word masking. What is the difference ?\", \"what is the language model used for forward and reverse ppl ?\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presented a denoising adversarial autoencoder for sentence embeddings. The idea is that by introducing perturbations (word omissions, etc) the embeddings are more meaningful and less \\\"memorized\\\". Evaluations include measuring sentence perplexity in generation/reconstruction, tense changing via vector arithmetic, sentiment changes via negative/positive vector additions, and sentence interpolations.\", \"strengths\": \"I thought the idea is nice, and the results do seem to show improvements in a number of interesting tasks.\", \"weaknesses\": \"I don't really think the explanation, especially in Theorem 1, makes a lot of sense. 
Qualitatively speaking, it's true that \\\"memorization\\\" in autoencoders (where the latent space has a 1-1 mapping with the input space) is problematic when the autoencoders are too powerful, but it is not always the case, and it is too far to say that the probability in theorem 1 is ALWAYS agnostic to encoding. The fact is word2vec works just fine with no perturbations, and there is no mathematical reason why sentence embeddings are fundamentally different. What is more accurate to say is that there is a tradeoff between model complexity and latent space representation usefulness, which is also related to the regularization/overfitting tradeoff in supervised learning. Here, injecting noise in the exact same fashion proposed in this paper is a well-accepted practice. While I think it's interesting that it works well here, I wouldn't really frame it as such a novelty, in that case, and I believe the other works on denoising autoencoders should be compared against in the experiments. In general, I find the mathematical claims a bit dubious, as a main assumption seems to be that the autoencoder itself is so overparametrized that it isn't really functioning as a representation-learning tool anyway.\\n\\nI also feel that the last experiment (referenced in the appendix) needs to go in the main text if we're to see it as a contribution. There is some wording that can be tightened in the main text, to make more room. \\n\\nOverall, I would improve my rating if the paper refocused more on the experiments, included more baselines (like other denoising autoencoders) and tasks that measure something besides perplexity (such as actual sentiment prediction, or machine translation, or other somewhat unrelated downstream tasks), and decreased the emphasis on the theoretical analysis--unless there is something I am significantly misunderstanding, it does not seem to be a particularly powerful theoretical contribution.\"}" ] }
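A recurring quantity in the thread above is the neighborhood-preservation ("recall rate") comparison between input space and latent space. A minimal sketch of such a metric follows, as an illustrative reconstruction rather than the authors' evaluation code; `x_feats` can be any input-space representation of the sentences (e.g., bag-of-words counts), which is an assumption of this sketch:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_recall(x_feats, z_codes, k=10):
    # Fraction of each point's k nearest neighbors in input space that are
    # recovered among its k nearest neighbors in latent space; higher means
    # the encoder better preserves input-space neighborhoods.
    idx_x = NearestNeighbors(n_neighbors=k + 1).fit(x_feats).kneighbors(x_feats)[1]
    idx_z = NearestNeighbors(n_neighbors=k + 1).fit(z_codes).kneighbors(z_codes)[1]
    recalls = [len(set(a[1:]) & set(b[1:])) / k      # drop the self-match
               for a, b in zip(idx_x, idx_z)]
    return float(np.mean(recalls))
```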
Skeh-xBYDH
On Symmetry and Initialization for Neural Networks
[ "Ido Nachum", "Amir Yehudayoff" ]
This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We empirically verify this and show that this does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
[ "Neural Network Theory", "Symmetry" ]
Reject
https://openreview.net/pdf?id=Skeh-xBYDH
https://openreview.net/forum?id=Skeh-xBYDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "CjzLx08ckf", "rklvJlN3jB", "H1eCE4LosB", "BJgezvzMsH", "BkewxGfGsH", "rJxskC-Mor", "HklNwhbMsB", "Bkg_Pp2rcB", "H1gUil1TYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1576798741828, 1573826526568, 1573770294112, 1573164807805, 1573163503308, 1573162467166, 1573162075696, 1572355424287, 1571774622049 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2150/Authors" ], [ "ICLR.cc/2020/Conference/Paper2150/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2150/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2150/Authors" ], [ "ICLR.cc/2020/Conference/Paper2150/Authors" ], [ "ICLR.cc/2020/Conference/Paper2150/Authors" ], [ "ICLR.cc/2020/Conference/Paper2150/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2150/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The two main concerns raised by reviewers is that whether the results are significant, and a potential issue in the proof. While the rebuttal clarified some steps in the proof, the main concerns about the significance remain. The authors are encouraged to make this significance more clear.\\n\\nNote that one reviewer argued theoretical papers are not suitable for ICLR. This is false, as a theoretical understanding of neural networks remains a key research area that is of wide interest to the community. Consequently, this review was not considered in the final evaluation.\", \"title\": \"Paper Decision\"}", "{\"title\": \"A final comment to all reviewers\", \"comment\": \"We uploaded a revised version of the paper.\\n1. We modified the representations of the symmetric functions to the simpler representations suggested by Reviewer#2.\\n2. We modified some phrasing in the proof of Theorem 1, to make it clearer.\\n\\nRecently, we came to know this work: https://arxiv.org/abs/1910.06956 and some several works that followed it.\\nIt studies the behavior of neural networks when dealing with an infinite width network while only considering the training with continuous GD and its empirical error. In essence, these works study training of neural networks where the embedding of the original space by the network does not change with time much. These works attracted considerable attention. So it seems we don't have enough understanding of the dynamics in these regimes yet.\", \"our_work_is_an_example_of_the_above_phenomenon_and_studied_in_full_detail\": \"finite network, discrete SGD on all layers, polytime, and generalization guarantees. Additionally, Lemma 4 gives the tools to study the training process of a finite width neural network performing discrete SGD (not only the special initialization we suggested). Also, Lemma 4 can enable to derive some generalization guarantees.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for addressing my comments. I did have the same issue as the other reviewer. Would be useful to make it explicit in the proof. As for the overall class of functions the paper is trying to learn, I'm still not convinced it gives us any new insight on how neural network training works since post initialization it is equivalent to learning a linear classifier. It would help the paper to extend the techniques developed to other more complex classes.\"}", "{\"title\": \"Response to your response\", \"comment\": \"Sorry guys! 
After reading the comments from other reviewers, I realised that I totally missed your points. I will read your paper again and resubmit my review.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your time reading our work and for your feedback. Below, we try to clarify your concerns and we\\u2019ll be happy to hear your perspective again after you read our response.\\n\\nYour main issue is similar to Review #3, so please also refer to our comments there.\\n\\n\\u201cIt is unclear why this setup warrants the use of a neural network for training.\\u201d\\n\\nIt does not warrant it. The class is easily learnable with no special algorithm. Once you receive an input of a given weight you immediately know the label of all vectors with the same weight. It is just a matter of collecting enough vectors with different weights.\\n\\nWe study this problem to have a fine-grained analysis of neural networks for a specific class of functions. Such analysis does not appear in the literature, as far as we know. Notice that this theoretical result demonstrates that although the neural network is over parametrized, by using SGD it still generalizes well.\\n\\nFrom an empirical and theoretical perspective, it is surprising that such a simple class of functions is not learnable by standard neural networks.\\n\\n\\u201cThe class of symmetric boolean functions can be modeled as a univariate function by mapping x -> |x| which is easy to solve in the no noise setting analyzed by the paper.\\u201d \\n\\nPotentially, this mapping will not work in the noisy case. We show empirically that our initialization works in two different noisy cases.\\n\\n\\u201cWriting - Proofs are mostly clear however it would help to add more details in the proof of the main theorem (especially to argue about the use of the Perceptron convergence theorem for the changing representations). \\u201c\\n\\nAre you maybe referring to the same issue about Lemma 4 Review#3 had? Maybe our response can clarify things. Indeed, Lemma 4 has a bit of a tricky proof. \\nIf not, can you specifically refer us to arguments/sentences that were not clear?\\n\\n \\u201cRegarding the representation for indicators using ReLUs, one could use a simpler and more standard representation\\u2026\\u201d\\n\\nThis is indeed a simpler representation and we can use it. We didn\\u2019t use it because we simply were not aware of it. We just used the representation that we naturally found ourselves. Thank you for introducing it to us.\\n\\n\\u201cThe number of epochs seem to be varying in the experiments, please make that consistent.\\u201d\\n\\nThe focus of this work is the number of SGD updates. So if the number of samples is different for every case (n, n^2, n^3, n^4), it is only natural to have fewer epochs for larger datasets.\\n\\n\\u201cLastly, the important plots need to be moved to the main paper.\\u201d\\n\\nWe agree. They were put there because of the page limitation.\\n\\n\\u201cAre the experiments for multiple runs or just a single run?\\u201d\\n\\nA single run, that represents all other runs we experimented with while working in this setting. We observed the same behavior on all of our runs.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"\\u201cIn my opinion, this paper doesn't fit to the main interests of ICLR. 
Therefore, I didn't spend too much time reviewing this paper and suggest for rejection.\\u201d\\n\\nWe would be happy if you can elaborate on why a theoretical result on neural networks is not relevant.\\nIn the past, it seems that ICLR accepted various theoretical results on neural networks.\\nCan you suggest another venue that fits this paper?\\n\\n\\u201cIt could be potentially very interesting if the authors provide also results with other initialisation methods.\\u201d\\n\\nWe\\u2019ll be happy to hear about other initialization methods you are interested in. Care to elaborate? \\nWe have used the standard initialization that is mostly used and studied theoretically.\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your time reading our work and for your feedback. Below, we try to clarify your concerns and we\\u2019ll be happy to hear your perspective again after you read our response.\\n\\nRegarding the results\\u2019 significance, we are not aware of any previous result that proves that a class of functions is PAC learnable using neural networks and SGD and considers all real-life elements of the training and generalization. For example, training all layers simultaneously, working with a fixed dataset, considering generalization and not just the training error, etc. We state the differences between our work and others\\u2019 in the Related Work section. \\n\\nAs a fine-grained analysis of neural networks (such as appears in the paper) should start somewhere, working with the simple class of symmetric functions is maybe a good start. It wasn\\u2019t the point to learn symmetric functions but to understand better the dynamics of training neural networks. Our analysis of SGD shows that it has an optimal sample complexity (in terms of the VC-dimension of the class of symmetric functions) and that SGD is also efficient in time and memory. \\n\\nAlso, Theorem 1 demonstrates that although the network has much more parameters \\\\Omega(n^2) than samples O(n), it is still able to generalize well (even when all weights are allowed to change). A phenomenon that happens in practice and not well explained.\\n\\nWe also find the empirical evidence surprising that a neural network cannot learn a random symmetric function from a random initialization, although these functions have a lot of structure and are easy to learn. This suggests another line of research; why these simple functions are hard to learn? \\n\\nFinally, Lemma 4 holds in general. It is not specific for symmetric functions and our initialization. It can be of theoretical interest independently of the paper.\", \"from_a_practical_perspective\": \"1.\\tEmpirically, we show that our initialization is robust to different types of noise. This suggests that our initialization can maybe help in cases where the function we learn is \\u201chighly symmetric\\u201d but not entirely.\\n2.\\tLemma 4 suggests that if during training the neural network found a good embedding of the original space and the weights of the output neuron are not too large, then running SGD will converge to a network with a small empirical error that also generalizes well. This suggests a practical idea to regularize only the weights of the output neuron. 
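In PyTorch-style pseudocode, that output-layer-only regularization might look as follows (a sketch of the idea only; `net.hidden` and `net.output` are hypothetical module names, not code from the paper):

```python
import torch

def make_optimizer(net, lr=0.1, wd=1e-3):
    # Apply weight decay only to the output neuron's weights; the
    # hidden-layer embedding is left unregularized.
    return torch.optim.SGD(
        [
            {"params": net.hidden.parameters(), "weight_decay": 0.0},
            {"params": net.output.parameters(), "weight_decay": wd},
        ],
        lr=lr,
    )
```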
Doing this will make sure the network will not “take a pass” on a good embedding of the original space.\n\nWe didn't add these points, as they sidetrack the theoretical analysis and the purpose of the paper.\n\nRegarding the other comments:\n\n“I could not follow one step in the proof of Lemma 4 (used to show that SGD does not move the weights too far from the initialization). Why does Theorem 2 imply that the number of updates is at most $20R^2/\gamma^2$? In Theorem 2, $R$ is fixed whereas in Lemma 4 it varies with $t$. To me this seems important, since without a bound on the number of steps it is unclear how you can control how far the embeddings move.”\n\nThis is indeed important and is taken into consideration in the paper. Let us clarify: we first prove that if at most $20R^2/\gamma^2$ updates were made, then the norm of every $v$ is at most $2R$ while a margin of $0.9\gamma$ is kept. Now we can use Theorem 2, which bounds the number of possible updates only in terms of the margin ($0.9\gamma$) and the maximal norm ($2R$), independently of the size of the dataset.\n\n“In the statement of Lemma 4, linear separability of V should be with respect to some fixed partition Y?”\n\nYes. X is partitioned, and this induces a partition of V. We assume that V is linearly separable at the beginning and, under the right conditions, it stays this way (although V changes).\n\n“First, I think it would be helpful to the reader if the authors could make this intuition more explicit. In the submission the authors do not give much explanation for the choice of initialization.”\n\nYes, the purpose of the initialization is to embed the original space of binary vectors in such a way that every symmetric function linearly partitions the embedded space.\n\n“In Figure 5, why is empirical error not decreasing over epochs?”\n\nThe empirical error does not decrease because we have more samples (n^4) than parameters (O(n^2)), so the optimization problem is now hard: the bigger the sample, the harder it is for the network to fit the data. You can observe this gradual increase in hardness as the sample grows.\n\n“I think the figures referenced in the text should be in the paper, not the appendix.”\n\nWe agree. They were put there because of the page restriction.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"PAPER SUMMARY: This paper studies the problem of training a single hidden layer neural network to represent an arbitrary symmetric function. These are functions $f : \{0,1\}^n \to \{-1, 1\}$ which are invariant to permutations in the input coordinates. The authors' main result (Theorem 1) shows that if you take a single hidden layer network with $O(n)$ hidden units and initialize the weights in a particular way, then for any symmetric $f$, SGD training will converge to an empirical risk minimizer with guaranteed small generalization error. On the other hand, the authors' experiments suggest that arbitrary symmetric functions are not learnable from random initialization. 

Taken together, these results point to the importance of designing network architectures/ initializations that respect the structure in the function class you're trying to represent.\", \"review_summary\": \"I lean towards rejecting this paper however, because I am not convinced of the results' significance. We already know how to learn symmetric functions (see Exercise 3.26 in Mohri et al., 2018). The authors' results show that we can inject this knowledge into a neural network at initialization, and then run SGD without making things too much worse. I do not see how these ideas might apply to more substantial learning problems where our prior knowledge is less precise. Moreover, while the proofs are clearly presented overall, I have one concern with a key step in Lemma 4.\", \"major_comments\": \"1) The key property of symmetric functions is that their output depends only on $|x|$. Thus, one can first extract \\\"cardinality features\\\" $x \\\\mapsto |x|$, after which learnability follows by standard generalization theory results (as the authors note in the proof of Theorem 1).\\n\\nThe basic idea of Theorem 1 then seems to be to realize this feature map as the hidden layer of a single hidden layer ReLU network (this is essentially what the initialization does) and then show that running SGD will not move the weights too far from the initialization (Lemma 4).\\n\\n(a) First, I think it would be helpful to the reader if the authors could make this intuition more explicit. In the submission the authors do not give much explanation for the choice of initialization.\\n\\n(b) Second, because this is a learning problem we already know how to solve, the results seems a little contrived. I do not see how these ideas could extend to more challenging cases where our prior knowledge of symmetry (e.g. translation invariance) does not by itself lead to an algorithm with efficient learnability guarantees.\\n\\n2) I could not follow one step in the proof of Lemma 4 (used to show that SGD does not move the weights too far from the initialization). Why does Theorem 2 imply that the number of updates is at most $20 R^2 / \\\\gamma^2$? In Theorem 2, $R$ is fixed whereas in Lemma 4 it varies with $t$. To me this seems important, since without a bound on the number of steps it is unclear how you can control how far the embeddings move.\\n\\nMINOR COMMENTS\\n\\n3) In the statement of Lemma 4, linear separability of $V$ should be with respect to some fixed partition $Y$?.\\n\\n4) In Figure 5, why is empirical error not decreasing over epochs?\\n\\n5) I think the figures referenced in the text should be in the paper, not the appendix.\\n\\nMohri, M., Rostamizadeh, A., & Talwalkar, A. (2018). Foundations of machine learning. MIT press.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper studies the problem of learning the class of symmetric boolean function, that is, functions that depend only on |x| = \\\\sum_i x_i. The paper shows that with proper initialization, one-hidden layer over-parametrized networks can learn this class of functions. 
The main observation that the authors make is that the last layer weights are updated as in the Perceptron algorithm and that, as long as the first layer has learned a large-margin representation, the first-layer weights do not change much. The authors experimentally validate their theory and additionally show that random initialization fails to converge to a low test-error solution while their special initialization works.\n\nOverall, the main complexity arises from handling the training of both layers, and this is cleverly analyzed. However, I am leaning towards rejection as the underlying problem does not seem well-motivated. The class of symmetric boolean functions can be modeled as a univariate function by mapping x -> |x|, which is easy to solve in the noise-free setting analyzed by the paper. Also, in terms of learning with neural networks, as the authors point out, one can learn this class by training only the last layer. It is unclear why this setup warrants the use of a neural network for training. The problem would be more challenging and interesting for the class of symmetric (permutation invariant) functions on the real domain, where using symmetry in the architecture/initialization can potentially give gains.\n\nWriting - The proofs are mostly clear; however, it would help to add more details in the proof of the main theorem (especially to argue about the use of the Perceptron convergence theorem for the changing representations). Also, the introduction needs to further motivate the setup and its relevance to neural networks.\n\nRepresentation - Regarding the representation of indicators using ReLUs, one could use a simpler and more standard representation. In prior work the indicator is represented using a difference of ReLUs, 1[|x| >= i] = ReLU(|x| - i + 1) - ReLU(|x| - i), and hence 1[|x| = i] = 1[|x| >= i] - 1[|x| >= i+1] = ReLU(|x| - i + 1) - 2 ReLU(|x| - i) + ReLU(|x| - i - 1). One can then express \sum_{i \in A} 1[|x| = i] by summing up these indicators, and adding a bias term of -0.5 makes the sign equal the correct label. Note that this would overall require only n + 2 hidden units, with the weights now bounded by constants, and it would still have a margin of \Omega(1/n). Is there a particular reason for the choice of representation in the paper?\n\nExperiments - The plots are hard to parse and inconsistent. Firstly, it would be better to use line plots instead of scatter plots to highlight the trend. Secondly, the x-axis needs to be sampled more frequently. The number of epochs seems to vary across the experiments; please make that consistent. Lastly, the important plots need to be moved to the main paper. Are the experiments for multiple runs or just a single run?\"}" ] }
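The ReLU representation proposed in Review #2 above can be checked numerically. A minimal NumPy sketch follows (each ReLU term plays the role of one hidden unit whose input weights are all ones; `n` and the accepting set `A` are arbitrary choices made for the test, not values from the paper):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def symmetric_net(x, A):
    # ReLU(|x| - i + 1) - 2*ReLU(|x| - i) + ReLU(|x| - i - 1) equals
    # 1[|x| = i] for integer |x|; summing over the accepting set A and
    # subtracting the 0.5 bias makes the sign of the output the label.
    m = x.sum()
    out = sum(relu(m - i + 1) - 2 * relu(m - i) + relu(m - i - 1) for i in A)
    return 1 if out - 0.5 > 0 else -1

# Check against a random symmetric function f(x) = 1 iff |x| is in A.
n, rng = 8, np.random.default_rng(0)
A = {i for i in range(n + 1) if rng.random() < 0.5}
for _ in range(1000):
    x = rng.integers(0, 2, size=n)
    assert symmetric_net(x, A) == (1 if int(x.sum()) in A else -1)
```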
HkgsWxrtPB
Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies
[ "Sungryull Sohn", "Hyunjae Woo", "Jongwook Choi", "Honglak Lee" ]
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over few episodes during adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta RL and hierarchical RL methods.
[ "Meta reinforcement learning", "subtask graph" ]
Accept (Poster)
https://openreview.net/pdf?id=HkgsWxrtPB
https://openreview.net/forum?id=HkgsWxrtPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "ubtOjZsiZ", "rJlWTrU2oH", "r1g4ssfhjB", "SyxbwjGnsH", "HyxG49f2jB", "HJgCe5G2iB", "r1l0Qu3nKS", "S1lrmJG2tH", "B1eAkDUuKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741800, 1573836217329, 1573821340058, 1573821273061, 1573820969972, 1573820918276, 1571764261973, 1571720989119, 1571477221617 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2149/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2149/Authors" ], [ "ICLR.cc/2020/Conference/Paper2149/Authors" ], [ "ICLR.cc/2020/Conference/Paper2149/Authors" ], [ "ICLR.cc/2020/Conference/Paper2149/Authors" ], [ "ICLR.cc/2020/Conference/Paper2149/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2149/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2149/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This work formulates and tackles a few-shot RL problem called subtask graph inference, where hierarchical tasks are characterized by a graph describing all subtasks and their dependencies. In other words, each task consists of multiple subtasks and completing a subtask provides a reward. The authors propose a meta-RL approach to meta-train a policy that infers the subtask graph from any new task data in a few shots. Empirical experiments are performed on different domains, including Startcraft II, highlighting the efficiency and scalability of the proposed approach.\\n\\nMost concerns of reviewers were addressed in the rebuttal. The main remaining concerns about this work are that it is mainly an extension of Sohn et al. (2018), making the contribution somewhat incremental, and that its applicability is limited to problems where subtasks are provided. However, all reviewers being positive about this paper, I would still recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Post-rebuttal comments\", \"comment\": \"Thank you for the detailed response. The authors addressed all of my questions and updated the paper accordingly. I thus confirm my initial view and vote for acceptance.\"}", "{\"title\": \"Comments to AnonReviewer 2\", \"comment\": \"We appreciate the reviewer for the constructive and helpful comments.\\n\\n>>> 1. \\u201cWhen/how are the number of remaining time-steps and episodes used?\\u201d\\nA) Intuitively, the agent will only execute the subtasks that can be executed within the remaining time step to get the reward, since the agent receives the reward corresponding to each subtask only if it \\u201cfinishes\\u201d the subtask before episode terminates. Also, these two time features (i.e., remaining time-steps and episodes) need to be given to the agent for time-awareness [1] when the MDP is of finite-horizon. In our implementation, these time features were given to the policy as additional inputs.\\n\\n>>> 2. \\u201cIt would be good to add more background/details about the previous works (e.g., [Sohn et al., 2018]) that this paper relies on (at least in the supplementary).\\u201d\\nA) In Appendix A, we added more details of backgrounds in [Sohn et al., 2018] that are relevant to this paper. We added an explicit reference to this in Section 2.\\n\\n>>> 3-1. 
\\u201cIn the SC2 experiment, what is the difference between MSGI-meta and MSGI-GRProp?\\u201d\\nA) MSGI-meta uses a meta-learned policy as an adaptation policy, while MSGI-GRProp uses GRProp policy. Motivation of MSGI-GRProp: GRProp is a good approximation/heuristic algorithm that works well without meta-training as shown in [Sohn et al., 2018]. We have made this motivation more clear in Section 5.2 as follows: \\u201c... Instead of MSGI-Meta, we used MSGI-GRProp. MSGI-GRProp uses the GRProp policy as an adaptation policy, since GRProp is a good approximation algorithm that works well without meta-training as shown in (Sohn et al., 2018).\\u201d\\n\\n>>> 3-2. \\u201cWhere is the \\\"oracle\\\" baseline (introduced in sec. 5) used in the experiments?\\u201d\\nA) GRProp+Oracle was used as the upper bound for performance normalization of all the agents. So, in Figure 4 and 5, $\\\\widehat{R}=1$ corresponds to the performance of GRProp+Oracle. We made it more clear in the comment of Figure 5 by adding \\u201cThe performance of each method was normalized where GRProp+Oracle is the upper bound (i.e., $\\\\widehat{R}=1$) and Random is the lower bound (i.e., $\\\\widehat{R}=0$).\\u201d\\n\\n>>> 4-1. \\u201cThe limitation of assuming options to be given can be stated much earlier.\\u201d\\nA) In the footnote 1 (in Section 2.2) of the submission, we stated that we assume options are pre-learned. We will move this to the main text to make this more clear. \\n\\n>>> 4-2. Can we learn options and subtasks instead of assuming that it is given?\\nWe believe this is possible but a highly non-trivial problem. One possible way is to use the option discovery methods [2, 3]. Consider the option that executes a subtask. The completion and eligibility set corresponds to the \\u201ctermination condition\\u201d and \\u201cinitiation set\\u201d in the option framework, which can be learned [4]. It is however not directly applicable to our framework since the ILP module requires the perfect (i.e., noise-free) completion and eligibility input, which is hard to achieve with the learned eligibility and completion predictor. Thus we leave it as a future work.\", \"minor_typos\": \"A) Thank you for finding the typos. We corrected the typo in Section 2.1 and added the missing definitions in Section 2.2.\", \"references\": \"[1] Pardo, Fabio, et al. Time limits in reinforcement learning. arXiv, 2017. https://arxiv.org/abs/1712.00378\\n[2] Krishnan et al., DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations, CoRL 2017. https://arxiv.org/abs/1710.05421\\n[3] Ramesh et al., Successor Options: An Option Discovery Framework for Reinforcement Learning https://arxiv.org/abs/1905.05731\\n[4] Harutyunyan et al., The Termination Critic, https://arxiv.org/abs/1902.09996\"}", "{\"title\": \"Comments to AnonReviewer 3\", \"comment\": \"We appreciate the reviewer for the positive and valuable comments.\\n\\n>>> \\u201cIt is not clear if the authors combined existing techniques and/or if they invented a new one.\\u201d\\nA) Please note that our proposed approach in terms of both high-level idea and technical details is a non-trivial solution to solve a very challenging problem (e.g., inferring unknown subtask dependencies) that is relevant to real-world applications. Our MSGI model consists of adaptation policy, inductive logic programming (ILP) module, and test policy, and they operate in a single meta-learning framework. 
We used the existing CART with Gini impurity (Breiman et al., 2017) for our ILP module (see Figure 2) and the GRProp policy (Sohn et al., 2018) for test policy. Rather than in individual sub-modules, our main contribution lies in the whole MSGI framework, where we propose 1) to use a separate policy for adaptation and test phase to enable the agent to explore during adaptation and exploit during testing and 2) to use an ILP method to efficiently infer the task parameter (i.e., subtask graph) for faster adaptation. The list of contributions other than the model are summarized in the last paragraph of Section 1.\\n\\n>>> \\u201cWhat is the big difference from the work by Sohn et al. (2018)?\\u201d\\nA) Sohn et al. (2018) require subtask graphs to be explicitly given to the agent to solve the task. However, our MSGI algorithm can solve the task without the requirement of subtask graph being given; instead, our method can \\u201cinfer\\u201d the unknown subtask graph (per task) from the agent\\u2019s experience during adaptation. To make it even more clear, we clarified the difference in Section 2.2 as follows: \\u201cOur problem extends the subtask graph execution problem in (Sohn et al., 2018) by removing the assumption that a subtask graph is given to the agent; thus, the agent must infer the subtask graph in order to perform the complex task.\\u201d\\n\\n>>> \\u201cThe authors evaluated one agent. It would have been better if authors trained multiple agents and showed a performance distribution (Fig 5.).\\u201d\\nA) In fact, we reported the performance distribution over multiple runs in Figure 5. Specifically, we reported the mean (solid line) and standard error (shaded area) of the performance over 5 random seeds, 500 tasks (i.e., subtask graphs), and 32 test episodes. We note that the standard error (shaded area) is quite small in Figure 5, since we measured the performance over the large number of runs.\", \"presentation\": \">>> \\u201cFigure 3 does not give a description of the subtask graph (middle) and the StarCraft II.\\u201d\\nA) We added more description of the subtask graph and the task of SC2 in the caption of Figure 3 as follows: \\u201c...The goal is to execute subtasks in the optimal order to maximize the reward within time budget. The subtask graph describes subtasks with the corresponding rewards (e.g., transforming a chest gives 0.1 reward) and dependencies between subtasks through AND and OR nodes. For instance, the agent should first transform chest AND transform diamond before executing pick up duck.\\u201d\\n\\n>>> \\u201cSection 5.1.2 does not clearly explain the different datasets D1-D5 of Playground.\\u201d\\nA) We added more detail about the Playground and Mining domain including explanation about D1-D4 and Eval datasets in Appendix C. We also added a pointer to Appendix C in Section 5.1.2. Thank you for pointing this out.\"}", "{\"title\": \"Comments to AnonReviewer 1 (Cont'd)\", \"comment\": \">>> \\u201cIt would be interesting to see how/if MSGI can perform in widely used meta-RL benchmarks in Mujoco\\u201d --- MSGI on the standard tasks without hierarchy?\\nA) In the standard benchmark tasks without hierarchy (i.e., single-goal), the performance of MSGI solely depends on the performance of the option since the task consists of a single subtask. Because our work assumes that the option is given, the problem becomes trivial; thus, we did not include the standard tasks without hierarchy in this paper. 
We instead focus on the complex hierarchical tasks that existing HRL and meta-RL methods cannot solve efficiently.\\n\\n>>> \\u201cAs for the results, the authors don't provide an ablation study on the UCB exploration bonus though they claim they would show it in the paper.\\u201d\\nA) We added the ablation study result in Appendix H. Figure 17 shows that UCB exploration bonus term helps meta-training in Playground and Mining domain. Thanks for pointing this out.\\n\\n>>> \\u201cMoreover, the result of GRProp+Oracle is also missing in the comparison\\u201d\\nA) GRProp+Oracle was used as the upper bound for performance normalization of all the agents (Section 5.1). Therefore, in Figure 4 and 5, $\\\\widehat{R}=1$ corresponds to the performance of GRProp+Oracle. To avoid confusion, we made it more clear in the comment of Figure 5 by adding \\u201cThe performance of each method was normalized where GRProp+Oracle is the upper bound (i.e., $\\\\widehat{R}=1$) and Random is the lower bound (i.e., $\\\\widehat{R}=0$).\\u201d\\n\\n>>> \\u201cThe authors also introduce MSGI-GRProp in this setting, which is never discussed before, and claim that MSGI-GRProp can successfully generalize to new tasks.\\u201d\\nA) Our MSGI agent consists of two policies: adaptation policy and test policy. MSGI-GRProp uses the GRProp [2] policy as an adaptation policy (instead of meta-learned adaptation policy) while other parts are unchanged. We made it more clear in Section 5.2.\", \"references\": \"[1] Kolve et al., AI2-THOR: An Interactive 3D Environment for Visual AI, ArXiv, 2017\"}", "{\"title\": \"Comments to AnonReviewer 1\", \"comment\": \"We appreciate the reviewer for their positive evaluation and detailed, constructive comments. We have updated the draft to address some concerns, and below we answer to the questions in more detail.\\n\\n>>> \\u201cWhy MSGI-Meta and RL^2 would overfit in the SC2LE case and are unable to adapt to new tasks. Is that a limitation of the method?\\u201d\\nA) The subtask graph structure (i.e., the Tech Tree) in SC2LE is fixed, as it is the inherent design of the game. For instance, Marine can only be built from Barrack and this remains fixed across the tasks. This limits the variation among the tasks, and the training and testing tasks in terms of the subtask graph structure become identical. In this case, the meta-learning method can achieve (near-) optimal performance by simply overfitting to the training tasks (i.e., memorizing the optimal sequence of options in training tasks) without any generalization over different subtask structures. Thus, this is not a limitation of our method, but a limitation of SC2LE domain in designing diverse tasks for meta-training.\\n\\n>>> \\u201cIt seems that the authors don't use a meta-RL agent in order to get this domain (SC2LE) to work. I believe more discussion on this part is needed.\\u201d\\nA) \\n(1) Why we did not include any meta-RL agent in the SC2LE domain: As answered to the above comment, we did not include meta-RL agents since meta-training makes the problem too trivial in SC2LE domain as we cannot generate as many different subtask graphs as necessary for meta-learning to work. \\n(2) How can our MSGI model work without a meta-policy: \\nThe ILP module infers the underlying subtask graph from the agent\\u2019s trajectory. In principle, however, our ILP module can infer the underlying subtask graph from any trajectory data (collected by any policy), as long as its coverage is enough. 
Our meta-trained policy makes this data collection as efficient as possible for more accurate inference.\nAs shown in Figure 5, even the MSGI-Rand agent (i.e., our MSGI model but replacing the meta-policy with a random policy) outperforms other baselines; it can still generate the experience data that can be used in the ILP module.\n\n>>> \u201cAdditional results on more challenging domains\u201d\nA) In an effort to make it more convincing that (1) our MSGI model can solve complex hierarchical tasks and that (2) real-world tasks have a hierarchical structure, we conducted an additional experiment on the AI2-THOR [1] environment. Here, we defined a cooking task similar to the breakfast preparation task described in the introduction with 148 realistic subtasks such as \u201cslice bread\u201d or \u201ccook egg\u201d. We added the details of the experiment and the result in Appendix G. The experimental results on AI2-THOR show that our MSGI model can infer the underlying subtask graph accurately, and adapt more efficiently than the compared methods.\n\n>>> \u201cThe set of subtasks is a Cartesian product of the set of primitive actions and a set of all types of interactive objects in the domain.\u201d\nA) We implemented a completion of the subtasks as a Cartesian product of the primitive actions and objects in the Playground domain, but in general the completion set of a subtask can be any set of states as defined in Section 2.2. For example, the subtasks in SC2LE were defined by the design of the domain (i.e., high-level actions appearing in the environment\u2019s API). Our model does not benefit from such a specific form, and in fact our MSGI can be applied as long as the completion and eligibility of a subtask can be defined. To avoid confusion, we moved the Cartesian product-based definition to Appendix C, which explains the details of the Playground and Mining domains, and provided the general definition of a subtask in Appendix B.\n\n>>> \u201cSuch a setup (discrete subtasks and grid-based observation) seems a bit contrived and is limited to low-dimensional state space and discrete action space, which makes me doubt its scalability to high-dimensional continuous control tasks.\u201d \nA) \nSince the policies in the MSGI model operate on the option/subtask level, it can scale in terms of the number of subtasks. For both continuous and discrete state/action space cases, an MDP can be abstracted into discrete subtasks, and then our MSGI model can be applied to solve the task efficiently. In such an environment with high-dimensional observation spaces, the policy can make use of both raw, high-dimensional observations (to learn a mapping/relation between subtasks and parts of the observation) and the discrete subtask information. We also note that our experiment shows that MSGI scales well to a large number of subtasks (e.g., 148 subtasks for AI2-THOR [1]) and high-dimensional observation spaces (e.g., SC2LE and AI2-THOR).
They propose a meta-rl approach to meta-train a policy that quickly infers the subtask graph from new task data. The approach is compared to relevant baselines from both the meta-rl and hierarchical rl literature on complex domains. In particular, the authors consider a large-scale StarCraft II experiment which proves the efficiency and scalability of the proposed methodology.\n\nMajor Comments\n--------------\n\nMeta-rl is a relevant direction for reducing the sample-complexity of rl agents and scaling them to large domains. This work presents interesting and novel ideas in these settings. In particular, the few-shot rl problem with subtask dependencies seems quite interesting for both encoding and solving large hierarchical rl problems. The proposed meta-rl algorithm is sound and simple to understand. The paper is well-organized, though sometimes it is difficult to follow the formalisms due to the large number of different symbols introduced. The experiments are quite interesting and convincing. In particular, the StarCraft domain should address all concerns about the scalability and efficiency of the proposed approach. Some comments/questions follow.\n\n1. The state available to the agent includes the number of remaining time-steps and episodes. When/how are they used?\n\n2. The paper requires the reader to be quite familiar with some previous works (e.g., Section 3.2 requires knowledge of Sohn et al. 2018 to understand the test phase). It would be good to add more background/details about these works (at least in the supplementary), so that the paper is more self-contained.\n\n3. In the StarCraft experiment, what is the difference between MSGI-meta and MSGI-GRProp? Furthermore, where is the \\"oracle\\" baseline (introduced in sec. 5) used in the experiments? I did not find any plot reporting it.\n\n4. The main limitation is that this approach requires options for each subtask to be provided beforehand. Do the authors think that the method is easily generalizable to learn such options as well? Furthermore, I realized this limitation only after reading the very last lines of the paper. Since this is of major importance, I believe it should be clearly stated much earlier.\n\nMinor Comments\n--------------\n1. First line of sec. 2.1: R_\\tau should be R_G\n2. I did not find a definition of o_t and d_t which appear, e.g., in Algorithm 1."}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new meta-reinforcement learning algorithm, MSGI, which focuses on the problem of adapting to unseen hierarchical tasks through interaction with the environment where the external reward is sparse. The authors make use of subtask graph inference to infer the latent subtask representation of a task through interacting with the environment using an adaptation policy and then optimize the adaptation policy based on the inferred latent subtask structure. Each task in the paper is represented as a tuple of subtask precondition and subtask reward, which are inferred via logic induction and MLE of Gaussians respectively. At meta-test time, MSGI rolls out a subtask graph execution (SGE) policy based on the graph inferred from the interactions between the environment and the adaptation policy. 
The authors also propose a UCB-inspired intrinsic reward to encourage exploration when optimizing the adaptation policy. Experiments are conducted on two grid-world domains as well as StarCraft II.\n\nOverall, this paper is mainly an extension of the prior work [1], which uses a subtask graph for tackling hierarchical RL problems. This work builds upon [1] by extending to meta-learning domains and studying generalization to new hierarchical tasks. While the contribution seems a bit incremental and the experimental setting is a bit unclear and limited to low-dimensional state space, the inference of task-specific subtask graphs based on past experiences and the proposal of a UCB-inspired reward provide some interesting insights into how to approach meta-hierarchical RL where long-horizon tasks and sparse rewards have been major challenges. Given some clarification on the experimental setup and additional results on more challenging domains in the author's response, I would be willing to improve my score.\n\nRegarding the experimental setup, the set of subtasks is a Cartesian product of the set of primitive actions and a set of all types of interactive objects in the domain, while the state is represented as a binary 3-dimensional tensor indicating the position of each type of object. Such a setup seems a bit contrived and is limited to low-dimensional state space and discrete action space, which makes me doubt its scalability to high-dimensional continuous control tasks. It would be interesting to see how/if MSGI can perform in widely used meta-RL benchmarks in Mujoco. I also wonder how MSGI can be compared to newly proposed context-based meta-RL methods such as PEARL.\n\nAs for the results, the authors don't provide an ablation study on the UCB exploration bonus though they claim they would show it in the paper. Moreover, the result of GRProp+Oracle is also missing in the comparison. I also don't understand why MSGI-Meta and RL2 would overfit in the SC2LE case and are unable to adapt to new tasks. Is that a limitation of the method? The authors also introduce MSGI-GRProp in this setting, which is never discussed before, and claim that MSGI-GRProp can successfully generalize to new tasks. It seems that the authors don't use a meta-RL agent in order to get this domain to work. I believe more discussion on this part is needed.\n\n[1] Sungryull Sohn, Junhyuk Oh, and Honglak Lee. Hierarchical reinforcement learning for zero-shot\ngeneralization with subtask dependencies. In NeurIPS, pp. 7156\u20137166, 2018."}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #564\", \"review\": \"The main problem tackled here concerns tasks that have a main goal that can only be reached by solving prerequisite tasks. They test their method on a simple game and a very complex one.\n\nMethodology and novelty\nThe authors combine various techniques (subtask graph inference, gradient based meta-learning and inductive logic programming). It is not clearly stated if the authors combined techniques and/or if they invented a new one. What is the big difference from the work by Sohn et al. (2018)?\n\nExperiments\nThe authors evaluated one agent. It would have been better if they trained multiple agents and showed a performance distribution, so it is clear that the performance is not just achieved by luck (Fig 5.). 
\nThe video material showed clearly how the complex game (StarCraft II) was solved much more quickly than by a baseline model. \n\nPresentation\nFigure 3 does not give a description of the subtask graph (middle) and the StarCraft II task. The video material clearly shows the performance of their method. Section 5.1.2 does not clearly explain the different datasets D1-D5 of Playground."}" ] }
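To make the subtask-graph-inference idea from the MSGI record above concrete, here is a minimal sketch (an illustration, not the authors' released code) of the CART-with-Gini-impurity step the rebuttal describes: a decision tree is fit to predict one subtask's eligibility from the completion vector observed during adaptation. The `eligible` rule and the random trajectory data are assumptions made for this example.

```python
# Illustrative sketch of subtask-precondition inference with CART (Gini),
# in the spirit of the MSGI rebuttal above. Not the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
num_subtasks = 6

def eligible(completion):
    # Hypothetical ground-truth precondition for one target subtask:
    # (subtask 0 AND subtask 1) OR subtask 2 must be completed.
    return bool((completion[0] and completion[1]) or completion[2])

# Completion vectors observed along adaptation trajectories, each labelled
# with whether the target subtask was eligible at that step.
X = rng.integers(0, 2, size=(500, num_subtasks))
y = np.array([eligible(c) for c in X])

tree = DecisionTreeClassifier(criterion="gini").fit(X, y)
print("training accuracy:", tree.score(X, y))  # the tree recovers the AND/OR rule
```

With enough coverage of completion vectors, as the rebuttal notes, the tree's decision rules read off directly as the AND/OR precondition structure of the subtask graph.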
ByliZgBKPH
Policy path programming
[ "Daniel McNamee" ]
We develop a normative theory of hierarchical model-based policy optimization for Markov decision processes resulting in a full-depth, full-width policy iteration algorithm. This method performs policy updates which integrate reward information over all states at all horizons simultaneously thus sequentially maximizing the expected reward obtained per algorithmic iteration. Effectively, policy path programming ascends the expected cumulative reward gradient in the space of policies defined over all state-space paths. An exact formula is derived which finitely parametrizes these path gradients in terms of action preferences. Policy path gradients can be directly computed using an internal model thus obviating the need to sample paths in order to optimize in depth. They are quadratic in successor representation entries and afford natural generalizations to higher-order gradient techniques. In simulations, it is shown that intuitive hierarchical reasoning is emergent within the associated policy optimization dynamics.
[ "markov decision process", "planning", "hierarchical", "reinforcement learning" ]
Reject
https://openreview.net/pdf?id=ByliZgBKPH
https://openreview.net/forum?id=ByliZgBKPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "3Orygi4zw", "ryxWltR5sS", "rygNnxhncS", "ryer2GGXqB", "SkxEoyMAYr", "rJg_CgCUYH" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741772, 1573738728738, 1572810924245, 1572180653396, 1571852187584, 1571377359817 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2148/Authors" ], [ "ICLR.cc/2020/Conference/Paper2148/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2148/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2148/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2148/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The reviewers were not convinced about the significance of this work. There is no empirical or theoretical result justifying why this method has advantages over the existing methods. The reviewers also raised concerns related to the scalability of the proposal. Since none of the reviewers were enthusiastic about the paper, including the expert ones, I cannot recommend acceptance of this work.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author response\", \"comment\": [\"Thanks to all reviewers for your feedback. The manuscript has been edited to address some of the issues raised and to improve its clarity and precision. The overall impression is that comparative demonstrations of this theory embedded in a scalable RL algorithm is required which is not possible at this stage. To address Reviewer #1's questions directly:\", \"No. In fact, the natural path gradient could be combined with the natural policy gradient through the reparameterization rule for Fisher informations. Concretely, this would result in policy parameter updates sensitive to state-action correlations under the policy-induced path distribution (which is not the case with the natural policy gradient).\", \"Untested as yet.\", \"To do an exact full-width, full-depth backup via roll-outs is impossible since this would require an infinite number of infinitely deep samples in general. Our model accomplishes this using a known environmental model (and exhibits hierarchical processing of the environment and policy dynamics). With respect to an implementation in a scalable RL agent, though untested, our method suggests an alternative approach by which a full-depth (or n-step) backup may be approximated based on estimating the components of the path gradient calculation during exploration. This approach has the additional benefit that estimated path gradient components transfer across reward functions.\"]}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper considers the problem of finding the optimal policy in the Markovian decision Processes, where a KL policy regularizer is added to the objective function. Instead of the closed form solution which leads to the KL-regularized Bellman equation the paper proposes to use an incremental gradient ascent algorithm. The paper recommends an iterative policy gradient scheme to optimize this objective function. 
There exists a substantial literature on KL-regularized RL as well as on using policy gradient schemes to optimize this objective function (see the many variants of KL/entropy-constrained actor-critic or REINFORCE algorithms, e.g. A2C, IMPALA, ...). Unfortunately, the paper doesn\u2019t provide any comparison with those methods. In the absence of those comparisons the significance of this work to the literature of RL is not clear, as it neither solves an open problem which hasn't been addressed before, nor provides theoretical/empirical evidence that it has advanced the state-of-the-art in terms of providing a more efficient solution.\n\n The paper considers a setting which is quite well-studied as there exist efficient solvers for optimizing the KL-regularized RL objective (including policy-gradient variants). Why is the proposed approach better than those already existing in the literature? What is the outstanding problem in the literature of KL-regularized RL that this work tries to address? I couldn\u2019t find a satisfying argument with respect to these questions in the current submission. In the absence of any theoretical or empirical result to justify the merits of the proposed algorithm the contribution of this paper to the literature is not clear. Also it is not clear how this approach can scale up to anything beyond the finite state-action problems as it relies on knowing quantities like state-action transitions and the inverse of the state transition matrix, which in practice is quite difficult to estimate. I recommend that the authors rethink their approach from the point of view of whether it provides a solution to some open problems in RL/control or advances the state-of-the-art. If this is the case, the paper needs to provide theoretical/empirical evidence to back up its claim. Unfortunately the current submission does not satisfy these requirements.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper considers the problem of entropy-regularized discounted Markov decision processes with discrete state space. Instead of working on the parameter space of the policy (\\pi_ij), the paper has proposed to reparametrize with natural parameters (A_ij). The reparameterization trick helps to learn the natural parameters using the natural gradient method.\nThe writing is easy to follow. However, it is not clear what the benefit is of learning the policy using a path representation compared with other methods in the literature. The paper does not clearly state the motivation of the proposed method.\nThe experimental section presents the convergence of the proposed methods in 3 small problems including decision trees of four levels, the tower of Hanoi problem, and four-room grid worlds. The experiment setting is very simple with a small number of states in the policy. It is not clear how the proposed method scales with the size of the state space. 
Besides, no baseline method from the literature is presented for comparison with the proposed method.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This work proposes a policy iteration algorithm that implements full-depth, full-width backups in contrast to one-step, full-width methods. The authors go over existing algorithms and talk a bit about how their proposal conceptually differs in how it performs said backups. They provide a bit of intuition to help explain their algorithm's derivation. Finally, they provide a few experiments showing that their algorithm works.\n\nMy personal issues are with these experiments. First, I would like to see better comparisons between this method and existing policy iteration methods. I don't have a good sense of when one would choose to use this algorithm over any baseline methods. Is it faster in any sense? Does it produce better policies during certain games? For the experiments themselves, I don't see much clarification of what the various graphs even show. More effort should have been spent analyzing these. \n\nI come away from this work not fully appreciating the impact it is trying to sell me on. I also think the discussion section should have been more fleshed out.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a reinforcement learning method that exploits the full-depth backup. The policy update based on the full-depth backup is derived for entropy-regularized MDPs. The state-action correlation function is introduced and the Fisher information matrix is computed with it. The proposed method is evaluated on tasks with discrete states and actions.\n\nI understand the concept of using the full path for updating the policy, but I do not see significant novelty of the proposed method from the current manuscript. The proposed method looks equivalent to the natural policy gradient with full-depth backup for entropy-regularized MDP, which is a special case of existing methods. \n\nMy concern is the scalability of the proposed method. The use of the full-depth backup should suffer from large variance, and I think the proposed method will not work on tasks with high-dimensional state spaces. \nThe evaluation is limited to tasks of which the state space is small, and the proposed method is not compared with existing methods.\n\nDue to the unclear novelty and limited empirical results, I give weak reject to the paper in the current form.\n\nI request the authors to answer the following questions to improve the clarity.\n\n- Is the proposed method equivalent to using the natural policy gradient with the full-depth backup for a softmax energy-based policy? If they are different, what is the crucial difference?\n\n- I think that the variance of the estimation of the gradient is large when using the full-depth backup. I'm curious about the performance of the proposed method in high-dimensional tasks. 
However, the evaluation is limited to simple tasks in which the state space is relatively small compared with tasks commonly used in deep RL papers.\nDoes the proposed method scale to more complex tasks, such as Atari games?\n\n- When using n-step TD learning, increasing n does not always improve the performance, and n should be set to an intermediate value.\nWhat is the motivation for using full paths for updating the policy? Does the proposed method outperform existing methods? In particular, a comparison with natural policy gradient methods is necessary to show the benefit of the proposed algorithm.", "minor_comments": ["On page 2, \\"A state-space \\mathcal{X} is composed of states x \\in \\mathcal{X}\\" <- Authors may want to replace x with s in this sentence.", "On page 6, \\"0\\lambda < 1\\" I think that \\"<\\" is missing between \\"0\\" and \\"\\lambda\\"", "On page 6, some variables are explained after Equation (10). However, it took me a while to find \\"lambda\\" in Equation (10), since Equation (10) has 9 lines and many terms. I think the description on page 6 can be improved. For example, I recommend using \\"\\exp\\" instead of \\"e^\\" for readability."]}" ] }
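For context on the KL/entropy-regularized baselines that the reviews above keep referring to, the following is a minimal sketch of the closed-form alternative Review #2 mentions: soft (entropy-regularized) value iteration on a tabular MDP. The small random MDP and the temperature value are assumptions for illustration only, not anything from the paper.

```python
# Minimal sketch of soft value iteration for an entropy-regularized
# tabular MDP, the standard solver the reviews ask to compare against.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
S, A, gamma, lam = 5, 3, 0.9, 1.0            # states, actions, discount, temperature
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
R = rng.standard_normal((S, A))

V = np.zeros(S)
for _ in range(500):
    Q = R + gamma * P @ V                    # Q[s, a]
    V = lam * logsumexp(Q / lam, axis=1)     # soft Bellman backup
pi = np.exp((Q - V[:, None]) / lam)          # optimal softmax policy
print(pi.sum(axis=1))                        # each row sums to 1
```

Because the soft Bellman backup has this closed form, any gradient-based scheme for the same objective can be benchmarked against it directly on small problems, which is essentially the comparison the reviewers requested.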
B1gcblSKwB
Meta-Learning with Network Pruning for Overfitting Reduction
[ "Hongduan Tian", "Bo Liu", "Xiao-Tong Yuan", "Qingshan Liu" ]
Meta-Learning has achieved great success in few-shot learning. However, the existing meta-learning models have been evidenced to overfit on meta-training tasks when using deeper and wider convolutional neural networks. This means that we cannot improve the meta-generalization performance by merely deepening or widening the networks. To remedy such a deficiency of meta-overfitting, we propose in this paper a sparsity constrained meta-learning approach to learn from meta-training tasks a subnetwork from which first-order optimization methods can quickly converge towards the optimal network in meta-testing tasks. Our theoretical analysis shows the benefit of sparsity for improving the generalization gap of the learned meta-initialization network. We have implemented our approach on top of the widely applied Reptile algorithm assembled with varying network pruning routines including Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT). Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method can not only effectively ease meta-overfitting but also in many cases improve the meta-generalization performance when applied to few-shot classification tasks.
[ "Meta-Learning", "Few-shot Learning", "Network Pruning", "Generalization Analysis" ]
Reject
https://openreview.net/pdf?id=B1gcblSKwB
https://openreview.net/forum?id=B1gcblSKwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "wUcxxqypMK", "ryeANU6ioB", "Syl0XcQdsH", "ryeI-9XdjS", "rJlzd_7_sS", "S1xjWdmOor", "HkgZbj0g5r", "SkxXjkpqKr", "Syg1bRkLYH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741739, 1573799478169, 1573562918030, 1573562878380, 1573562473884, 1573562370752, 1572035321036, 1571635099385, 1571319286838 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2147/Authors" ], [ "ICLR.cc/2020/Conference/Paper2147/Authors" ], [ "ICLR.cc/2020/Conference/Paper2147/Authors" ], [ "ICLR.cc/2020/Conference/Paper2147/Authors" ], [ "ICLR.cc/2020/Conference/Paper2147/Authors" ], [ "ICLR.cc/2020/Conference/Paper2147/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2147/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2147/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a regularization scheme for reducing meta-overfitting. After the rebuttal period, the reviewers all still had concerns about the significance of the paper's contributions and the thoroughness of the empirical study. As such, this paper isn't ready for publication at ICLR. See the reviewer's comments for detailed feedback on how to improve the paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Paper Revision\", \"comment\": \"We thank all reviewers for their constructive comments. Per your suggestions, we have carefully edited our submission. We hope that the given concerns have been addressed satisfactorily in the revised manuscript. Below we provide a summary of changes.\\n1.\\tAt the end of Section 4.1.2, we add a short discussion on the generalization performance of the dense output from the retraining phase. \\n2.\\tMore details about the setting of hyper-parameters are provided in the experiment section.\\n3.\\tWe have included more results of CAVIA on MiniImageNet for a more complete comparison.\\n4.\\tOther changes are minor clarifications that are detailed in our responses to the reviewers.\\n5.\\tThe code is released along with the revised paper at https://drive.google.com/open?id=1VOY1sCA1j5G1LE2AbDrPoZM-1ZwwVOHA and the corresponding datasets can be available here at https://drive.google.com/open?id=17Cftpney_up0u5SCPb5IuskFplrcAuri\"}", "{\"title\": \"Reply to Official Blind Review #1 Part(2/2)\", \"comment\": \"Response to experimental concerns:\\n\\n1.\\tYes, the standard MiniImageNet split was used in our work to ensure fairness of comparison.\\n\\n2.\\tWe note the best accuracy results by CAVIA (51.82\\u00b10.65% for one-shot, and 65.85\\u00b10.55% for 5-shot) on MiniImageNet were reported in the 512-channel case. These results, however, are still inferior to our results obtained even with 64 channels (51.91\\u00b10.45%for one-shot, and 67.23\\u00b10.65% for 5-shot). We will update Table 2 to have the best available results for CAVIA included for a more complete comparison. \\n\\n3.\\tDue to the limited computational resource, we cannot afford tuning the optimal hyper-parameters via grid search. Alternatively, we have tried a small number of hyper-parameter configurations based on our numerical experience and chose the one with the optimal validation performance. 
\n\n4.\tIn our experiment on Omniglot, we followed the implementation of Reptile to split the 1623 character classes into 1200 training classes (including 100 validation classes for MAML and no validation classes for Reptile) and 423 test classes. \n\n5.\tWe would like to clarify that in Table 2, we have listed a set of results under varying network channels for our method and the baselines as well. We expect the performances might be further improved with more extensive hyper-parameter tuning under additional computation budget. \n\n6.\tSince the primary goal of this paper is to demonstrate the benefit of network pruning in reducing the inter-task overfitting, we focus on the variant of our method based on Reptile and CNN. We did not extensively tune our model with respect to other backbone network architectures, although the related results in Table 2 have already shown some promise of our method implemented with ResNet-18. \n\n7.\tPer this suggestion, we have additionally applied the identical rule of hyper-parameter selection to the ablation study. The numerical results indicated that the current choice is still optimal among the considered parameter configurations. \n\n8.\tRegarding the comparison with other relevant methods, we remark that since our method was implemented using Reptile as the backbone meta-learner, we focus on the comparison to Reptile to show the benefit of meta-learning with network pruning. Please keep in mind that our principle of limiting the inter-task network capacity is orthogonal to that of the inner-task overfitting reduction methods such as CAVIA and MetaOptNet. We believe it is possible to develop more sophisticated techniques to achieve the best of both worlds in future study. We will add more discussions on the comparison to these state-of-the-art results in the revised paper.", "response_to_minor_concerns": "We will fix all the minor issues in the revised paper, along with which the code will be released."}", "{\"title\": \"Reply to Official Blind Review #1 Part(1/2)\", \"comment\": \"Thank you for the insightful review. We hope the main concerns can be addressed by the following clarification.\", \"response_to_general_concerns\": \"1.\tWe would like to highlight that in contrast to CAVIA and MetaOptNet which handle overfitting by limiting the capacity of the inner-task learner, our method explores another direction of limiting the capacity of the inter-task meta-learner to improve generalization performance across tasks. \n\n2.\tThe meta-generalization performance is measured by the population risk of the meta-learner, while meta-overfitting is quantified by the gap between the population and empirical risks. We will update the paper to clarify these concepts. \n\n3.\tWe address your concern about the curves in Figure 1(b). The main purpose of this figure is to demonstrate the power of our method for overfitting reduction. Concerning the loss in training accuracy of our method, we remark that such a trade-off between training accuracy loss and overfitting reduction is usually the case in sparse learning. In order to remedy this issue, we further propose to use a re-training phase to improve the overall training and testing accuracy. \n\n4.\tYes, the left-most loss in Equation 2 should be evaluated on the query set. This typo can be easily fixed with almost no impact on the technical proofs of our theoretical results.\n\n5.\tSorry for the confusion. 
We will remove the misleading term of \u201cfirst-order\u201d in the revised paper.\n\n6.\tWe would like to address your concern about the usefulness of our sparse generalization theory. First, the ablation study results in Figure 3 affirmatively confirmed our theoretical prediction by showing that the sparse meta-initialization network did reduce the gap between training and testing accuracy. Second, to further justify the improved generalization accuracy with dense re-training, we comment that since the obtained sparse network generalizes well, it is expected to serve as a good initialization for future re-training via SGD. Then roughly speaking, according to the SGD stability theory in [Hardt et al., 2016] the output dense network will also generalize well (with high probability) if the re-training phase converges fast enough. We will update the paper to have this point clarified. \n\n[Hardt et al., 2016] Hardt M, Recht B, Singer Y. Train faster, generalize better: Stability of stochastic gradient descent, ICML, 2016: 1225-1234.\n\n7.\tYes, the parameters are ranked according to their absolute values.\n\n8.\tThe reason we fine-tune the subnetwork during the iterations is that, according to our numerical experience with IHT network pruning, sufficient steps of subnetwork fine-tuning tend to substantially improve the stability and convergence behavior of the method. \n\n9.\tWe appreciate your suggestions about reducing paper length and citing peer-reviewed references. We'll do our best to address them in the revised paper.\"}", "{\"title\": \"Reply to Official Blind Review #4\", \"comment\": \"Thank you for the insightful review. We hope the main concerns can be addressed by the following clarification.\n\n1.\tWe would like to clarify that the ablation study actually well supports our generalization theory.\n\n- The results presented in Theorem 1&2 basically show that the meta-generalization gap bounds have polynomial dependence on the sparsity level rather than the size of the meta-initialization network. 
The ablation study results in Figure 3 affirmatively confirm this theoretical prediction by showing that the sparse meta-initialization network did reduce the gap between training and testing accuracy. Therefore, network pruning is beneficial, both in theory and practice, for making the meta-learner less prone to overfitting. \n\n- To further justify the improved generalization accuracy with re-training, we comment that since the obtained sparse meta-initialization network generalizes well, it is expected to serve as a good initialization for future re-training via SGD. Then roughly speaking, according to the SGD stability theory in [Hardt et al., 2016] the output dense network will also generalize well (with high probability) if the re-training phase converges fast enough. We will update the paper by explicitly remarking this point in the related discussion. \n\n[Hardt et al., 2016] Hardt M, Recht B, Singer Y. Train faster, generalize better: Stability of stochastic gradient descent, ICML, 2016: 1225-1234.\n\n2. Yes, the outside risk should be evaluated on the query set. This typo can be easily fixed with almost no impact on the technical proofs of the theoretical results.\n\n3. Sorry for the confusion. The curves in Figure 1 correspond to the experiments on MiniImageNet. Although still relatively large, the generalization gap of the sparse network is obviously reduced in comparison to the dense network. Per your suggestion, we will update the figure layout for better visualization. \n\n4. Concerning computational complexity, the generalization performance gain of our method is achieved at the price of increased computational cost mainly due to the additional (iterative) network pruning and re-training steps. We believe such a trade-off is reasonable and common in iterative hard thresholding methods for statistical learning with sparsity. \n\n5. We agree that the total number of classes should play a role in the performance gap between the considered datasets. We also think that another important factor is the regularity of the images. Different from the natural scene images in MiniImageNet, the character images in Omniglot have relatively simpler foreground and background, and thus are easier to classify.\n\n6. Regarding the motivation to deal with inter-task overfitting rather than the inner-task overfitting, since the optimization-based meta-learning is designed to learn fast from small amounts of data in future tasks with the help of the meta-learner, we conjecture that the former would have a higher impact on the overall generalization performance than the latter. It is absolutely an interesting direction for future work to jointly handle the inter-task and inner-task overfitting, e.g., through simultaneously limiting the capacity of the context parameters for inner-task update as suggested by CAVIA and MetaOptNet. \n\n7. The binary mask is generated based on a hard thresholding operation to preserve the desired portion (say 50%) of parameters with top absolute values in each layer. \n\n8. 
The training accuracy results on the considered data sets are available for inspection at: https://drive.google.com/open?id=1OtaajWwga_0dTWx8CM8oObT3S7DJiG_f"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes to use a sparse network to mitigate task-overfitting. The algorithm is based on iterative training over a pruning phase and a retraining phase. A theoretical analysis of the sparsity effects for the Reptile-based first-order meta-learning is provided, which indicates that if the sparsity level k satisfies k^2 << parameter dimension p, the sparsity can help with the generalization. The paper studies the Reptile + DSD pruning and Reptile + IHT pruning on two standard few-shot classification benchmarks.\n\nThe paper is in general well-written. My major comments/questions are the following:\n\n1. As the authors pointed out in the experiment section, there is a mismatch between the experiments and the theory. The output of Algorithm 1 is not sparse, though during the iterative training the initialization of the retraining phase is sparse. But the theorem says the generalization is achieved because of sparsity, which requires k^2 << p. The ablation study seems to put even more doubt on what the theorem suggests, as it basically shows sparsity alone harms the general performance.\n\n2. Is there a typo in Eq (1)? Should the outside loss be evaluated on the query set rather than the support set? If so, does this typo influence the proof of Theorem 1 based on Shalev-Shwartz et.al. 2009?\n\n3. In Figure 1, first of all, what experiments does this figure correspond to? In Figure 1 (b), the gap between training and testing for both pruning methods is quite large, which suggests the overfitting is not really solved? The test traces are intertwined, so it is not clear that the test accuracy really improves. Using a consistent color for Figure 1 (a) and Figure 1 (b) can make it much easier to read.\n\n4. The paper needs to discuss the computational complexity. It seems each iteration in Algorithm 1 involves meta-training a sparse network and a dense network. And the algorithm needs the number of iterations t > 1. Is there any difficulty in scaling?\n\n5. In the experiments section, for Omniglot almost all results are overlapping within the confidence intervals. Maybe some numbers should not be marked in bold font. The results in Mini-imagenet show the improvement by proposed methods. Is the effectiveness related to the total number of image classes in the dataset?", "some_other_comments": "1. As the authors mentioned, there are two types of overfitting: the meta-level overfitting and task-level overfitting. Why do the proposed methods deal with meta-level overfitting rather than task-level overfitting?\n\n2. How is the random mask generated in Algorithm 1?\n\n3. In experiments, can the training accuracy also be provided? \n\nIn general, this paper studies an interesting problem in meta-learning and the paper is written in a clear way. The major problems are a mismatch between the theorem and the methods, and the experimental results are not very strong. I will give a borderline rating.\n\n############\n\nThanks for the authors' feedback and I have read them. 
I am still not convinced about the effectiveness of the proposed method and the trade-off in computation. I still hold my concerns about the rebuttal statement \"this typo can be easily fixed with almost no impact on the technical proofs of the theoretical results\". Therefore I maintain the current rating as borderline, leaning to rejection."}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors propose a new method to alleviate the effect of overfitting in the meta-learning scenario. The method is based on network pruning. Empirical results demonstrate the effectiveness of the proposed method.", "pros": ["The problem is very important in the meta-learning field. The model is intuitive and seems useful. In addition, the generalization bound further gives enough insights for the model."], "cons": ["The proposed method is simple and lacks technical contributions. Adding sparse regularizers is a common practice to alleviate over-fitting in the machine learning field. In addition, the retraining process increases the time complexity of the proposed model (i.e., we need to train three times to get the powerful model).", "In the experiment parts, it would be more interesting if the authors could do the experiments without pre-training, since in traditional meta-learning settings (e.g., Reptile and MAML) no pre-training process is introduced. Thus, it might be more convincing if the authors could train the mask and initialization together."], "post_rebuttal": "I have read the other reviews and the authors' responses. I still think the contribution is not enough to be accepted. I do not change my mind."}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"============= Post Rebuttal Assessment =============\n\nI appreciate the authors\u2019 thorough response: I believe the writing has improved considerably and the presentation is more convincing.\n\nAll in all, I think it's a good paper but not completely ready for publication based on the following final assessment.\n\nThe theoretical contribution of the paper is not entirely relevant to the proposed method as also mentioned by other reviewers. The empirical aspect of the work is incremental, combining two prior works. In this situation, I would happily suggest acceptance if 1) the experimental setup is very thorough, 2) the results are consistent and conclusive, and 3) the improvements are significant.\n\nRegarding these, the paper's most interesting result is 5-way 5-shot classification with IHT/DSD + Reptile. I think that makes the paper borderline since the results are not state of the art and the experiments are not thoroughly done, while on the other hand, it does show clear improvement for this case. 
For such a borderline scenario, and since I have to choose between weak reject and weak accept, I would lean towards weak reject as I believe the experimental setup can be significantly improved by following some of the items below for the next version:\n\n- A thorough ablation study on all the components of IHT and DSD\n\n- A study of the effect of the hyperparameters of IHT/DSD\n\n- Additional datasets/architectures to show consistent improvement for various cases.\n\n- ResNet results are not reliable in the current form. I understand that it would need a sizable computational budget to perform a proper comparison. However, that means either the results should be taken away or done appropriately.\n\nTo reiterate, the reason I am asking for the above experiments is that the paper comes with incremental novelty as the main point, in my opinion. Thus it has to be backed up by extensive experiments to be confidently considered for publication.\n\n\n============= Summary =============\n \nThe paper addresses the issue of overfitting the meta-parameters of optimization-based meta learning techniques. It uses an iterative training-pruning-retraining setup to learn the meta-parameters. The pruning step in each iteration is to limit the capacity of the meta-learner and thereby alleviate overfitting. Two pruning methods called DSD and IHT are employed on top of a first-order meta learning technique called Reptile. The combination is tested on 4 few-shot classification scenarios of two datasets: Omniglot and miniImageNet. Results suggest improved accuracy on top of the Reptile meta-learning algorithm on miniImageNet.\n \n \n============= Strengths and Weaknesses =============\n\n+ overfitting the meta learner due to the small number of samples (shots) per task and the large number of meta-parameters and/or base-learner parameters is an important problem in meta learning which is the focus of this work.\n+ the results suggest improvements over the Reptile baseline on miniImageNet.", "general_concerns": ["abstract: \u201cthe existing meta-learning models have been evidenced to overfit on meta-training tasks when using deeper and wider convolutional neural networks.\u201d several methods such as CAVIA and MetaOptNet (among many others) address this issue by limiting the capacity of the learner.", "what is the formal definition of meta-generalization and meta-overfitting? This definition will be helpful for understanding the paper\u2019s arguments, for instance, why \u201creducing meta-overfitting\u201d and \u201cimproving meta-generalization\u201d are two different matters (as suggested in the last part of the abstract).", "Figure 1.b: the green bars (proposed method) don\u2019t seem to improve the generalization (testing accuracy). I can only speculate that the figure is to demonstrate that (in the rightmost plot) the difference between train and test accuracy for green bars is less than it is for the red bars. However, one should note that it seems to be due to the training accuracy being lower, which can be achieved, in the extreme case, by a random classifier (zero gap). 
So, it\u2019s not necessarily useful when testing accuracy is not improved.", "eq (1) and eq (2): the left-most loss L should be on D^{query}", "page 4: \u201cIn view of the \u201clottery ticket hypothesis\u201d (Frankle & Carbin, 2018), the model in equation 2 can be interpreted as a first-order meta-learner for estimating a subnetwork\u201d -> eq (2) in the current form still requires a 2nd-order derivative and I cannot see how (Frankle & Carbin, 2018) helps make it a first-order optimization.", "the usefulness of the provided theory is in question since the final method does not enforce the sparsity in practice. That is, the final output meta-parameters are *dense*.", "page 7 mentions \u201ctop k_l entries\u201d: what does \u201ctop\u201d mean here? k_l dimensions of \\theta with the highest absolute value, maybe?", "why do the two separate steps of first training only the pruned subnetwork using Reptile and then retraining the whole network (with the updated subnetwork) again using Reptile? One can instead only train Reptile on the whole \\theta^{(i)}_M (with L0 norm constrained using the M projection) in a single Reptile step. Doing the former should be justified over the latter since it\u2019s twice as expensive. Also it should be shown empirically (as an ablation study) that the former works better.", "reference list: arXiv versions of many papers are cited, it\u2019s good to cite the peer-reviewed version of papers when applicable. For instance, the IHT pruning method does not seem to be peer-reviewed, which is a caveat for the current work.", "10 pages seem excessive for the content of the paper. For instance, the experiments section can be shortened extensively and the theories can be put into the appendix."], "experimental_concerns": ["Has the standard miniImageNet split been used? This split is especially important to respect since CAVIA\u2019s accuracy is taken from the original paper.", "The reported number for CAVIA in table 2 is not the best number the original paper achieves. The best number is 51.82 \u00b1 0.65% for one-shot, and 65.85 \u00b1 0.55% for 5-shot.", "There are quite a few new hyperparameters (3 iteration numbers for pretraining, pruning/finetuning, and retraining, and then the sparsity level k_l). It\u2019s important to mention how the hyper-parameters are chosen, especially since they are different for different setups.", "there seems to be no meta validation set for Omniglot.", "For a fair comparison, the network size of the baseline as well as the learning rate should be searched with the same budget as the hyperparameter search done for the proposed method.", "The ResNet experiment is especially concerning since 1) there is an even higher level of hyperparameter tuning for ResNet: first conv layer is not pruned, different pruning rates for different residual blocks are used. 2) the baseline is only tried with one setting for the capacity. It is evident in Table 2 that there is a sweet spot for the capacity of the ConvNet baseline; there is no reason this does not apply to ResNet.", "Ablation study: \u201cFor the purpose of fair comparison, only the retraining phase is removed and other settings are the same as proposed in Section 5.1\u201d: this is not fair. 
I believe a fair comparison would be to repeat the hyperparameter search for the ablation studies with the same budget as the full version of the proposed method.", "Table 2 only compares with CAVIA while many other meta learning methods could be relevant here, such as MetaOptNet, which also limits the number of the base learner\u2019s learnable parameters, thereby helping with overfitting. Also, CAVIA has results on ResNet useful for the last experiment.", "============= Final Decision =============", "While the paper addresses an important problem and reports improvements, there are many concerns with it including the writing, method, and experimental setup. My \u201cweak reject\u201d rating, however, is mainly based on the experimental concerns especially regarding the hyperparameters and the baselines.", "============= Minor Points =============", "code is not provided. I think it\u2019s in general very helpful to release the code for reproducibility and replicability of the experiments, especially in the case of an incremental study.", "what meta learning task has been used for Figure 1?", "There are different ways of including batchnorm in a meta learning setup. For instance, Reptile used two different strategies. How is batchnorm implemented for this paper\u2019s experiments? Particularly, indicate if you are using the transductive or non-transductive version of batchnorm at test time.", "The text as well as the math notations require polishing on various occasions including the following (non-exhaustive) list:", "abstract: \u201csparsity constrained\u201d -> sparsity-constrained", "abstract: \u201ccan not only\u201d -> rephrase so that it\u2019s not read as \u201ccannot only\u201d", "abstract and intro: \u201cease meta-overfitting\u201d can be interpreted as facilitating overfitting -> maybe better to say \u201calleviate meta-overfitting\u201d", "intro: \u201cmemorize the experience from previous meta-tasks for future meta-task learning with very few samples\u201d \u2192 memorize the experience from previous tasks for a future task with very few samples", "intro: \u201cAs can be observed from Figure 1(a) that\u201d -> It can be observed from Figure 1(a) that", "intro: \u201csparsity benefits considerably to the\u201d \u2192 sparsity benefits considerably the", "intro: \u201calong with two specifications to two widely used networking pruning methods\u201d: rephrase", "related works: \u201cmost closely\u201d -> closest", "page 4: \u201cultra goal -> maybe ultimate goal?", "page 4: \\theta is not defined", "page 4: eq (2): i -> m or m -> i", "page 4: the definition of an unordered set (S) coming as a sample of the product of m \\Tau spaces is not precise. Better to say each T_i \\in \\Tau.", "page 4: J_l is not defined,", "sec 3.3: the minimization is done over \\theta, it\u2019s better to put that under \u201cmin\u201d and the conditions should be clearly indicated by \u201cwith or s.t. or similar\u201d", "page 7: zero-one -> binary", "page 7: \u201c as the restriction of \\theta^{(t)} over the mask M^{(t)}\u201d -> rephrase", "page 7: \u201cit deems to reduce\u201d -> rephrase", "page 8: \u201cthis is consist\u201d -> consistent", "it\u2019s also good to settle on a single term for each concept to make it an easier read, for instance:", "\u201cmeta-training tasks\u201d and \u201ctraining tasks\u201d have been used interchangeably. 
I think \u201ctraining tasks\u201d and \u201cmeta training dataset\u201d are better choices since a \u201cmeta-training\u201d task can refer to the whole task of meta learning.", "Meta-overfitting and meta-level overfitting", "meta-learner, meta-estimator, model", "meta-level training accuracy, meta-training accuracy", "meta training, meta learning", "base learner, learner, model \u2192 I think it\u2019s better to emphasize the learner being the \u201cbase learner\u201d and avoid \u201cmodel\u201d, as it is not clear what it refers to.", "============= Points of improvement =============", "The paper would significantly improve if 1) the text is polished and the equations revised, and 2) hyperparameter optimization is done carefully and thoroughly described for both the proposed method as well as the baselines."]}" ] }
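The two ingredients debated throughout the record above, a Reptile-style meta-update and magnitude-based hard thresholding as in IHT/DSD, combine in only a few lines. The following sketch is an illustration rather than the paper's released code; the layer shape, the toy quadratic task loss, the learning rates, and the keep ratio are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code) of a Reptile meta-update
# followed by a per-layer magnitude mask keeping the top-k entries,
# as in IHT/DSD-style pruning.
import numpy as np

def magnitude_mask(theta, keep_ratio=0.5):
    """Binary mask keeping the largest-magnitude entries of one layer."""
    k = max(1, int(keep_ratio * theta.size))
    thresh = np.sort(np.abs(theta), axis=None)[-k]
    return (np.abs(theta) >= thresh).astype(theta.dtype)

def reptile_step(theta, task_grad, inner_steps=5, inner_lr=0.01, outer_lr=0.1):
    """One Reptile meta-update: adapt a copy on a task, move theta toward it."""
    phi = theta.copy()
    for _ in range(inner_steps):
        phi -= inner_lr * task_grad(phi)
    return theta + outer_lr * (phi - theta)

rng = np.random.default_rng(0)
theta = rng.standard_normal((64, 64))
task_grad = lambda p: 2.0 * (p - 1.0)             # toy quadratic task loss
theta = reptile_step(theta, task_grad)
theta *= magnitude_mask(theta, keep_ratio=0.5)    # hard thresholding between meta-updates
print("surviving fraction:", (theta != 0).mean())
```

Iterating the meta-update/threshold pair, and optionally retraining the dense network from the sparse initialization at the end, reproduces the training-pruning-retraining loop the reviews describe.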
Byg9bxrtwS
Kernel and Rich Regimes in Overparametrized Models
[ "Blake Woodworth", "Suriya Gunasekar", "Pedro Savarese", "Edward Moroshko", "Itay Golan", "Jason Lee", "Daniel Soudry", "Nathan Srebro" ]
A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks.
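The transition the abstract describes can be reproduced in a few lines for the simple two-layer "diagonal" model f(x) = <w_+^2 - w_-^2, x> trained from initialization alpha * 1: gradient descent tends toward the minimum-L2 interpolant for large alpha (kernel regime) and toward a sparser, smaller-L1 interpolant for small alpha (rich regime). The toy sparse regression problem, learning-rate schedule, and iteration count below are assumptions chosen for illustration, not the paper's experiments.

```python
# Toy simulation of the kernel-vs-rich transition for the two-layer model
# f(x) = <w_plus**2 - w_minus**2, x>, trained from initialization alpha * 1.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 40
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:3] = 1.0                               # sparse ground truth
y = X @ beta_star

def train(alpha, steps=100_000):
    lr = 1e-3 / max(1.0, alpha) ** 2              # keep the update stable for large alpha
    wp = np.full(d, alpha)
    wm = np.full(d, alpha)
    for _ in range(steps):
        g = X.T @ (X @ (wp**2 - wm**2) - y) / n   # gradient w.r.t. beta
        wp -= lr * 2 * wp * g                     # chain rule through the squares
        wm += lr * 2 * wm * g
    return wp**2 - wm**2

for alpha in (1e-3, 10.0):
    beta = train(alpha)
    print(f"alpha={alpha}: ||beta||_1={np.linalg.norm(beta, 1):.2f}, "
          f"||beta||_2={np.linalg.norm(beta, 2):.2f}")
# Typically, small alpha yields a markedly smaller L1 norm (rich regime),
# while large alpha approaches the minimum-L2-norm interpolant (kernel regime).
```

Sweeping alpha between these extremes traces out the family of implicit biases that the paper parametrizes exactly via the regularizer Q_alpha.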
[ "Overparametrized", "Implicit", "Bias", "Regularization", "Kernel", "Rich", "Adaptive", "Regime" ]
Reject
https://openreview.net/pdf?id=Byg9bxrtwS
https://openreview.net/forum?id=Byg9bxrtwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "28kwLPRzOM", "BylDcS_djH", "rkgXIHOusr", "SkgHXBudsH", "Hkgx4G7ZqH", "BklE7rOpKH", "B1eHz4x6Yr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741709, 1573582223028, 1573582155382, 1573582109496, 1572053544515, 1571812636272, 1571779597087 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2146/Authors" ], [ "ICLR.cc/2020/Conference/Paper2146/Authors" ], [ "ICLR.cc/2020/Conference/Paper2146/Authors" ], [ "ICLR.cc/2020/Conference/Paper2146/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2146/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2146/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper studies how the size of the initialization of neural network weights affects whether the resulting training puts the network in a \\\"kernel regime\\\" or a \\\"rich regime\\\". Using a two-layer model they show, theoretically and practically, the transition between kernel and rich regimes. Further experiments are provided for more complex settings.\\n\\nThe scores of the reviewers were widely spread, with a high score (8) from a low confidence reviewer with a very short review. While the authors responded to the reviewer comments, two of the reviewers (importantly including the one recommending reject) did not further engage.\\n\\nOverall, the paper studies an important problem, and provides insight into how weight initialization size can affect the final network. Unfortunately, there are many strong submissions to ICLR this year, and the submission in its current state is not yet suitable for publication.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to reviewer #1\", \"comment\": \"Thank you for your comments.\\n\\nRegarding the technical difficulty and proof technique compared to [Gunasekar+2017]: \\n\\nThe main novelty here is that we start with an optimization procedure (finite init + grad flow) and work backwards from its dynamics to uncover de-novo a complexity measure it is implicitly minimizing. This is different from [Gunasekar+2017] that started with a guess about what might be minimized (namely the nuclear norm) and worked forward from it. We should have indeed made this more explicit, and following your comment, we updated the manuscript explaining our derivation\\u2014-see the discussion before the proof of Theorem 1 on page 11 in the updated manuscript. As detailed there, we start from the dynamics, work out constraints satisfied by the dynamics, relate them to KKT conditions for a minimization problem of an unknown implied regularizer Q, and from that obtain a differential equation on Q which we solve. In a sense, the approach of [Gunasekar+2017] was \\u201cguess-and-check,\\u201d whereas in this paper we developed a principled approach that allows us to calculate the implied regularizer when we don\\u2019t have an obvious guess.\\n\\nUnderstanding the behavior in other settings (e.g. gradient descent instead of gradient flow, early stopping instead of running until convergence, etc.) is certainly interesting and important to understanding what happens in practical settings. In our setting, we do not know exactly what happens with early stopping. One reasonable hypothesis would be that optimizing with gradient flow from initialization alpha with early stopping would reach a point on the Q_alpha regularization path. 
However, we know that this is NOT necessarily the case. \\n\\nWe also have preliminary results that show that, for the same simple model we consider in Section 4 with separable data and with the logistic or exponential loss (versus the square loss), early stopping and the implicit bias are inseparable from each other. In particular, for any given initialization, the predictor will eventually converge in some sense to the minimum L1 margin predictor. On the other hand, for any given early stopping time, when the initialization is large enough, gradient flow will reach the maximum L2 margin predictor at the early stopping time. Therefore, the implicit bias depends simultaneously on the early stopping time AND the scale of the initialization. There are lots of interesting questions to explore here, which we hope to answer in the future.\\n\\nIn the case of non-linear models, it is much harder to characterize the implicit bias of the model outside of the kernel regime (in the kernel regime, the non-linear model is effectively linear). Our experimental results in Figure 4 suggest that a similar phenomenon is occurring, where a small scale of initialization allows for better test error than larger initialization. We suspect that this corresponds to some sort of \\u201crich regime\\u201d for the non-linear models we use in the experiments, although we can\\u2019t say exactly what implicit bias it corresponds to. Extending the understanding from linear to non-linear models is a very important question for future work.\"}", "{\"title\": \"Thank you for your comments!\", \"comment\": \".\"}", "{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your comments.\\n\\nThe main contribution over Chizat and Bach is working out the entire transition between the kernel and rich regimes as a function of the initialization. Chizat and Bach only describe the limiting behaviour when alpha->infty, while we get a precise description for every finite alpha. Beyond showing an example of exactly how the Chizat and Bach limit comes about (which we think is also valuable in and of itself), understanding the behavior as a function of alpha (and not only in the limit alpha->infty) provided new insights that were not apparent in previous work:\\n\\n- We show that reaching the rich regime can be very slow, and exponentially small initialization is necessary to enter this limit. This provides an explanation for why it has often been difficult to demonstrate the rich regime empirically, and can help cast discussions on this limit in a new light. For example, [Arora+2019] cast doubt on the hypothesis that the rich regime in matrix factorization corresponds to nuclear norm minimization on the basis of several experiments where the nuclear norm is not exactly minimized. However, our work suggests that this may be explained by the fact that the initialization used isn\\u2019t (and in a sense can\\u2019t be) small enough, and gives a different twist to their results: maybe in the limit we do get nuclear norm, but the behavior before the limit is important since it\\u2019s extremely difficult to reach this limit.\\n\\n- Our analysis highlights the importance of the transition regime, between the two extremes (see discussion in the final two paragraphs of page 5). E.g., for the sparse regression problem described in Section 4, although the rich regime will lead to better learning, it is more difficult from an optimization perspective, and the \\u201ccorrect\\u201d initialization to use is in the transition. 
Our neural network experiments (see Figure 4) provide further evidence, showing that standard successful initialization schemes correspond to a point right on the boundary of leaving behind the good generalization of the rich regime and entering the kernel regime.\\n\\n- We connect the scale of initialization and the sample complexity of learning (see Figures 1c and 2c). This level of detail was not explored by previous asymptotic analyses, and it further reinforces the importance of being in the transition regime as described above.\\n\\n- We investigate how higher-order models/depth relate to the transition. In Section 5, we show that order-3+ models have rich regime behavior with dramatically larger initialization compared to the order-2 model. Similarly, in Figure 4a,b we see that deeper models have good rich regime generalization behavior at larger initializations than shallower models.\\n\\n- Finally, we develop an approach for deriving the implicit bias for a particular method in situations where it is unclear a priori what the implicit bias will be. See our response to Reviewer #1 and the discussion preceding the proof of Theorem 1 on page 11 in the updated manuscript.\\n\\nAll the above insights rely on understanding the behavior as a function of alpha, and are not possible when only considering the alpha->infty endpoint as in Chizat and Bach. Beyond these specific insights, we expect our detailed description and the methodology developed (which is entirely different from Chizat and Bach) will also serve as a basis for future investigation. \\n\\nIn summary, our work is obviously heavily influenced by Chizat and Bach, but we take their work as a starting point and go well beyond what they already analyzed.\\n\\nFor all three of the experiments in Figure 4, which each use different architectures, we see that alpha ~ 1 has good test error and slightly larger alpha starts to degrade performance. This suggests that this phenomenon is fairly consistent over different architectures, although we did not experiment with different widths explicitly. This would be an interesting experiment to conduct, although we suspect that it is no coincidence that the transition point occurs right around alpha = 1, and thus we expect our experiments to be fairly robust to changes in width.\", \"notation\": \"e_1=[1,0,0,0,...,0] is the first standard basis vector and 1_d=[1,1,1,...,1] is the vector of all-ones in R^d. We have added an explanation in the figure caption (page 6).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper investigates the two regimes in the training of overparameterized networks (with small learning rates):\\n* kernel regime: the tangent kernel doesn't change much during training. The training behavior is then well approximated by a linear model (Taylor expansion at the initialization). 
This can happen when the weights are initialized to large values.\\n* rich regime: The kernel regime is turned into a rich regime when the assumptions of the kernel regime aren't met.\\n\\nSpecifically, the paper emphasizes how the scale of initialization controls the transition between the two regimes, which was first pointed out by Chizat & Bach (2018).\\n\\nMy main concern is that it is unclear what unique contributions are made by the paper, as the theoretical results are not more general than those of Chizat & Bach (2018). The contributions are not clearly stated and I can only see the execution of ideas from Chizat & Bach (2018) and their application to more concrete examples, which leads to analytical results (for linear networks) in Theorem 1/2. This feels rather incremental.\", \"some_other_comments\": [\"In experiments it was shown that popular initialization schemes are right on the edge of entering the kernel regime, which is very interesting. How does this change with network widths and different architectures?\", \"It's difficult to see what Figure 2b tells because several notations are undefined. What are $e_1$ and $1_d$?\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper analyzes an inductive bias of the gradient flow for diagonal two- or higher-homogeneous models and characterizes a limit point depending on the initialization scale of the parameters. Concretely, the paper shows that the gradient flow converges to an interpolator attaining minimum L1- (or L2-) norm when the scale is small (or large). In addition, these analyses are well verified empirically on MNIST and CIFAR-10 datasets.\", \"quality\": \"The work is of good quality and is technically sound.\", \"clarity\": \"The paper is well organized and easy to read.\", \"significance\": \"To explain the generalization ability of powerful machine learning models that can perfectly learn a training dataset, the implicit bias of the optimization methods and models plays a key role when explicit regularization is not adopted. For instance, deep neural networks fall into this scenario. I think this paper makes a valuable contribution to this line of research. Although the homogeneous models treated in this study are restricted (essentially linear models) and the theory is limited to the continuous gradient flow, these settings are rather common in this context. In [Gunasekar+(2017)], the convergence to the minimum L1-norm solution was shown for a slightly different model when the scale goes to zero. However, in addition to this property, the paper analyzes arbitrary scales of parameters and shows the convergence to the minimum L2-norm solution when the scale goes to infinity for diagonal homogeneous models.\\nIt would be nice if the authors could emphasize the technical difficulty compared to [Gunasekar+(2017)] to strengthen the contribution of the paper.\", \"a_few_questions\": [\"Can this analysis be extended to the setting of early stopping? Toward a better explanation of the generalization performance of deep learning, understanding the inductive bias of early stopping before convergence is more important.\", \"The provided theory is essentially limited to linear models. 
Is it possible to extend the theory to non-linear models?\", \"-----\"], \"update\": \"I thank the authors for the response. I am convinced of the difference from [Gunasekar+(2017)] and my review stands. I would like to keep my score.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"I really appreciated this paper. It discusses a very complex question (\\\"Are we learning in a kernel regime, or in a rich regime where features are identified?\\\") by looking at perhaps the simplest model the authors could think of, and then studying that model in detail. And how simple it turns out to be: just a linear regression with a twist. All in all, the paper is indeed a clear demonstration that the differences between the \\\"kernel\\\" regime and one where some actual learning is done can be demonstrated on simple examples. It is also the simplest model where one can observe a non-trivial inductive bias and 'implicit regularisation'.\\n\\nI do not have much to say on the paper, except that I fully support publication.\"}" ] }
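The kernel-to-rich transition debated in the record above can be illustrated numerically. Below is a minimal NumPy sketch of gradient descent on the two-layer "diagonal" parametrization beta = u*u - v*v discussed in the rebuttals; the data, learning-rate schedule, and step counts are arbitrary illustrative choices, not the authors' experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined sparse regression: n < d, 3-sparse ground truth.
n, d = 10, 40
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:3] = 1.0
y = X @ w_star

def gradient_descent(alpha, steps=200_000):
    # Diagonal two-layer model beta = u*u - v*v, both factors initialized
    # at scale alpha so that beta itself starts at exactly zero.
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    lr = 1e-2 / (1.0 + alpha**2)  # crude step-size damping for large alpha
    for _ in range(steps):
        g = X.T @ (X @ (u * u - v * v) - y) / n  # d(loss)/d(beta)
        u -= lr * 2.0 * u * g                    # chain rule through beta
        v += lr * 2.0 * v * g
    return u * u - v * v

for alpha in (1e-3, 1.0, 1e2):
    beta = gradient_descent(alpha)
    print(f"alpha={alpha:8.0e}  L1={np.abs(beta).sum():6.2f}  "
          f"L2={np.sqrt((beta * beta).sum()):6.2f}")
# Small alpha should give a (near-)sparse, small-L1 interpolator (the
# "rich" regime); large alpha should spread mass like the minimum-L2
# interpolator (the "kernel" regime).
```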
rJecbgHtDH
A Boolean Task Algebra for Reinforcement Learning
[ "Geraud Nangue Tasse", "Steven James", "Benjamin Rosman" ]
We propose a framework for defining a Boolean algebra over the space of tasks. This allows us to formulate new tasks in terms of the negation, disjunction and conjunction of a set of base tasks. We then show that by learning goal-oriented value functions and restricting the transition dynamics of the tasks, an agent can solve these new tasks with no further learning. We prove that by composing these value functions in specific ways, we immediately recover the optimal policies for all tasks expressible under the Boolean algebra. We verify our approach in two domains, including a high-dimensional video game environment requiring function approximation, where an agent first learns a set of base skills, and then composes them to solve a super-exponential number of new tasks.
[ "Reinforcement Learning", "Transfer", "Composition", "Lifelong", "Multi-task", "Deep Reinforcement learning" ]
Reject
https://openreview.net/pdf?id=rJecbgHtDH
https://openreview.net/forum?id=rJecbgHtDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "AMKp_qAFhg", "HJeTCJi3sH", "H1gv8W93ir", "ryltpNthiB", "Syl8lYDnor", "SyxuyeBnoH", "S1g66kSnjr", "BJxdj0V2sH", "HkxCYRN3sH", "BJeb9TNhjS", "r1lQupNhjH", "S1gpyr26tH", "rklZsDLptr", "S1e4FUxTFB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741673, 1573855189417, 1573851470556, 1573848256792, 1573841133810, 1573830624125, 1573830596725, 1573830303810, 1573830277794, 1573830025067, 1573829994563, 1571828964707, 1571805081256, 1571780220277 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/Authors" ], [ "ICLR.cc/2020/Conference/Paper2145/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2145/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2145/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper considers the situation where a set of reinforcement learning tasks are related by means of a Boolean algebra. The tasks considered are restricted to stochastic shortest path problems. The paper shows that learning goal-oriented value functions for subtasks enables the agent to solve new tasks (specified with boolean operations on the goal sets) in a zero-shot fashion. Furthermore, the Boolean operations on tasks are transformed to simple arithmetic operations on the optimal action-value functions, enabling the zero short transfer to a new task to be computationally efficient. This approach to zero-shot transfer is tested in the four room domain without function approximation and a small video game with function approximation.\\n\\nThe reviewers found several strengths and weaknesses in the paper. The paper was clearly written. The experiments support the claim that the method supports zero-shot composition of goal-specified tasks. The weaknesses lie in the restrictive assumptions. These assumptions require deterministic transition dynamics, reward functions that only differ on the terminal absorbing states, and having only two different terminal reward values possible across all tasks. These assumptions greatly restrict the applicability of the proposed method. The author response and reviewer comments indicated that some aspects these restrictions can be softened in practice, but the form of composition described in this paper is restrictive. The task restrictions also seem to limit the method's utility on general reinforcement learning problems.\\n\\nThe paper falls short of being ready for publication at ICLR. 
Further justification of the restrictive assumptions is required to convince the readers that the forms of composition considered in this paper are adequately general.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply to reviewer 1\", \"comment\": [\"2)\", \">>> Can you show some quantitative result and training curves or cumulative rewards?\", \"The boxplots showing average returns will be provided.\", \">>> why did you remove the pickup action?\", \"The pickup action adds additional sparsity to the rewards in that the agent can only receive goal rewards if it chooses to pick up an object while on top of it. This makes training prohibitively difficult using standard DQN. This is why the main experiment had fixed object positions (to reduce sparsity). In the setting with different terminal states and random object positions, we are demonstrating that composition still holds without the additional constraints. So adding more sparsity with the pickup action was not needed.\", \"However, note that more sophisticated learning methods would solve this issue. We leave this to future work.\", \"4)\", \">>> I am more referring to how entropy will affect your value functions because it seems like your method strictly relies on the value function. Would entropy break some of the assumptions?\", \"In entropy-regularized RL the learned exponentiated values can be used to recover the Q-values. So our method should still work in this setting.\", \"Since we make no assumption on the learning methods, other than that they should produce the extended value functions, we believe our method works with most learning methods.\", \"We leave proper investigation of these to future work.\", \">>> Also zero-shot assumptions in this case become somewhat less appealing since you would have to retrain the actor every time.\", \"Note that our composition happens element-wise, and the best action for a given goal remains unchanged even after composition (since composition of extended value functions produces extended value functions). All that changes during composition are the values per goal. Hence our composition method still works even with just the best value and actions for each state-goal.\"]}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the explanation and new experimental results.\\n1) Regarding generality, I was more referring to compositions such as language. The example I gave is something like \\\"Move the red ball to the left of the green sphere\\\" and I wonder if this kind of composition can be expressed through Boolean algebra. Intuitively, the colors/shapes can be swapped out, and directions too. This kind of composition arises through structures like language, which is ubiquitous.\\n2) Can you show some quantitative result and training curves or cumulative rewards? As it stands I can't tell how the performance is affected, and why did you remove the pickup action?\\n3) Thank you for the explanation.\\n4) I am more referring to how entropy will affect your value functions because it seems like your method strictly relies on the value function. Would entropy break some of the assumptions? Also zero-shot assumptions in this case become somewhat less appealing since you would have to retrain the actor every time.\"}", "{\"title\": \"Reply to reviewer 2\", \"comment\": [\"Thank you for your quick response, which is greatly appreciated. 
We would just like to add that in the updated version we have separated our assumptions into Assumption 1 and Assumption 2 to make it clear what assumptions we are adding to the literature and why they are necessary.\", \"Assumption 1 is identical to that of [3].\", \"Assumption 2 says that for all tasks, goals are either desirable or not. It is introduced to give tasks the Boolean nature necessary for the Boolean algebra to be formalised. As for the intuition for this assumption, consider again the tasks \\\"collect blue objects\\\" (B) and \\\"collect square objects\\\" (S). What is the meaning of the composed task \\\"collect blue objects that are not blue\\\" (i.e. B AND NOT B), or the composed task \\\"collect square objects that are not squares\\\" (i.e. S AND NOT S)? Intuitively they are both equally meaningless. This is what the Boolean algebra formalises as the universal lower bound of tasks, M_emptyset, which is defined by all goals being equally undesirable.\", \"Our proofs for zero-shot negation, disjunction, and conjunction hold with just Assumption 1. This can be seen in the proof for the homomorphism (Theorem 3).\", \"While zero-shot composition holds for the individual operators without Assumption 2, the homomorphism does not. This is because a homomorphism requires the operators to be defined in an algebraic structure, but without Assumption 2 that structure is lost.\", \"While zero-shot negation and conjunction without additional constraints are a contribution in themselves, our work focuses on the more general logical compositions.\", \"We hope to have motivated why this is necessary for lifelong agents, and why our additional assumption to achieve this is a necessary one.\"]}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for your detailed response and effort in running the new experiments.\\n\\n> To the best of our knowledge, no prior method in any reinforcement learning setting has explored optimal zero-shot composition of arbitrary negation, disjunction, and conjunction of tasks.\\n\\nWhile this is true, I meant that references [3] and [4] you mention in your comment do address composing value functions for general reward functions, and the main reason that this method is able to handle negation, disjunction, and conjunction is the restricted sparse + goal reaching setting. So it is a more general method (with provably good composition) for a restricted set of MDPs. I really appreciate the extra experiments showing that the algorithm can be run on other MDPs, although many of the theoretical derivations do not apply in those cases.\"}", "{\"title\": \"Reply to reviewer 2 (2/2)\", \"comment\": [\"Thank you for your careful review of our paper. We hope that the following points address your concerns.\", \"3)\", \">>> it is not clear the language of Boolean algebra leads to significant insights in solving these compositional problems\", \"Note that Boolean algebra is the formal structure under which negation, disjunction, and conjunction operators are defined. Hence we use the language of Boolean algebra because that's the language of negations, conjunctions, and disjunctions. While it is popularly associated with computer logics (logic circuits), it is actually also important to many other fundamental fields, notably set theory and propositional logics.\", \"For some context/more intuition, consider learning the tasks \\u201ccollect blue objects\\u201d (B) and \\u201ccollect square objects\\u201d (S). 
We then want to immediately be able to do,\", \"\\u201ccollect blue objects or square objects\\u201d : B OR S\", \"\\u201ccollect blue squares\\u201d : B AND S\", \"\\u201ccollect square objects that are not blue\\u201d : S AND NOT B\", \"\\u201ccollect blue objects that are not squares\\u201d : B AND NOT S\", \"\\u201ccollect any objects that are not blue or square\\u201d : NOT (B OR S)\", \"etc.\", \"Note how all these statements seem like logical statements, even though there is no formal definition of logics over tasks. So intuitively we want to be able to,\", \"pose tasks as logical compositions of known tasks: For ease of task specification, rather than having to figure out the reward functions that will enable the agent to learn a desired composed task, and having to do it for every single desired composed task.\", \"immediately solve them: For ease of task completion, since learning tasks is hard, and the more we need to learn, the more infeasible it becomes under both memory and time constraints. For example, if there are, say, 1 billion achievable goals in an environment (as is easily the case in real life), then there are 2^(10^9) possible distinct tasks. An agent equipped with a Boolean algebra only needs to learn floor(log2(10^9))+1 = 30 base tasks to be able to solve any of that astronomical number of tasks.\", \"The focus in this work is to achieve these intuitions formally by,\", \"Formally establishing logics over tasks: By formalising negation, disjunction, and conjunction of tasks under a Boolean algebra.\", \"Formally showing zero-shot logical compositions of known tasks (i.e. arbitrary negation, disjunction, and conjunction of tasks): By showing the homomorphism between the task and value function spaces.\", \"Meanwhile, previous work focuses on zero-shot conjunction (optimally by [3]) and disjunction (approximately by [4]), but none considers the general case of arbitrary negation, disjunction, and conjunction.\", \"[1] Abel, David, et al. \\\"Policy and value transfer in lifelong reinforcement learning.\\\" International Conference on Machine Learning. 2018.\", \"[2] Andrychowicz, Marcin, et al. \\\"Hindsight experience replay.\\\" Advances in Neural Information Processing Systems. 2017.\", \"[3] Van Niekerk, Benjamin, et al. \\\"Composing Value Functions in Reinforcement Learning.\\\" International Conference on Machine Learning. 2019.\", \"[4] Haarnoja, Tuomas, et al. \\\"Composable deep reinforcement learning for robotic manipulation.\\\" 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.\"]}", "{\"title\": \"Reply to reviewer 2 (1/2)\", \"comment\": \"Thank you for your careful review of our paper. We hope that the following points address your concerns.\\n\\n1)\\n>>> I worry that since the setting is so constrained, it is not likely to be widely applicable. The method in the paper likely does not apply to non-sparse, non-goal reaching settings.\\n\\n- In view of your concern, we investigated the effect of dropping the constraints made on the reward functions (see appendix A.7). Figure 7 shows the average return for all composed tasks in the four-rooms domain after training the base tasks under various relaxations of our assumptions. It shows that our framework works even with dense rewards and different terminal states across tasks.\\n- Note that goal-reaching tasks and non-goal-reaching tasks represent two different areas in reinforcement learning with large bodies of work. 
In this work, we are interested in goal-reaching tasks because they lend themselves to the lifelong setting where an agent is given tasks sampled from some distribution throughout its lifetime. This setting is formally defined in [1], and our work is a step towards achieving such lifelong agents.\\n- Also, with goal-reaching tasks, if needed an agent can simply be left to continually act in the environment even after achieving a goal. For example, after learning how to collect blue objects, an agent can be left to continue collecting blue objects in the environment without terminating [3]. \\n\\n\\n\\n>>> ... prior methods have explored compositionality in that space anyways.\\n\\n- To the best of our knowledge, no prior method in any reinforcement learning setting has explored optimal zero-shot composition of arbitrary negation, disjunction, and conjunction of tasks.\\n\\n\\n2)\\n>>> One suggestion is to discuss recent goal relabeling work such as Hindsight Experience Replay (Andrychowicz 2017). Kaelbling 1993 is mentioned already, but this line of work has recently shown significant progress in learning to achieve multiple goals at the same time from a different perspective (and also considers sparse rewards).\\n\\n- Thank you for the reference to HER [2]. There has indeed been a lot of work on efficient learning in multi-goal RL, and these methods can be used to learn the extended value functions. HER, for example, can be used to learn extended value functions in the sparse-rewards setting.\\n- Note however that this is orthogonal to our work, which is more focused on formalising logics over tasks and their zero-shot composition. However, important future work would indeed revolve around efficiently learning the base extended value functions, for which methods like HER would be relevant. We will add a discussion of this in the paper.\"}", "{\"title\": \"Reply to reviewer 3 (2/2)\", \"comment\": \">>> Could this task/value composition be extended to continuing tasks?\\n\\n- In this work, we are interested in goal-reaching tasks, which are by nature episodic. There is disagreement about an exact definition of lifelong learning, but here we take it to mean the setting where an agent is given tasks sampled from some distribution throughout its lifetime, and must solve each task in turn. This setting is formally defined in [2].\\n- Note that while the tasks are episodic, if needed an agent can simply be left to continually act in the environment. For example, after learning how to collect blue objects, an agent can be left to continue collecting blue objects in the environment without terminating.\\n\\n\\n[1] Van Niekerk, Benjamin, et al. \\\"Composing Value Functions in Reinforcement Learning.\\\" International Conference on Machine Learning. 2019.\\n[2] Abel, David, et al. \\\"Policy and value transfer in lifelong reinforcement learning.\\\" International Conference on Machine Learning. 2018.\"}", "{\"title\": \"Reply to reviewer 3 (1/2)\", \"comment\": \"Thank you for your careful review of our paper. We hope that the following points address your concerns.\\n\\n1)\\n>>> Additional motivation for the definition of extended value functions would be helpful to guide the reader.\\n\\nTo understand why standard value functions are insufficient, consider two tasks that have multiple different goals, but at least one common goal. Clearly, there is a meaningful conjunction between them, namely achieving the common goal. 
Now consider an agent that learns standard value functions for both tasks, and which is then required to solve their conjunction without further learning. Note that this is impossible in general, since the regular value function for each task only represents the value of each state with respect to the *nearest* goal. That is, for all states where the nearest goal for each task is *not* the common goal, the agent has no information about that common goal. Conversely, by learning extended value functions, the agent is able to learn the value of achieving all goals, and not simply the nearest one.\\n\\n>>> Some further explanation on why extended value functions are necessary would be welcome at the beginning of section 3.2\\n\\nAs suggested, we have added the above explanation in section 3.2.\\n\\n\\n2)\\n>>> Concerning assumption 1, it seems that the assumption that the reward functions only differ on the absorbing states is fairly limiting. For example, in a navigation task, if one goal location is A, then it must be an absorbing state under this formulation.\\n\\nThe assumption that all tasks have the same transition dynamics is the reason why formally they also need to have the same absorbing states. We think of the absorbing set as the set of all achievable goals in the environment, and each task is simply defined by how desirable each of those goals is. The assumption that the reward functions only differ on the absorbing set ensures the agent's experience before reaching goal states is consistent across all tasks.\\n\\n>>> \\u2026 If we have another goal location B, then we cannot use paths through A since it is set as absorbing, even though A may be part of the shortest path to B.\\n\\n- If we want to adhere strictly to the theory, then in general, one can add an action that the agent chooses to achieve goals. For example, in the four-rooms experiments, we have a 5th action for \\u201cstay\\u201d, such that a goal position only becomes terminal if the agent chooses to stay in it. This represents the intuition that if an agent is at the goal location of a different task, and chooses to stay in it, then it has clearly chosen the wrong behaviour for the current task. Similarly, we have added a 5th action for \\u201cpickup\\u201d to the 2d game environment. The agent can now follow an optimal path to a goal object, then choose to collect it. For more intuition, assume the agent is a garbage collector and the objects are garbage. Clearly, if we ask the agent to collect plastics to make their recycling easy, we do not want the robot to also collect other garbage objects.\\n- All that being said, in practice we need not have the same absorbing set across all tasks (i.e. the transition dynamics may differ in the absorbing sets). To demonstrate this, we have run additional experiments where we drop the constraint that the terminal states across tasks must be the same (see appendix A.7), and achieve very similar results.\\n\\n\\n3)\\n>>> Could this task/value composition be extended to arbitrary reward functions?\\n\\n- The assumptions we make are for theoretical rigour. Note that the only additional assumption in comparison to the literature [1] is assumption (iv), which is the assumption that ensures tasks have a Boolean nature so that the algebra can be formally established.\\n- In practice our framework works even with dense rewards and different terminal states across tasks. We have added experiments in the four-rooms domain (see appendix A.7). 
Figure 7 shows the average return for all composed tasks after training the base tasks under various relaxations of our assumptions.\\n- The key insight that enables this level of generality is the introduction of extended value functions. While we provide a method for learning the extended value functions, a lot of work can still be done to make them more practical. The use of faster learning methods (such as hindsight experience replay) and better function approximators would improve learning these extended value functions. This is somewhat orthogonal to our main focus, but is certainly an important direction for future work.\"}", "{\"title\": \"Reply to reviewer 1 (2/2)\", \"comment\": \"3)\\n>>> In the current formulation, a policy is discouraged from visiting goals that are not in its current goal sets (receives lowest reward). While this could be just a proof artifact, it can have some performance implications. For example, in the 4 room domain, if I place a goal in the left corridor, then the agent in the bottom left room will need to take a longer route to reach top left (bottom left -> bottom right -> top right -> top left) instead of the shorter route (bottom left -> top left). From this perspective, it seems some non-trivial efforts need to be put into designing these \\\"basis\\\" tasks\\n\\n- The assumption that all tasks have the same transition dynamics is the reason why formally they also need to have the same absorbing states. We think of the absorbing set as the set of all achievable goals in the environment, and each task is simply defined by how desirable each of those goals is. The assumption that the reward functions only differ on the absorbing set ensures the agent's experience before reaching goal states is consistent across all tasks.\\n\\n- If we want to adhere strictly to the theory, then in general, one can add an action that the agent chooses to achieve goals. For example, in the four-rooms experiments, we have a 5th action for \\u201cstay\\u201d, such that a goal position only becomes terminal if the agent chooses to stay in it. This represents the intuition that if an agent is at the goal location of a different task, and chooses to stay in it, then it has clearly chosen the wrong behaviour for the current task. Similarly, we have added a 5th action for \\u201cpickup\\u201d to the 2d game environment. The agent can now follow an optimal path to a goal object, then choose to collect it. For more intuition, assume the agent is a garbage collector and the objects are garbage. Clearly, if we ask the agent to collect plastics to make their recycling easy, we do not want the robot to also collect other garbage objects.\\n\\n- All that being said, in practice we need not have the same absorbing set across all tasks (i.e. the transition dynamics may differ in the absorbing sets). To demonstrate this, we have run additional experiments where we drop the constraint that the terminal states across tasks must be the same (see appendix A.7). \\n\\n\\n4)\\n>>> Can the method proposed in this paper be used with actor-critic style? Is the max-entropy principle applicable here as well? Discussion would be great and experiments would be even better.\\n\\n- Yes, since the homomorphism (Theorem 3) holds for any F, which is essentially a learning method. The extended value functions are goal-oriented value functions, and so the learning methods in multi-goal reinforcement learning are applicable to them. 
Hindsight Experience Replay [1] for example can be used to learn extended value functions in the sparse-rewards setting. Sophisticated learning methods can be employed to make learning extended value functions more efficient, but this is orthogonal to our main aim here - we leave this to future work. Finally, our results also make no assumption about the action space, and so readily extend to the continuous action setting.\\n\\n[1] Andrychowicz, Marcin, et al. \\\"Hindsight experience replay.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"title\": \"Reply to reviewer 1 (1/2)\", \"comment\": [\"Thank you for your careful review of our paper. We hope that the following points address your concerns.\", \"1)\", \">>> My biggest concern is whether boolean algebra is the right abstraction/primitive for task-level composition.\", \"Note that Boolean algebra is the formal structure under which negation, disjunction, and conjunction operators are defined. Hence we use it because we are interested in formalising the negation, disjunction, and conjunction of tasks. While it is popularly associated with computer logics (logic circuits), it is actually also important to many other fundamental fields, notably set theory and propositional logics.\", \"For some context/more intuition, consider the lifelong setting of a domestic robot. Say it has learned tasks like \\u201cmake drink\\u201d (D), \\u201cmake tea\\u201d (T), \\u201cmake coffee\\u201d (C), \\u201cmake sugary drink\\u201d (S), \\u201cmake drink with milk\\u201d (M), etc. We then want to immediately be able to do,\", \"\\u201cmake drink with sugar\\u201d : D AND S\", \"\\u201cmake tea or any drink without coffee\\u201d : T OR (D AND NOT C)\", \"\\u201cmake coffee or tea, with milk and without sugar\\u201d : (C OR T) AND (M AND NOT S)\", \"etc.\", \"Note how all these statements seem like logical statements, even though there is no formal definition of logics over tasks. So intuitively we want to be able to,\", \"pose tasks as logical compositions of known tasks: For ease of task specification, rather than having to figure out the reward functions that will enable the agent to learn a desired composed task, and having to do it for every single desired composed task.\", \"immediately solve them: For ease of task completion, since learning tasks is hard. The more complex these tasks are and the more of them are desired, the more infeasible it becomes to learn all of them. For example, if there are, say, 1 billion achievable goals in an environment (as is easily the case in real life), then there are 2^(10^9) possible distinct tasks. An agent equipped with a Boolean algebra only needs to learn floor(log2(10^9))+1 = 30 base tasks to be able to solve any of that astronomical number of tasks.\", \"The focus in this work is to achieve these intuitions formally by,\", \"Formally establishing logics over tasks: By formalising negation, disjunction, and conjunction of tasks under a Boolean algebra.\", \"Formally showing zero-shot logical compositions of known tasks (i.e. arbitrary negation, disjunction, and conjunction of tasks): By showing the homomorphism between the task and value function spaces.\", \">>> For example, in the video game domain the author proposed, a very reasonable base task would be \\u201ccollect white objects\\u201d -- this task when composed with the task \\u201ccollect blue objects\\u201d is meaningless. 
This seems to be true for a large number of the MDPs in the super-exponential composition.\", \"The conjunction of \\u201ccollect white objects\\u201d and \\u201ccollect blue objects\\u201d is indeed meaningless, as it should be. But there are also meaningful compositions such as \\u201ccollect objects that are not white\\u201d, \\u201ccollect objects that are not white and not blue\\u201d, etc., none of which would be possible formally without a Boolean algebra. Note that the meaningless composition \\u201ccollect white objects that are blue\\u201d in the algebra reduces to \\u201ccollect any objects with low desirability\\u201d (the lower universal bound task of the environment). Hence meaninglessness is also formally defined in the Boolean algebra, since it formalises logics over tasks.\", \"2)\", \">>> Does the maze not change in the environment setup?\", \"The maze does not change. We have added another experiment where the agent and objects are randomly positioned at the start of each episode, and we achieve similar results (see Appendix A.7.2).\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a framework for composing tasks by treating tasks as a Boolean algebra. The paper assumes an undiscounted MDP with a 0-1 reward and a fixed absorbing set G, and considers a family of tasks defined by different reward functions. Each task differs only in the value of the reward function at the absorbing set G. These restrictions are quite severe but basically describe goal-state reaching sparse reward tasks, which are quite general and valuable to study. The paper then defines a mapping onto a Boolean algebra for these tasks and shows how the mapping also allows re-using optimal Q functions for each task to solve a Boolean composition of these tasks. This is demonstrated on the tabular four-rooms environment and using deep Q learning for a 2D navigation task.\\n\\nThe writing is relatively clear and the experiments support the claim in the paper that the framework allows learning compositions of skills. Both experiments show that after learning a set of base tasks, the method can solve a task in a zero-shot manner by composing Q functions according to the specified task. This capability seems very useful wherever it can be applied. But I worry that since the setting is so constrained, it is not likely to be widely applicable. The method in the paper likely does not apply to non-sparse, non-goal reaching settings, and prior methods have explored compositionality in that space anyways.\\n\\nThe coverage of prior work seems complete. One suggestion is to discuss recent goal relabeling work such as Hindsight Experience Replay (Andrychowicz 2017). Kaelbling 1993 is mentioned already, but this line of work has recently shown significant progress in learning to achieve multiple goals at the same time from a different perspective (and also considers sparse rewards).\\n\\nHowever, my main concern with this paper is that it is not clear the language of Boolean algebra leads to significant insights in solving these compositional problems. Take Figure 1, which shows the disjunction and conjunction of tasks. 
While it is true the average does not lead to the same optimal policy as the conjunction, people use it because learning from the completely sparse reward is often prohibitively difficult. This kind of reasoning is straightforward in the restricted case of MDPs considered in the paper and people can design their reward function directly without considering boolean algebra. The result and proofs about recovering optimal Q functions without further training are interesting, but again, seem straightforward in the restricted family of MDPs considered without looking at Boolean algebra. Therefore, I am currently considering the paper borderline.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a method of combining value functions for a certain class of tasks, including shortest path problems, to solve composed tasks. By expressing tasks as a Boolean algebra, they can be combined using the negation, conjunction and disjunction operations. Analogous operations are available for the optimal value functions of the tasks, which allows the agent to have immediate access to the optimal policy of these composed tasks after solving the base tasks. The theoretical composition properties are confirmed empirically on the four-rooms environment and with function approximation on a more complex domain.\\n\\nThe paper is generally well-written with a clear theoretical contribution and convincing experiments. The problem of composing tasks is important and I think this paper would be a good addition to the literature. My only concerns are with regard to the assumptions made in this formulation.\\n\\n\", \"i_would_be_willing_to_increase_my_score_if_the_authors_address_the_following_points\": \"1) Some further explanation on why extended value functions are necessary would be welcome at the beginning of section 3.2. Currently, it is only said that regular value functions are insufficient without any explanation. Also, additional motivation for the definition of extended value functions would be helpful to guide the reader.\\n\\n2) Concerning assumption 1, it seems that the assumption that the reward functions only differ on the absorbing states is fairly limiting. For example, in a navigation task, if one goal location is A, then it must be an absorbing state under this formulation. So, if we have another goal location B, then we cannot use paths through A since it is set as absorbing, even though A may be part of the shortest path to B. Would it be possible to modify this assumption to circumvent this problem? \\n\\n3) In a similar vein, could the authors discuss possible limitations to this framework? For example, could this task/value composition be extended to arbitrary reward functions and continuing tasks or are there some fundamental limitations to this approach? If lifelong learning is a motivating setting for this work, it seems like dealing with non-episodic tasks and more complex rewards would be an important goal. \\n\\n4) Fig. 3 b) does not seem to be particularly important as the result is clear enough in text. Perhaps the space could be used for something else. 
\\n\\nAs an aside, the paper is well-polished and the lack of typos is appreciated.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new framework for defining Boolean algebra over the space of tasks in goal-conditioned reinforcement learning and thereby achieving composition of tasks, defined by boolean operators, in a zero-shot manner. The paper proves that with some assumptions made about a family of MDPs, one can build Boolean algebra over the optimal Q-functions of the individual MDPs and these Q-functions are equipped with all the mathematical operations that come with the Boolean algebra (e.g. negation, conjunction). The paper verifies their theoretical results by experiments in both the 4-room domain with standard Q-learning and in a simple video game domain with high-dimensional observation space and DQN. The proofs of all the theoretical results seem sound and the experiments support the theory. I enjoyed reading this paper as the paper is generally well written and the idea is quite neat.\\n\\nThat being said, I have a few concerns and questions about the paper that I would like the authors to respond to so I am leaning towards rejecting this paper at the moment. However, I will raise my score if the revision addresses my concerns or provides additional empirical evidence. My concerns are the following:\\n\\n 1. My biggest concern is whether boolean algebra is the right abstraction/primitive for task-level composition. Thus far, the most important application of boolean algebra has been in designing logic circuits where the individual components are quite simple. In the proposed framework, it seems that all of the base tasks are required to be well-defined tasks which are already quite complex, so the utility of composing them seems limited. For example, in the video game domain the author proposed, a very reasonable base task would be \\u201ccollect white objects\\u201d -- this task when composed with the task \\u201ccollect blue objects\\u201d is meaningless. This seems to be true for a large number of the MDPs in the super-exponential composition. Furthermore, [1] also considers task-level composition with sparse reward but I think these compositions cannot be expressed by boolean algebra. One of the most important appeals of RL is its generality, so it would be great if the authors could discuss the limitations of the proposed framework and provide complex/real-world scenarios where composing these already complex base tasks is useful. Just writing would suffice as I understand setting up new environments can be difficult on short notice (Of course, actual experiments would be even better).\\n\\n 2. Does the maze not change in the environment setup? (It would be nice if source code were provided) If that is the case I would like to see additional experiments on different mazes (i.e. different placement of walls and objects). In my opinion, if there is only a single maze, then the only thing that changes is the location of the agent, which makes the task pretty easy and does not show the full benefit of function approximators. I think it\\u2019d strengthen the results if the framework generalizes to multiple and possibly unseen mazes.\\n\\n 3. 
In the current formulation, a policy is discouraged from visiting goals that are not in its current goal sets (receives lowest reward). While this could be just a proof artifact, it can have some performance implications. For example, in the 4 room domain, if I place a goal in the left corridor, then the agent in the bottom left room will need to take a longer route to reach top left (bottom left -> bottom right -> top right -> top left) instead of the shorter route (bottom left -> top left). From this perspective, it seems some non-trivial efforts need to be put into designing these \\\"basis\\\" tasks. I am curious about the discussion on this as well.\\n\\n 4. Haarnoja et al. 2018 and other works on composing Q values can be applied to high-dimensional continuous control using actor-critic style algorithms and rely on the maximum entropy principle. Can the method proposed in this paper be used with actor-critic style? Is the max-entropy principle applicable here as well? Discussion would be great and experiments would be even better.\\n\\nOut of all my concerns, 1 matters the most and I am willing to raise my score to weakly accept if it\\u2019s properly addressed. If, in addition, the authors could adequately address 2-4, I will raise my score to accept.\\n\\n=======================================================================\", \"minor_comments_that_did_not_affect_my_decision\": [\"In definition 1, it would be nice to define r_min and r_max, and g \\\\ne s \\\\in \\\\mathcal{G} is also somewhat confusing.\", \"In definition 2, \\\\pi_g is never defined\"], \"reference\": \"[1] Language as an Abstraction for Hierarchical Deep Reinforcement Learning, Jiang et al. 2019\"}" ] }
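The operators discussed throughout this record reduce to elementwise arithmetic on extended Q-tables: disjunction as a max, conjunction as a min, and negation via the extended value functions of the maximum and minimum tasks. A schematic NumPy sketch; the Q-tables below are random stand-ins for learned values, and q_sup/q_inf are faked rather than learned, so only the composition mechanics (and the base-task count from the rebuttals) are illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)

S, G, A = 6, 4, 5  # states, goals, actions -- toy sizes
q_blue   = rng.uniform(-0.1, 2.0, size=(S, G, A))  # "collect blue objects"
q_square = rng.uniform(-0.1, 2.0, size=(S, G, A))  # "collect square objects"

# Stand-ins for the extended Q-functions of the maximum task (every goal
# desirable) and minimum task (no goal desirable), chosen here only so
# that q_inf <= Q <= q_sup holds for both base tables.
q_sup = np.maximum(q_blue, q_square) + 0.1
q_inf = np.minimum(q_blue, q_square) - 0.1

q_or       = np.maximum(q_blue, q_square)   # disjunction: elementwise max
q_and      = np.minimum(q_blue, q_square)   # conjunction: elementwise min
q_not_blue = (q_sup + q_inf) - q_blue       # negation of "blue"

# "Collect square objects that are not blue" (S AND NOT B), zero-shot:
q_task = np.minimum(q_square, q_not_blue)
greedy = q_task.max(axis=1).argmax(axis=1)  # best goal, then best action
print(greedy)

# Base tasks needed to express any of the 2**n_goals Boolean tasks:
n_goals = 10**9
print(int(np.log2(n_goals)) + 1)  # 30
```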
H1xFWgrFPS
Explanation by Progressive Exaggeration
[ "Sumedha Singla", "Brian Pollack", "Junxiang Chen", "Kayhan Batmanghelich" ]
As machine learning methods see greater adoption and implementation in high-stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (e.g., saliency maps) do not explain how and why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation. These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a ``tuning knob'' to traverse a data manifold while crossing the decision boundary. Our method is model-agnostic and only requires the output value and gradient of the predictor with respect to its input.
[ "Explain", "deep learning", "black box", "GAN", "counterfactual" ]
Accept (Spotlight)
https://openreview.net/pdf?id=H1xFWgrFPS
https://openreview.net/forum?id=H1xFWgrFPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "J9xLvtkRFz", "rJxvtP93sS", "SygEn1AVsS", "Syg6pAa4oH", "rJgq6kTS5B", "H1eKOkenFS", "BJgotSzhDH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "comment" ], "note_created": [ 1576798741641, 1573853054859, 1573343147606, 1573342917055, 1572356033857, 1571712881166, 1569625475400 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2144/Authors" ], [ "ICLR.cc/2020/Conference/Paper2144/Authors" ], [ "ICLR.cc/2020/Conference/Paper2144/Authors" ], [ "ICLR.cc/2020/Conference/Paper2144/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2144/AnonReviewer1" ], [ "~Cantona_ViVian1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper presents an idea for interpolating between two points in the decision-space of a black-box classifier in the image-space, while producing plausible images along the interpolation path. The presentation is clear and the experiments support the premise of the model.\\nWhile the proposed technique can be used to help understanding how a classifier works, I have strong reservations in calling the generated samples \\\"explanations\\\". In particular, there is no reason for the true explanation of how the classifier works to lie in the manifold of plausible images. This constraint is more of a feature to please humans rather than to explain the geometry of the decision boundary.\\nI believe this paper will be well-received and I suggested acceptance, but I believe it will be of limited usefulness for robust understanding of the decision boundary of classifiers.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary of the updates to the paper.\", \"comment\": \"We want to thank the reviewers for their valuable and constructive feedback.\\n\\nWe have made the following changes in the revision of our paper.\\n1. Updated references to point to published articles.\\n2. Added human evaluation experiment in Appendix section A.4. We used Amazon Mechanical Turk (AMT) to conduct human experiments to demonstrate that the progressive exaggeration produced by our model is visually perceivable to humans. \\n3. Updated the description of Figure 1 to better explain the intent of the schematic figure.\\n4. Updated Section 4.2 to better explain the experiment with medical data.\\n5. Updated Section 4.1 with results from xGEM for data consistency and identity preservation goals.\\n6. Added another evaluation metric for identity preservation in Section 4.1. The new metric is based on face verification.\\n7. Added Appendix section with further experiments to demonstrate our model's compatibility with a multi-label classifier and, an ablation study, to show the relative importance of each of the three terms in the final loss function in equation 9.\\n\\n\\nWe hope that these changes address the reviewers' concerns. We are happy to provide any more details. We will also release our code on GitHub very soon.\\n\\nRegards,\\nThe Authors\"}", "{\"title\": \"Reply to Official Blind Review #1\", \"comment\": \"Thank you for your valuable and constructive comments.\\n\\n1. I don't understand Figure 1a. I don't think this helps to illustrate the point. M_z seems to just be a bottleneck but the writing makes it seem like it is more. \\n\\n[Ans] We agree that M_z is just a bottleneck. Through Figure 1a, we want to show the abstraction of the changes (perturbations) in the bottleneck. 
We hope the figure is conveying the message. We have added additional text to explain the figure better. If it is still misleading, we can remove the figure.\\n\\nFigure 1a is showing that the perturbation of the image happens not in high dimensional image space (M_x) but in low dimensional embedding space, i.e., M_z. And there is a correspondence between image space (M_x) and latent space (M_z).\\n\\nAlso, Figure 1a is explaining the meaning of the desired perturbation (\\\\delta). Our proposed explainer function takes two arguments: a query image and the desired perturbation. Figure 1a demonstrates that the desired perturbation is the desired change in f(x). Most of the earlier work denoted \\\\delta as the amount of change in the input image (x). \\n\\n\\n2. Section 4.2 is a bit hard to read. It is not clear to me what the goal of this section is.\\n\\n[Ans] We understand that our audience will be less familiar with the x-ray images. We are showing the results of our model for evaluating Cardiomegaly disease on chest x-ray. Cardiomegaly means an enlarged heart. We overlaid the heart segmentation over the x-ray image to help the readers in visualizing the gradual change in heart size. The black-box model didn\\u2019t use the heart segmentation or the heart size information to classify chest x-ray as Cardiomegaly. Our explanation model was successful in exaggerating the correct features (increasing the heart size), which complies with the clinical definition of the disease. \\n\\n\\nIn Section 4.2, we showed that at the population level, when we generate a counterfactual (abnormal) for a normal chest x-ray, it has a larger heart size compared to the normal population and vice-versa. Thus, the explainer was successful in correlating the heart size with the disease. Hence, the explainer verified that the black-box is considering the correct feature (heart-size) for identifying the target class (cardiomegaly). \\n\\n3. Section 4.4 seems very similar to the idea in this work https://arxiv.org/abs/1805.08841 which studied how bias in CycleGANs can be seen when you vary the bias, which I think should be cited here.\\n\\n[Ans] The conclusion of the suggested work (Distribution Matching Losses Can Hallucinate Features in Medical Image Translation) is that if the training data is biased, then GAN inference will reflect that bias. Section 4.4 is using this conclusion to detect bias. Thank you for introducing this relevant work. We have cited this paper in our updated version.\\n\\n\\n4. Typos: \\\"our application of interested\\\"\\n[Ans] We have corrected the typo in our updated version.\\n\\nWe will soon upload our revised version.\"}", "{\"title\": \"Reply to Official Blind Review #3\", \"comment\": \"Thank you reviewer for your valuable and constructive comments.\\n\\n1. The presentation is clear. The coverage of prior work is sufficient (although references should point to the published work instead of arxiv entries, when the former is available).\\n\\n[Ans] We have updated the references in the paper to the published work.\\n\\n2. One question that is not addressed is how efficient this method is, in terms of computational cost. This is a method that increases the amount of input data (through perturbation). What is the minimum amount of input data that needs to be perturbed in this way, before the method can become human interpretable?\\n\\n[Ans] \\nWe want to answer this question in terms of computational and statistical efficiency. Computationally, our model is very efficient. 
At inference time, for a new image, only a single forward pass is required to generate a series of perturbation images (explanation).\\n\\nIn terms of statistical efficiency, yes, our model requires a minimum amount of input data for GAN training. The training data should be sufficient for training the GAN and for producing realistic-looking results. However, the end-user can use any data that is compatible with the black-box model to train the explainer function, and not necessarily the same data that was used to train the black-box model. Also, we don\\u2019t require labeled (supervised) data for training our explainer function. \\n\\n3. Also, ideally any work on human interpretability of ML should be evaluated on humans. If not, it is an approximation, and it should be presented and reasoned as such (with a discussion of limitations and caveats, for instance).\\n\\n[Ans] We are currently running human experiments to test our model and will add the results in the revision. For human evaluation, we are running three tasks on Amazon Mechanical Turk. \\n\\nIn the first task, we demonstrate how humans perceive \\u201cthe progressive exaggeration\\u201d aspect of our explanation. In this task, we showed users two explanations created by our model for the same individual and asked the user to compare them in terms of a target class like age and smile (e.g., identify the image in which the person is smiling more). \\n\\nIn the second task, we show that our explanations help the user better understand the target class for the classifier. In this task, we showed users a series of images with gradual exaggeration of a target class, and asked the users to identify the target class. (e.g., what is changing in the images below? Options: Age, Smile, Gender, Nothing, Something else). \\n\\nIn the third task, we demonstrate that our model helped the user to identify problems (biased training) in a black-box. Here, we used the same setting as in the second task but also showed explanations generated from a biased classifier. \\n\\n\\nWe will soon upload our revised version.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents a method for explaining the output of black box classification of images. The method generates gradual perturbation of outputs in response to gradually perturbed input queries. The rationale is that, by looking at these, humans can interpret the classification mechanics.\\n\\nThe presentation is clear. The coverage of prior work is sufficient (although references should point to the published work instead of arxiv entries, when the former is available).\\n\\nOne question that is not addressed is how efficient this method is, in terms of computational cost. This is a method that increases the amount of input data (through perturbation). What is the minimum amount of input data that needs to be perturbed in this way, before the method can become human interpretable? \\n\\nAlso, ideally any work on human interpretability of ML should be evaluated on humans. 
If not, it is an approximation, and it should be presented and reasoned as such (with a discussion of limitations and caveats, for instance).\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Here are the claims I could find in the intro:\\n\\\"Given a query input to a black-box, we aim at explaining the outcome by providing plausible and progressive variations to the query that can result in a change to the output\\\"\\n > This is well supported as the model generates these and it is very reasonable that it can.\\n\\\"the counterfactually generated samples are realistic-looking\\\"\\n> The images seem to support this.\\n\\\"the method can be used to detect bias in training of the predictor\\\"\\n> Section 4.4 makes it really clear that, at least in the described setting, it works.\\n\\nI think the idea could be presented in a better way. The general concept of exaggerating a feature that represents a class seems novel and exciting. Just based on the novelty of that alone I think this is worth accepting. I would imagine there would be a cleaner way of achieving all this but maybe it is all necessary.\\n\\nI don't understand Figure 1a. I don't think this helps to illustrate the point. M_z seems to just be a bottleneck but the writing makes it seem like it is more.\\n\\nSection 4.2 is a bit hard to read. It is not clear to me what the goal of this section is.\\n\\nSection 4.4 seems very similar to the idea in this work https://arxiv.org/abs/1805.08841 which studied how bias in CycleGANs can be seen when you vary the bias, which I think should be cited here.\", \"typos\": \"\\\"our application of interested\\\"\"}", "{\"comment\": \"This is OPEN REVIEW, not a place just for SELF-PROMOTION.\", \"title\": \"Shameless\"}" ] }
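As a closing note on this record: the authors describe their explainer as a function of a query image and a desired change delta in the classifier output f(x), used as a tuning knob. A sketch of how such a sweep might be driven appears below; `black_box` and `explainer` are hypothetical stand-ins for the trained models (not the paper's actual API), and the monotonicity check is only one plausible sanity test.

```python
import numpy as np

def exaggeration_sweep(x, black_box, explainer, deltas=np.linspace(-0.4, 0.4, 9)):
    """Sweep desired changes in f(x) and record the black-box response.

    black_box(x) -> posterior probability of the target class for image x.
    explainer(x, delta) -> perturbed image whose prediction should shift by delta.
    Both interfaces are hypothetical and used only for illustration.
    """
    results = []
    for delta in deltas:
        x_prime = explainer(x, delta)  # one forward pass per perturbation
        results.append((float(delta), float(black_box(x_prime))))
    return results

def is_monotone(results):
    # The posterior should increase as the requested exaggeration increases.
    probs = [p for _, p in sorted(results)]
    return all(a <= b for a, b in zip(probs, probs[1:]))
```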
ryxtWgSKPB
Quantum Optical Experiments Modeled by Long Short-Term Memory
[ "Thomas Adler", "Manuel Erhard", "Mario Krenn", "Johannes Brandstetter", "Johannes Kofler", "Sepp Hochreiter" ]
We demonstrate how machine learning is able to model experiments in quantum physics. Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography. Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels. Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it. To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states. In this work, we show that machine learning models can provide significant improvement over random search. We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves. This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.
[ "Recurrent Networks", "LSTM", "Sequence Analysis", "Binary Classification" ]
Reject
https://openreview.net/pdf?id=ryxtWgSKPB
https://openreview.net/forum?id=ryxtWgSKPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "z5z1tE1UOS", "rJgKzbZpKB", "B1gPU8eTtS", "rJe7c2NjKB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741610, 1571782929041, 1571780175366, 1571667083466 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2143/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2143/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2143/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper predicts properties of quantum states through RNNs. The idea is nice, but the results are very limited and require more work. It seems to be more suited for a conference focussing on quantum ML---even when the authors have an ML background.\\n\\nAll reviewers agree on a rejection, and their arguments are solid. The authors offered no rebuttal.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper looks at the problem of predicting 2 properties of quantum states produced by an optical table consisting of a sequence of physical elements that modify a quantum state in a particular way, given the sequence of optical elements applied. The authors train 2 separate recurrent networks (LSTMs). The first one takes a sequence of optical elements and predicts a binary boolean label that answers the question of whether the resulting quantum state would be maximally entangled. The second one takes a sequence of optical elements and predicts the Schmid rank vector of the resulting state -- a linear algebraic quantity that is used in quantum information science. The authors explore whether the trained networks generalize to longer sequences of optical elements than trained on, and conduct other checks on the result.\\n\\nI really appreciate that the authors checked the applicability to sequences of elements longer than trained on. Another great point is the separation of the problem into the two LSTM, so that spurious correlations between the Schmid rank vector and entanglement cannot be used by the NN.\\n\\nWhile overall I find the problem interesting and the authors\\u2019 approach reasonable, I have a large number of questions / points of confusion that I will detail below.\\n\\n--------------- Point 1 ----------------\\nIt is unclear what is actually fed into the optical table. My interpretation of the paper is that the input state to the optical table *must be fixed*, since it is not an input to the LSTM. This severely limits the applicability of the framework presented.\\n\\nI was unable to find what quantum state is actually being input into the sequence of optical elements whose effect you are trying to model. Considering that the properties of the quantum state /after/ the application of the set of N optical elements depend very strongly on the *input quantum state*, I find this omission significant. For an input state rho_0, each element L (which in this particular case is the input to the LSTM), will transform the state as rho -> L rho -> rho\\u2019. A series of N elements can be viewed as an application of a sequence of N operations L_1, \\u2026. , L_N. Therefore the resulting state is rho_final = L_N L_{N-1} \\u2026. L_1 rho_0. 
As this composition makes clear, the output state rho_final depends *both* on a) the sequence of applied operations / optical elements (L1,L2, \\u2026, LN) as modeled in this paper, and b) the actual input state rho_0. Since what is actually fed into the LSTM is only the sequence of optical elements (L1,L2, \\u2026, LN), while the loss depends on the properties of rho_final, I believe rho_0 must have been assumed fixed and constant throughout the whole paper (i.e. that every data point in the train and test set actually had the same rho_0). If that is the case, the problem that the paper addresses is not /predicting the properties of a quantum state produced by a particular sequence of optical elements/, but rather /predicting the properties of a quantum state produced by a particular sequence of optical elements *given a fixed, constant input state, and no other input state*/. That is a much more restricted regime. I think the authors should be clearer about the actual problem their paper is addressing.\\n\\n--------------- Point 2 ----------------\\nLack of baselines. The problem setup and metrics used are quite specific (understandably) to the problem at hand. However, I do not have intuitions for what a \\u201cbaseline\\u201d performance should be and therefore cannot validate whether the approach presented improves on it. I would like to see something equivalent to the 10% chance result on a 10-class classification task. To be more precise, while the binary classification of the maximally entangled nature of the quantum state is clear, the L_2 distance on the sorted 3-tuple of the Schmidt rank vector is less so. The explicitly encoded sorted nature of the output 3-tuple (it is parameterized as such) makes it hard for me to make sense of the L_2 distance threshold r that the authors use (which is in fact r=3 in some sections, i.e. quite large). I think that adding a baseline performance of some sort would greatly improve my ability to see how much better the LSTM approach performs.\\n\\n--------------- Point 3 ----------------\\nThe datatype of the LSTM inputs is unclear to me. As I discussed in point 1), the resulting quantum properties of the output state after the application of N optical elements depend on the input state as well as the sequence of elements applied. I am not clear on how the authors encode the optical elements themselves -- i.e. what datatype are the inputs x1, \\u2026 , xN to the LSTM. If they model them categorically as one-hot vectors, i.e. a 0 degree orientation polarization plate as (1,0,0,0,...0), a beam splitter as (0,1,0,0,....,0), a 90 degree polarization plate as (0,0,1,0,....,0) etc, they are restricting the applicability of the trained models to that particular set of optical elements. For example, if I wanted to know what adding a polarization plate with orientation alpha will do, I would not be able to do that. If it indeed is true that the inputs are drawn from a small number of possible optical elements that has to be specified prior to training, this would again limit the scope of applicability of the results presented.\\n\\n--------------- Point 4 ----------------\\nStates as state vectors or density matrices? It wasn\\u2019t clear to me what the actual state that is being fed into the optical table is. This might be a minor point, but Equation 1 suggests it is in fact a state vector, while I thought quantum optical experiments work with density matrices. This is important for the dimensionality considerations involved, since e.g. 
for a set of M qubits, the state vector would have 2^M elements (-constraints), while the density matrix would have 2^(2M) elements (-constraints). Solving the latter problem would therefore be more impressive than the former.\\n\\n--------------- Point 5 ----------------\\nAre you always working with 3 subsystems/photons? From reading the paper I didn\\u2019t understand whether you were restricting your setup to always work in the 3 photon regime that you mentioned in Equation 1. What is the range of the values that can appear in the Schmidt rank vector? If they are small, then setting the threshold r=3 would be very generous and would make it easy for the LSTM. \\n\\n--------------- Conclusion ----------------\\nWhile I like the problem and the attempted solution, the results presented (as I understood them) are more restricted and therefore weaker than I originally expected. I believe the results are promising, but more clarity and more work on generalizing beyond the restricted regime presented would greatly improve the paper.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper investigates how LSTM networks learn quantum optical experimental setups and predict characteristics of resulting quantum states. While the content of this paper may be of interest to quantum computing specialists (I am unable to actually judge that), my impression is that it is out of scope for ICLR. Not by default, but by the style of writing and lack of effort to make the studied problem accessible to a non-expert audience. After a general introduction the paper starts immediately with a physics notation for wavefunctions (this, one could argue, is still generic enough for many scientists to understand), but right after it talks of orbital angular momentum and Schmidt rank vector. I do have a background in physics, but no speciality in quantum computing, and at the end of section 2.1 I am already lost. Subsequent sections are clearer but since I did not understand how the data is really represented, it is impossible for me to gain insight on what is actually being done. The same is true with the evaluation as I do not know what I should be comparing to and what the standard methods give. The paper could have been a nice opening towards new applications for ICLR but it would have to be written in a much more pedestrian manner. In the present form I argue for rejection.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed to use machine learning models to predict certain properties of complex quantum systems. In quantum physics experiments, one needs to randomly search millions of experimental setups to find interesting experiments. This paper showed that machine learning models can provide significant improvement over random search.\\n\\nIn general, this paper is easy to follow and clearly presents the main idea and verifies the effectiveness of the proposed method. However, I still have some concerns about this paper. This paper only used the machine learning method, i.e., an 
LSTM model and a binary cross entropy loss function, for solving quantum physics tasks, which provides little new knowledge to the machine learning or representation learning community. In my opinion, this paper should be submitted to a quantum physics journal or conference, rather than a machine learning conference.\"}" ] }
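On Point 3 of Review #3 above (the datatype of the LSTM inputs): a categorical one-hot encoding of the optical elements, which is only the reviewer's guess and not something the paper confirms, would look like the sketch below. It makes the limitation explicit: a continuous parameter such as a plate orientation alpha has no representation in this scheme.

```python
import numpy as np

K = 12                   # size of a fixed vocabulary of optical elements (assumed)
setup = [3, 0, 7, 7, 1]  # a 5-element setup as indices into that vocabulary

# One row per element: the sequence x_1, ..., x_N fed to the LSTM.
one_hot_sequence = np.eye(K)[setup]  # shape (5, 12)

# Any element outside the vocabulary, e.g. a polarization plate at an
# arbitrary orientation alpha, simply cannot be encoded in this scheme.
```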
S1l_ZlrFvS
Why do These Match? Explaining the Behavior of Image Similarity Models
[ "Bryan A. Plummer", "Mariya I. Vasileva", "Vitali Petsiuk", "Kate Saenko", "David Forsyth" ]
Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.
[ "explainable artificial intelligence", "image similarity", "artificial intelligence for fashion" ]
Reject
https://openreview.net/pdf?id=S1l_ZlrFvS
https://openreview.net/forum?id=S1l_ZlrFvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qD1q5Z6ta", "rJxnfmhhiB", "rkgosaFjoB", "Hygmw6tsor", "BygfkaYosr", "H1xPdntjiH", "HJl17Yad5B", "S1gkMmh1qH", "BJeHgcM6KH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741580, 1573860115783, 1573784994917, 1573784922589, 1573784794228, 1573784687179, 1572555030766, 1571959558721, 1571789293121 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2142/Authors" ], [ "ICLR.cc/2020/Conference/Paper2142/Authors" ], [ "ICLR.cc/2020/Conference/Paper2142/Authors" ], [ "ICLR.cc/2020/Conference/Paper2142/Authors" ], [ "ICLR.cc/2020/Conference/Paper2142/Authors" ], [ "ICLR.cc/2020/Conference/Paper2142/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2142/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2142/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This submission proposes an explainability method for deep visual representation models that have been trained to compute image similarity.\", \"strengths\": \"-The paper tackles an important and overlooked problem.\\n-The proposed approach is novel and interesting.\", \"weaknesses\": \"-The evaluation is not convincing. In particular (i) the evaluation is performed only on ground-truth pairs, rather than on ground-truth pairs and predicted pairs; (ii) the user study doesn\\u2019t disambiguate whether users find the SANE explanations better than the saliency map explanations or whether users tend to find text more understandable in general than heat maps. The user study should have compared their predicted attributes to the attribute prediction baseline; (iii) the explanation of Figure 4 is not convincing: the attribute is not only being removed. A new attribute is also being inserted (i.e. a new color). Therefore it\\u2019s not clear whether the similarity score should have increased or decreased; (iv) the proposed metric in section 4.2 is flawed: It matters whether similarity increases or decreases with insertion or deletion. The proposed metric doesn\\u2019t reflect that.\\n-Some key details, such as how the attribute insertion process was performed, haven\\u2019t been explained. \\n\\nThe reviewer ratings were borderline after discussion, with some important concerns still not having been addressed after the author feedback period. Given the remaining shortcomings, AC recommends rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Addition\", \"comment\": \"- ...the saliency map might highlight regions of the zipper and where the black color is present...\\n\\nAs you noted, a saliency map might represent more than one attribute. We evaluated the performance of the top ranked attribute, but one could return the top K attributes using our model. We didn\\u2019t do this because attributes are often very strongly correlated and there is no generally accepted procedure for accounting for correlated attributes in the score.\"}", "{\"title\": \"Response to R1\", \"comment\": \"We have used many of your comments to improve our paper in our updated pdf. Direct responses to questions are addressed below.\\n\\n- What is the context in improving image similarity explainability? 
I believe examples in industry or medicine could be found to highlight the story of the paper.\\n\\nSome example applications have been added to the introduction.\\n\\n\\n\\n- Saliency-based explanations, the paper refers to white-box models but does not offer explanations as to why Mask is chosen over other methods (gradcam, guided backprop etc).\\n\\nMask has been shown to perform better than many other white-box alternatives like gradcam and guided backprop on many tasks.\\n\\n\\n\\n- TCAV is mentioned, as far as I know, the method works with concepts as images against random images. Here attributes are used as the concepts, how are the random counterparts selected?\\n\\nTCAV is now mentioned in the related work. The random images are selected from those which are not annotated with the target concept.\"}", "{\"title\": \"Response to R2\", \"comment\": \"- Applications of such a combined explanatory system don\\u2019t seem to be highly motivated in the introduction. I suggest the authors discuss more of the image similarity based applications and less on the discussion and heavy citation of generalized deep neural networks.\\n\\nWe have made updates to the pdf to discuss this more.\\n\\n\\n\\n- It would have been more useful to give the reasoning for the selection of the L1 and L2 losses compared to other similarity and divergence based losses.\\n\\nAs discussed in the general comments, the L1 loss performs better than alternatives like sigmoid + binary cross entropy. When comparing saliency map and attribute activation maps we use L2 loss, but these maps are compared for the same image. Thus, these maps should align with each other exactly, making distance-based losses like L2 a good choice.\\n\\n\\n\\n- Similar to the above point, the choice of cosine similarity to compare the match between attribute activation maps and saliency maps seems arbitrary. The method is described well but why cosine similarity was chosen in terms of its benefits compared to other similarity metrics is not that clear.\\n\\nAs with the above point, since the maps being compared are from the same image, using distance-based metrics is ideal. Cosine similarity, in fact, is a very desirable similarity function since it effectively normalizes the features.\\n\\n\\n\\n- Evaluation on more datasets such as person/pedestrian attributes datasets would have demonstrated the generalizability of the proposed method across multiple practical domains. As such, I would suggest the authors test their method on at least one person/pedestrian attributes dataset such as PETA, Market1501, etc.\\n\\nWe show that our approach performs well on two datasets from very different domains. Unfortunately, running additional datasets such as those referred to by the reviewer is not feasible within the rebuttal period.\\n\\n\\n\\n- A simple template based explanation that incorporated the selected/matched attribute would have been more effective.\\n\\nA template-based explanation is how we would expect our approach to be used in practice, and Figure 1(b) shows an example of how this would work. However, due to space constraints, for other qualitative results we showed the explanation attribute alone.\\n\\n\\n\\n- The results are too concise and a few ablation results on different losses etc. 
could have helped.\\n\\nAs shown in the general comments and discussed earlier, we found other losses tend to hurt performance.\"}", "{\"title\": \"Response to R4\", \"comment\": \"- I would suggest the authors include a brief explanation of the architecture used for the attribute predictor since it will help to understand how the attribute activation is computed. I am assuming that a Fully-Convolutional Neural Network is being used, where the output of the last convolutional layers has as many channels as the number of classes. Is this correct?\\n\\nYes, this is correct, the pdf has been updated accordingly.\\n\\n\\n\\n- Why use softmax + L1 loss to train the multi-attribute predictor? Aren't there other activations and losses better suited for multi-label classification, such as sigmoid + binary cross entropy loss, where there's no need to divide the ground-truth labels?\\n\\nAs discussed in our general comments, this is because softmax + L1 loss performed better.\\n\\n\\n\\n- My first question is: how is the best matching attribute matched? I missed this explanation in the paper and, to my understanding, this is a very crucial step.\\n\\nAt test time, the attribute explanation is selected using Eq. (4). During training, we compute the loss function when supervising the attribute explanation maps with the saliency maps using Eq. (2).\\n\\n\\n\\n- My second concern is that I don't see why the attribute activation map should be matched with the similarity saliency map since not all the regions highlighted in the similarity map might describe the attribute. Could the authors explain the intuition behind this design choice?\\n\\nOur hypothesis is that the regions identified as important by a saliency map should be able to be explained by an attribute, and so the attribute most prominent at explaining the match should have an attribute activation map that is close to the saliency map. The intuition is that, typically, the most explanatory attribute for a match would be some salient property for the query image that dominates over others, and we find that to be empirically true - for instance, looking at the qualitative results in the appendix, we can see that the saliency maps are often well-localized, highlighting specific regions (e.g., the heel of a pair of high-heels). Even though there may be some cases where we expect the loss to be noisy, overall we found it improved performance. Thus, we can infer the saliency maps do follow our intuition much of the time, and that our hypothesis appears to be valid (i.e., the high saliency regions can be described by an attribute).\\n\\n\\n\\n\\n- How are automatically discovered attributes going to be useful in order to provide a description, given that they are not associated with any word or concept?\\n\\nOne could produce a human-interpretable label for these regions by showing the clusters to a human annotator and asking them to label them. This would still be vastly more efficient than asking for complete attribute annotations for each individual image, and the attributes that are collected would exactly match those that would be important for explanations.\"}", "{\"title\": \"General Comments\", \"comment\": \"We thank the reviewers for their time and insightful comments. 
Reviewers found that our paper addressed an interesting problem (R4) and introduced an interesting model (R4, R1) that has many applications (R2).\\n\\nMultiple reviewers asked about our choice of loss functions, e.g., why softmax + L1 was used for our attribute recognition loss rather than alternatives like sigmoid + binary cross entropy. This is because softmax + L1 loss performed better in our experiments (e.g., sigmoid + binary cross entropy loss got 69.8/70.5 insertion/deletion while softmax + L1 got 71.7/73.3 on Polyvore Outfits). This is likely due, in part, to the fact that sigmoid + binary cross entropy makes independent predictions for the presence of each attribute, whereas softmax + L1 loss trains a model where the attribute scores are calibrated so that relative scores of the attributes for an image are more meaningful. Since our task is to select which attribute is most relevant as an explanation, having calibrated scores is important. That said, we still saw similar performance gains using SANE over the baselines even when using sigmoid + binary cross entropy, even though it worked worse overall than our approach.\\n\\nAdditional questions are answered directly in our response to each reviewer.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper introduces SANE, a new approach for explaining image similarity models by combining a saliency map generator and an attribute predictor. In this way, the method is not only able to highlight what regions contribute the most to the similarity between a query image and a reference image, but also predict an attribute that explains this match. During training, SANE jointly optimizes the attribute prediction of the query image and maximizes the overlap of the saliency map of the image similarity and the attribute activations.\\n\\nI think the paper addresses a very interesting problem that has been commonly overlooked. There are many recent works on the explainability of neural networks for image classification and other similar tasks, but very few have addressed this problem for image similarity. The addition of an attribute predictor to the system, which provides additional information that cannot be captured by the saliency map alone, is also novel and interesting. Finally, the paper also makes a significant effort in presenting a quantitative study of how SANE is able to explain similarity models.\\n\\nHowever, I would also like to raise a couple of issues/questions regarding the method and its technical contribution:\\n\\n- I would suggest the authors include a brief explanation of the architecture used for the attribute predictor since it will help to understand how the attribute activation is computed. I am assuming that a Fully-Convolutional Neural Network is being used, where the output of the last convolutional layers has as many channels as the number of classes. Is this correct?\\n\\n- Why use softmax + L1 loss to train the multi-attribute predictor? Aren't there other activations and losses better suited for multi-label classification, such as sigmoid + binary cross entropy loss, where there's no need to divide the ground-truth labels?\\n\\n- In order to match image similarities with attribute descriptions, the authors propose matching similarity saliency maps with attribute map activations. 
This is done by first computing a saliency map for the similarity between a query image and a reference image, computing the activation maps of the ground-truth attributes of the query image, then finding the attribute activation that best matches the saliency map, and finally minimizing the distance between the saliency map and the attribute activation using an L2 loss (cf. last paragraph of Section 3.1). My first question is: how is the best matching attribute matched? I missed this explanation in the paper and, to my understanding, this is a very crucial step. My second concern is that I don't see why the attribute activation map should be matched with the similarity saliency map since not all the regions highlighted in the similarity map might describe the attribute. For example, if we're comparing two images containing a jacket and both contain the attributes \\\"zipper\\\" and \\\"black\\\", the saliency map might highlight regions of the zipper and where the black color is present, but the activations of the attribute \\\"black\\\" should not be enforced to match the regions of the zipper. Could the authors explain the intuition behind this design choice?\\n\\n- A final minor comment: as I mentioned before, the introduction of attributes in the explanation process is a very interesting contribution since it provides the user with an explanation that is a step closer to a description in natural language. However, this comes at the price of needing attribute annotations at training time. In order to overcome this problem, the authors suggest using an attribute discovery method when no attribute annotations are provided. My question therefore is: how are these automatically discovered attributes going to be useful in order to provide a description, given that they are not associated with any word or concept?\\n\\n\\nAlthough the paper proposes a very interesting approach for explaining image similarity models, I also have some concerns that I think should be addressed before its acceptance. Therefore, my initial recommendation is weak reject.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Overview/Contribution:\\n====================\\nThe paper proposes an explanation mechanism that pairs the typical saliency map regions together with attributes for similarity matching deep neural networks. The authors tested their methods on two datasets, i.e., Polyvore Outfits (a clothing attributes dataset) and Animals with Attributes 2 (a dataset of attributes for animals).\\n\\nOverall, the paper has merit to be accepted to the conference with the following strengths and weaknesses. I suggest the authors address the weaknesses pointed out to make the paper stronger, especially adding a few more attribute datasets such as person attributes datasets as noted below in the weakness section.\", \"strength\": [\"========\", \"The paper is written clearly and is easy to understand. I have seen the additional results and visual comparisons in the supplemental material and it was useful, albeit a bit long.\", \"Explanations have the potential to make decisions made by a deep neural model transparent to end users among other benefits especially for sensitive applications such as healthcare and security. 
Explaining decisions made by similarity matching models has many applications including person attribute recognition and person re-identification for surveillance scenarios [1]. So, in this respect, this paper is relevant to the target audience.\", \"There is a bit of confusion between explanation and interpretation of decisions made by deep neural network models in the explainable AI literature and in most cases the two are used interchangeably. Hence, saliency maps are considered as explanations on their own by many. Combining saliency map based interpretations together with higher level concepts such as attributes has the potential to generate more realistic explanations of the decisions. The authors made this point in the second paragraph of the introduction.\", \"Fig. 1 (b) is also a clear example of the kind of explanations generated using a template with the key attribute in question accompanied by the visual saliency map interpretation.\", \"Fig. 2 clearly shows the overall proposed method and the attribute ranking based on the attributes explanation prior and the match between the saliency map and attribute activation maps.\", \"The attribute ranking and selection method of informative attributes using a weighted combination of TCAV and cosine similarity between the attribute activation map and the generated saliency map is novel.\"], \"weakness\": [\"===========\", \"Applications of such a combined explanatory system don\\u2019t seem to be highly motivated in the introduction. I suggest the authors discuss more of the image similarity based applications and less on the discussion and heavy citation of generalized deep neural networks.\", \"The forms of the two loss components are both variants of l_{1} and l_{2} standard losses and they could be subject to issues with the standard variants of the l_{1} and l_{2} losses such as lack of translation and other transformation invariances. Hence, it would have been more useful to give the reasoning for the selection of the losses employed compared to other similarity and divergence based losses that are less sensitive to such variations.\", \"Similar to the above point, the choice of cosine similarity to compare the match between attribute activation maps and saliency maps seems arbitrary. The method is described well but why cosine similarity was chosen in terms of its benefits compared to other similarity metrics is not that clear.\", \"Evaluation on more datasets such as person/pedestrian attributes datasets would have demonstrated the generalizability of the proposed method across multiple practical domains. As such, I would suggest the authors test their method on at least one person/pedestrian attributes dataset such as PETA, Market1501, etc.\", \"Although Fig. 1 (b) motivated a more practical high level explanation, in the results section, the attribute explanations are reduced to just the selected attribute that matched with the saliency well. Human-like concise attribute-based high level explanation just like the example given in Fig. 1 (b) would have made the paper stronger. Even if NLP is beyond the scope of this paper, a simple template based explanation that incorporated the selected/matched attribute would have been more effective.\", \"The results are too concise and a few ablation results on different losses etc. could have helped. There are too many qualitative results, especially in the supplementary.\", \"1) Bekele, E., Lawson, W. E., Horne, Z., & Khemlani, S. (2018). 
Implementing a Robust Explanatory Bias in a Person Re-identification Network. In\\u00a0Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops\\u00a0(pp. 2165-2172).\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"I Summary\", \"This paper proposes a novel method for explaining image similarity models, introducing Salient Attributes for Network Explanation (SANE). The method identifies attributes that contribute positively to the similarity score, thus explaining the important image properties, and pairs them with a generated saliency map unveiling the important regions of the image. The method combines three major components:\", \"An attribute explanation model\", \"A saliency map generator where three \\\"black box\\\" algorithms are tested (sliding window, RISE, and LIME) and one \\\"white box\\\" (Mask)\", \"An attribute explanation suitability prior is computed by the weighted combination of the TCAV scores of an attribute, its confidence score and the matching of its activation map with the generated saliency map\", \"Using the saliency maps as supervision for the attribute activation maps seems to improve attribute explanations. The obtained explanations help users understand the model's predictions and build trust.\", \"II Comments\", \"Overall the paper is well written and presents an interesting method for explaining image similarity models. However, from a writing perspective, it can be hard to follow as the paper lacks story-telling as to why such or such methods were chosen/implemented.\", \"1. Content\", \"While this work is conceptually interesting, the technical novelty and contributions don't stand out as much as they could. What is the context in improving image similarity explainability? I believe examples in industry or medicine could be found to highlight the story of the paper. Why is one method used over another? (TCAV, Mask, etc.; what led to this choice?)\", \"In 2. Related work, Saliency-based explanations, the paper refers to white-box models but does not offer explanations as to why Mask is chosen over other methods (gradcam, guided backprop etc).\", \"In 3.3 TCAV is mentioned, as far as I know, the method works with concepts as images against random images. Here attributes are used as the concepts, how are the random counterparts selected? Moreover, the section on TCAV should be in the related work, whereas how it is used for this specific case would be described in 3.3.\", \"In eq 4, \\u00e2 is mentioned but s is used.\", \"In 4.2 there is a small user study to verify if the explanations were useful. The study is a nice addition; I really like this kind of result! It would be even more interesting if it compared the results with other baselines.\", \"The \\\"discovering attributes\\\" part in the appendix is promising, this is something that could be referred to in the conclusion.\", \"2. Writing\", \"Intuitive and well-described explanations are given in most paragraphs with examples (3.2, Manipulating similarity scores, the button example) which give a good understanding of the problem. This led to a better comprehension of the challenges.\", \"Small typos, did not impact the score:\", \"section 3. 
l 7 explanation -> explain\", \"section 4.2, results, l 5 effects -> affects\", \"III Conclusion\", \"The idea is interesting and seems to yield good results, especially in the appendix with the discovering attributes methods. The paper could sell itself a little better with more context/applications where it could be used.\"]}" ] }
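The loss-function discussion that runs through the general comments and the reviews above can be made concrete. Below is a minimal PyTorch sketch contrasting the two options; the label normalization (the "divide the ground-truth labels" step Review #4 mentions) is our reading of the exchange, and the paper's exact loss may differ in detail.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 50)  # batch of 8 images, 50 candidate attributes
multi_hot = (torch.rand(8, 50) < 0.1).float()  # ground-truth attribute labels
multi_hot[:, 0] = 1.0  # guarantee at least one positive attribute per image

# Softmax + L1, as the authors describe: normalize the multi-hot labels into a
# distribution, then penalize the L1 gap to the predicted distribution. This
# calibrates the *relative* scores of attributes within an image.
target_dist = multi_hot / multi_hot.sum(dim=1, keepdim=True)
loss_softmax_l1 = (F.softmax(logits, dim=1) - target_dist).abs().sum(dim=1).mean()

# Sigmoid + BCE baseline: independent per-attribute predictions, with no
# calibration of scores across attributes (the property the authors say matters
# when picking the single most explanatory attribute).
loss_sigmoid_bce = F.binary_cross_entropy_with_logits(logits, multi_hot)
```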
rkeO-lrYwr
Mode Connectivity and Sparse Neural Networks
[ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M. Roy", "Michael Carbin" ]
We uncover a connection between two seemingly unrelated empirical phenomena: mode connectivity and sparsity. On the one hand, there is a growing catalog of situations where, across multiple runs, SGD learns weights that fall into minima that are connected (mode connectivity). A striking example is described by Nagarajan & Kolter (2019). They observe that test error on MNIST does not change along the linear path connecting the end points of two independent SGD runs, starting from the same random initialization. On the other hand, there is the lottery ticket hypothesis of Frankle & Carbin (2019), where dense, randomly initialized networks have sparse subnetworks capable of training in isolation to full accuracy. However, neither phenomenon scales beyond small vision networks. We start by proposing a technique to find sparse subnetworks after initialization. We observe that these subnetworks match the accuracy of the full network only when two SGD runs for the same subnetwork are connected by linear paths with no change in test error. Our findings connect the existence of sparse subnetworks that train to high accuracy with the dynamics of optimization via mode connectivity. In doing so, we identify analogues of the phenomena uncovered by Nagarajan & Kolter and Frankle & Carbin in ImageNet-scale architectures at state-of-the-art sparsity levels.
[ "sparsity", "mode connectivity", "lottery ticket", "optimization landscape" ]
Reject
https://openreview.net/pdf?id=rkeO-lrYwr
https://openreview.net/forum?id=rkeO-lrYwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "iFpAc-yVKO", "B1gQ7ILjsB", "S1g7aB8isB", "S1eMPBLoiS", "HkeWmS8iiS", "BkxR3EIjjH", "BJeG9EUsor", "rJlhQU77jr", "HygFbYdatS", "HyxwWTFAOS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741547, 1573770779173, 1573770683505, 1573770586297, 1573770521086, 1573770421534, 1573770378511, 1573234212130, 1571813633272, 1570835710654 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2141/Authors" ], [ "ICLR.cc/2020/Conference/Paper2141/Authors" ], [ "ICLR.cc/2020/Conference/Paper2141/Authors" ], [ "ICLR.cc/2020/Conference/Paper2141/Authors" ], [ "ICLR.cc/2020/Conference/Paper2141/Authors" ], [ "ICLR.cc/2020/Conference/Paper2141/Authors" ], [ "ICLR.cc/2020/Conference/Paper2141/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2141/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2141/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper investigates theories related to networks sparsification, related to mode connectivity and the so-called lottery ticket hypothesis. The paper is interesting and has merit, but on balance I find the contributions not sufficiently clear to warrant acceptance. The authors made substantial changes to the paper which are admirable and which bring it to borderline status.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author Response to Reviewer 4 (Part 1)\", \"comment\": \"\", \"note\": \"We have posted an updated version of the paper that has been substantially restructured and rewritten to address your concerns. We highly recommend looking over the new paper.\\n\\nWe have summarized these changes in a general response (posted as a top-level comment). We ask that you read our general response before returning to this point-by-point response. We address many of your concerns there.\\n\\n--------------------\\n\\n> 1) The scope of the experiment is limited to a quite specific setting, \\n> The experiments only show [the relationship between mode connectivity and sparsity] is true in a limited setting, focusing on specific pruning method and at a specific sparsity level.\\n> Stability was tested only at one specific sparsity level\\n> The paper also focused on cases where matching subnetworks were found by IMP, but matching subnetworks can also be found by other pruning methods. \\n\\nWe choose to focus specifically on IMP and the most extreme sparsities for which IMP can find a matching subnetworks for any rewinding iteration.\\n\\nWhy IMP? IMP produces particularly sparse matching subnetworks and is the algorithm behind current lottery ticket results, so we are interested in studying the networks that it produces for both scientific understanding of the lottery ticket hypothesis and potential practical lessons for training extremely sparse networks to high accuracy.\\n\\nWhy extreme sparsities? In general, sparse neural networks are more difficult to train from scratch. At extreme levels of sparsity, many classes of sparse networks (e.g., those produced by randomly reinitializing pruned networks and randomly pruning) train to lower accuracy than the full network [HPT+15, FC19, LSZ+19]. 
However, if it were possible to train sparse networks from scratch to the same accuracy as the full network, then it would present a new opportunity to improve the efficiency of neural network training. We are therefore interested in understanding the properties of special classes of sparse networks that are indeed matching (e.g., winning lottery tickets produced by IMP). Studying extremely sparse matching subnetworks from IMP provides the best contrast with (1) the full, overparametrized neural networks and (2) other classes of sparse networks that are not matching at these sparsities.\\n\\nAlthough we are interested in understanding this behavior at all levels of sparsity, computational limitations force us to focus on a single level of sparsity. IMP entails training a network at least a dozen times to reach high levels of sparsity, and instability analysis requires training each of these networks on three different data orders for three kinds of sparsity. For rigor, we replicate each of these experiments three times with different initializations.\\n\\n> 2) there are unsupported strong claims which need to be clarified.\\n> In the abstract the paper claims that sparse subnetworks are matching subnetworks only when they are stable, but the results are shown in a limited setting only at a very high sparsity. \\n\\nAs noted in the top-level comment, we have revised our claims about sparse networks to focus specifically on IMP at the most extreme sparsities for which matching subnetworks are known to exist. As we argue, IMP subnetworks at these sparsities are particularly valuable for scientific study.\\n\\n> They tested stability on the highest sparsity level at which there was evidence that matching subnetworks existed, but how would the result generalize to other sparsity levels? With lower sparsity level (if weights are pruned less), is stability easier to achieve?\\n> it is not obvious it would be stable at all lower sparsity levels where IMP found matching subnetworks.\\n\\nIn short, we would not necessarily expect the results to generalize to lower sparsity levels. This is not a weakness, but just a matter of fact. As we explain in the top-level comment, the full networks are generally unstable at initialization but become stable later in training. However, they reach full accuracy regardless of whether they are stable. This means that stability and accuracy do not appear to be linked for the full network. 
We expect that particularly moderate sparsity levels will resemble the full network case, while higher sparsity levels will resemble the experiments in the paper.\\n\\n> I think the paper needs to show how the same relationship might generalize to different sparsity levels, or alternatively modify the claim (to what it actually shows)\\n> Some of these stronger claims can be modified to describe what the experiments actually show.\\n> The relationship found between stability and matching subnetworks in the high sparsity regime is a valuable insight that I believe should be conveyed correctly in this paper.\\n\\nAs we discuss in the top-level comment, we have narrowed the scope of our claim to only cover the connection between stability and matching subnetworks found by IMP in this highly sparse regime.\"}", "{\"title\": \"Author Response to Reviewer 4 (Part 2)\", \"comment\": \"> highlight the significance of the connection between matching subnetworks and stability in this highly sparse subnetwork regime.\\n\\nWe have substantially restructured and rewritten our paper to make the significance of this connection clear. We ask that you take a look at our revised draft.\\n\\n> Furthermore, the statement is contradicted in Footnote 7: \\u201cfor the sparsity levels we studied on VGG (low), the IMP subnetwork is stable but does not quite qualify as matching\\u201c\\n\\nIn the submitted version of the paper, we tried to use the same sparsity level for all variants of VGG (i.e., standard, warmup, and low) and likewise for all variants of Resnet. However, our chosen sparsity level for VGG (low) was too sparse for IMP to produce a matching subnetwork at any rewinding iteration. In the updated version of the paper, we have chosen a separate sparsity level for each hyperparameter configuration based on the sparsest level for which IMP finds a matching subnetwork under any rewinding iteration we consider. We illustrate this process in Appendix A of the updated paper. The VGG (low) results now align with the other experiments.\\n\\n> Nagarajan & Kolter\\u2019s observation about linear interpolation was on a completely different setup: using same duplicate network but training on disjoint subset of data, whereas in this paper it uses different subnetworks and trains it on full dataset with different data order. \\n\\nThat's correct. In the updated paper, we make sure this distinction is clear. We wanted to give Nagarajan and Kolter ample credit, since their experiment is the closest extant experiment to ours in the literature. We have emphasized this distinction in our revised paper, and we will implement any further feedback you have on clarifying this relationship to related work.\\n\\n> How was the sparsity level (30%) of Resnet-50 and Inception-v3 chosen in Table 1? (which was later used in Figure 5)\\n\\nThe sparsity level is actually 70% (that is, 30% of weights remaining). These were the sparsest IMP subnetworks of Resnet-50 and Inception-v3 for which IMP found matching subnetworks at any rewinding iteration under one-shot pruning. The new Appendix A clarifies how we chose our sparsity levels for every network in the paper.\\n\\n> In Figure 3 and 5, the y-axis \\u201cStability(%)\\u201d is unclear and not explained how this is computed. I first thought higher amount of stability(%) was good but it doesn't seem to be true.\\n\\nCalling the rise in error \\\"stability\\\" was a bad choice on our part. 
We fixed this and now call this rise in error \\\"instability\\\" and so lower instability is \\\"better\\\". Namely, when instability is 0, then the network is stable.\\n\\n> In some figures VGG-19 come first and then Resnet-20 while for others it was the other way around, which was confusing to read. (Also same for Resnet-50 and Inception-v3)\\n\\nThis order is now consistent in the updated draft of the paper.\\n\\n> There are same lines in multiple graphs, but the labeling is inconsistent, potentially confusing readers:\\n\\nLabeling is now consistent in the updated draft of the paper.\\n\\n[FC19] Frankle and Carbin. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. ICLR 2019.\\n[HPT+15] Han et al. Learning both Weights and Connections for Efficient Neural Networks. NeurIPS 2015.\\n[LSZ+19] Liu et al. Rethinking the Value of Network Pruning. ICLR 2019.\\n[NK19] Nagarajan and Kolter. Uniform convergence may be unable to explain generalization in deep learning. Arxiv.\"}", "{\"title\": \"Author Response to Reviewer 2\", \"comment\": \"\", \"note\": \"We have posted an updated version of the paper that has been substantially restructured and rewritten to address your concerns. We highly recommend looking over the new paper.\\n\\nWe have summarized these changes in a general response (posted as a top-level comment). We ask that you read our general response before returning to this point-by-point response. We address many of your concerns there.\\n\\n--------------------\\n\\n> It is unclear from the paper what are the immediate / straightforward applications\\n\\nSee the PRACTICAL IMPLICATIONS section of the top-level comment.\\n\\n> First, this paper lacks a structured literature review.\\n\\nWe have integrated a review of relevant literature into the body of the revised paper. For the final version of the paper, we are working on a structured literature review that we will insert after the introduction.\\n\\n> Does the full network have the property of mode connectivity (when trained using different data orders), or this only occurs under sparsity. \\n\\nYes! In Section 3 of the new version, we show that full networks indeed have the property of mode connectivity.\\n\\n> Provide some metrics on how \\u201cfar\\u201d are the two final weights upon which mode connectivity (or stability) is explored.\\n\\nIn A.3 of the submission and Appendices D and E of our revised paper, we include the L2 distances between networks when we perform instability analysis. We include this data for all of our experiments (full networks and all three kinds of sparse networks). In the final version, we will also include bases for comparison (e.g., the distance between the initial and final weights, the distance between two networks trained with different initializations) to contextualize these L2 distance values.\\n\\nBriefly, we observe that L2 distances between the sparse networks seem to be at two different levels. When the networks are unstable (as in the case of unstable IMP subnetworks, randomly reinitialized subnetworks, and randomly pruned networks), L2 distance is at the higher level; that is, the networks are further apart. As the IMP subnetworks transition to stability, L2 distance decreases, reaching a lower (non-zero) level when they become stable. 
We do not observe any relationship between the stability of the unpruned networks and the L2 distances between them.\n\n> The introduction mentions connectivity was previously observed using \u201cdisjoint subsets\u201d of data...I wonder if this is a typo.\n\nIt is not a typo. The only prior work that looks at linear mode connectivity starting from the same initialization is [NK19]. That paper trains two copies of an MLP from the same initialization on disjoint subsets of MNIST. In our paper, we study different data orders rather than disjoint samples from the same distribution. We mentioned this work because we wanted to give Nagarajan and Kolter ample credit since their experiment is the closest extant experiment to ours in the literature. We have emphasized this distinction in our revised paper.\n\n> Exploring if the findings still apply on disjoint data and/or varying amount of data, besides different data orders, is helpful.\n\nWe agree that there are a wide range of other behaviors of neural networks that we can explore with our instability analysis framework. We are particularly interested in studying instability when training with disjoint datasets (as you mention) and when varying batch size, learning rate, network width, optimizer, and learning rate schedule (e.g., cyclic learning rates [Smith17] and exponential learning rates [LA19]). Each of these investigations could be a paper in its own right and is beyond the scope of the current work.\n\n> The writing...can definitely use more work.\n\nWe have heavily revised the paper, and we believe the writing is substantially more polished. We are happy to accept further feedback that we will incorporate into the final version of the paper.\n\n[LA19] Li and Arora. An Exponential Learning Rate Schedule for Deep Learning. Arxiv.\n[NK19] Nagarajan and Kolter. Uniform convergence may be unable to explain generalization in deep learning. Arxiv.\n[Smith17] Leslie Smith. Cyclical Learning Rates for Training Neural Networks. WACV 2017.\"}", "{\"title\": \"Author Response to Reviewer 3\", \"comment\": \"\", \"note\": \"We have posted an updated version of the paper that has been substantially restructured and rewritten to address your concerns. We highly recommend looking over the new paper.\\n\\nWe have summarized these changes in a general response (posted as a top-level comment). We ask that you read our general response before returning to this point-by-point response. We address many of your concerns there.\\n\\n--------------------\\n\\n> The content is poorly presented for me to fully appreciate the importance and practical implications of this work\\n\\nWe apologize that our original presentation did not clearly articulate the importance and practical implications of our work. We have taken your feedback to heart, and we have substantially restructured and rewritten the paper to ensure that these aspects are clear. We have summarized our clarified framing in the top-level comment.\\n\\nWe specifically address practical implications in the PRACTICAL IMPLICATIONS section of the top-level comment and in the discussion sections of our revised paper.\\n\\n> I don't understand why the connection between mode connectivity and lottery ticket hypothesis is an important one to reveal.\\n\\nIn the top-level comment, we present our clarified framing for the paper designed to emphasize why both linear mode connectivity on full networks and its connection to the lottery ticket hypothesis are important. 
We discuss concrete practical implications of our observations in the PRACTICAL IMPLICATIONS section of the top-level comment.\\n\\n> I can not extract useful intuitions/messages from the demonstration here on why this happens.\\n\\nAt the moment, the phenomena we observe are entirely empirical. We do not yet have a theoretical model to describe this behavior, although we are exploring various connections (e.g., it is consistent with the so-called neural tangent kernel regime where very wide neural networks behave like linear models). However, we contend that our experiments are sufficiently rigorous to convincingly establish the existence of these phenomena. We believe that recording these phenomena rigorously is a significant contribution that will inspire theoretical work to understand and explain these behaviors.\\n\\n> Minor comments for improving the paper.\\n\\nThank you for these detailed comments. We have addressed them in the new version of the paper.\"}", "{\"title\": \"Author Response - Overall Comment (Part 1)\", \"comment\": \"\", \"note\": \"We have posted an updated version of the paper that has been substantially restructured and rewritten to address your concerns. We highly recommend looking over the new paper.\\n\\n--------------------\\n\\nWe thank the reviewers for their feedback.\\n\\nUpon reading reviews 2 and 3, we recognize that we failed to adequately communicate the significance of our results. Upon reading review 4, we recognize that we failed to clarify the scope of our claims and justify the importance of our chosen methodology.\\n\\nBased on your feedback, we have substantially restructured and rewritten our paper to address these concerns. We believe that our \\u201cstability analysis\\u201d framework (which as per R4 we now call \\u201cinstability analysis\\u201d) and our new observations about IMP subnetworks are significant contributions, and we hope to convince you that this is the case with our updated version.\\n\\nHere, we summarize our revised framing. We have responded to specific concerns of individual reviewers in separate replies to their comments.\\n\\nSUMMARY OF REVISED FRAMING\\n\\nIn our original submission, we framed our contribution as a surprising connection between two empirical phenomena of recent interest: mode connectivity and sparse neural networks in the context of the lottery ticket hypothesis.\\n\\nIn the revised version, we instead emphasize that our \\u201cinstability analysis\\u201d framework provides a new lens through which to study the behavior of neural networks by way of linear mode connectivity.\\n\\nWe demonstrate the value of this framework in two ways. First, we study the instability of full, unpruned networks. We now recognize that this data, which was previously buried in the appendices, is a significant contribution in its own right and an important part of our story. The central finding of this experiment is that all networks become stable early in training. That is, early in training, the outcome of optimization is determined modulo linear mode connectivity.\\n\\nWe then use instability analysis to better understand \\u201clottery ticket\\u201d networks found by IMP. Our core finding is that, at extreme sparsities, an IMP subnetwork is matching (i.e., it can train in isolation to full accuracy) only when it is stable. This insight provides the first basis for understanding the mixed results on IMP in the literature. 
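For concreteness, the instability analysis procedure can be sketched in a few lines (a minimal sketch with hypothetical helper names; `train` and `error` stand in for our actual training and evaluation code and are not part of the paper):

```python
import numpy as np

def instability(w, train, error, n_points=30):
    # Train two copies of the same network (weights w as a flat numpy
    # array) under different SGD noise, e.g., different data orders.
    w1, w2 = train(w, seed=0), train(w, seed=1)
    # Evaluate the error along the linear path between the two results.
    path = [error((1 - a) * w1 + a * w2)
            for a in np.linspace(0.0, 1.0, n_points)]
    # Instability is the worst-case rise in error along the path
    # relative to the error at the endpoints; 0 means "stable".
    return max(path) - (path[0] + path[-1]) / 2.0
```

When this quantity is approximately zero, the two runs land in the same linearly connected mode despite the differing SGD noise.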
In addition, we modify IMP to \u201crewind\u201d subnetworks to their values at an iteration k > 0 rather than to initialization. For values of k that are early in training, IMP subnetworks become stable and matching in all cases that we consider, including large-scale settings where IMP fails to do so at initialization. In response to R4\u2019s suggestions, we have modified the scope of our claims to focus exclusively on IMP subnetworks at the highest sparsity for which IMP at any rewinding iteration produces a matching subnetwork. \n\nCONTRIBUTIONS AND IMPLICATIONS\n\nOur revisions aim to clarify that our work makes significant contributions to both (1) our understanding of SGD on neural networks and (2) our understanding of sparse IMP subnetworks and the lottery ticket hypothesis. Please see the updated \u201cContributions\u201d paragraph in our new introduction.\"}", "{\"title\": \"Author Response - Overall Comment (Part 2)\", \"comment\": \"PRACTICAL IMPLICATIONS\n\nOur paper is scientific in nature, with the goal of better understanding the relationship between SGD noise and the outcome of neural network optimization for both dense neural networks and sparse subnetworks found by IMP. Although our focus is not on immediate or straightforward applications, there are several ways that our results might lead to applications:\n\n* Others have already adopted our modified version of IMP with rewinding to build practical techniques. For example, the networks generated by rewinding transfer between datasets, making it possible to train sparser networks from the start [MYP+19]. In addition, replacing fine-tuning with rewinding when pruning a neural network makes it possible to maintain full accuracy at more extreme sparsities [Anon19b]. IMP with rewinding has also been adopted to study the lottery ticket hypothesis [Anon19c, Anon19d, Anon19e, Anon19f, Anon19g]. A minimal sketch of the rewinding procedure follows this list.\n\n* In larger-scale settings, we find that IMP subnetworks at extreme sparsities only become stable and matching after the full network has been trained for a small amount of time. Recent methods have explored pruning neural networks at initialization [LAT19, Anon19a], but our results suggest that the best time to prune may be slightly later in training. By that same token, most modern pruning methods only begin to sparsify networks late in training or after training [HPT+15, GEH19]. In these cases, our work suggests that there is potentially a substantial unexploited opportunity to prune neural networks much earlier in training.\n\n* Our observations on full networks implicitly divide training into two phases: an initial, unstable phase in which the final \u201clinearly connected\u201d mode is undetermined on account of SGD noise, and a subsequent, stable phase in which the final linearly connected mode becomes determined. One possible way to exploit this observation could be to explore changing aspects of the optimization process (e.g., learning rate schedule or optimizer) once the network enters the stable phase in order to improve the performance of training. Other techniques already follow this template; for example, Goyal et al. find that warmup is necessary early in training when using large batch sizes and high learning rates [GDG+17]. Instability analysis makes it possible to evaluate the consequences of these interventions. 
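As referenced in the first bullet above, here is a minimal sketch of IMP with rewinding (hypothetical helper names; `train` returns the weights after the given number of iterations, `prune` zeroes out the lowest-magnitude surviving weights, and T is the total training length; this is an illustration, not our exact implementation):

```python
import numpy as np

def imp_with_rewinding(w0, train, prune, k, T, rounds, frac=0.2):
    mask = np.ones_like(w0)           # start from the full network
    w_k = train(w0, mask, iters=k)    # weights at rewinding iteration k
    for _ in range(rounds):
        w_T = train(w_k, mask, iters=T - k)  # train to completion
        mask = prune(w_T, mask, frac)        # drop lowest-magnitude weights
        # Each round restarts from w_k, so surviving weights are rewound
        # to their values at iteration k; k = 0 recovers the original
        # lottery ticket procedure.
    return mask * w_k, mask
```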
\n\nSUMMARY OF TECHNICAL CHANGES\n\n* We have renamed \u201cstability\u201d to \u201cinstability\u201d so that a network is \u201cstable\u201d when \u201cinstability\u201d is 0.\n\n* We have moved results on full networks from the appendices into the main body of the paper as Section 3.\n\n* In our analysis of full networks, we have examined instability with respect to train error in addition to test error.\n\n* We have updated our implementations of Resnet-20 and VGG-16 to reach higher, state-of-the-art accuracy. At this higher accuracy, the IMP subnetworks of these networks now become stable and matching slightly later in training than before, but they still do so 1-2% into training.\n\n* We compare the instability of IMP subnetworks to that of randomly reinitialized IMP subnetworks in addition to randomly pruned subnetworks.\n\n[Anon19a] Anonymous. Picking Winning Tickets Before Training by Preserving Gradient Flow. In submission to ICLR 2020.\n[Anon19b] Anonymous. Comparing Fine-Tuning and Rewinding in Neural Network Pruning. In submission to ICLR 2020.\n[Anon19c] Anonymous. Playing the Lottery with Rewards and Multiple Languages. In submission to ICLR 2020.\n[Anon19d] Anonymous. Finding Winning Tickets with Limited (or No) Supervision. In submission to ICLR 2020.\n[Anon19e] Anonymous. Winning the Lottery with Continuous Sparsification. In submission to ICLR 2020.\n[Anon19f] Anonymous. The Sooner the Better: Investigating the Structure of Early Winning Lottery Tickets. In submission to ICLR 2020.\n[Anon19g] Anonymous. The Early Phase of Neural Network Training. In submission to ICLR 2020.\n[GDG+17] Goyal et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. CVPR 2018.\n[GEH19] Gale et al. The State of Sparsity in Deep Neural Networks. Arxiv.\n[HPT+15] Han et al. Learning both Weights and Connections for Efficient Neural Networks. NeurIPS 2015.\n[LAT19] Lee et al. SNIP: Single-Shot Network Pruning Based on Connection Sensitivity. ICLR 2019.\n[MYP+19] Morcos et al. One Ticket to Win them All: Generalizing Lottery Ticket Initializations Across Datasets and Optimizers. NeurIPS 2019.\"}" ] }

{"experience_assessment": "I have read many papers in this area.", "rating": "3: Weak Reject", "review_assessment": "_checking_correctness_of_derivations_and_theory: N/A", "title": "Official Blind Review #4", "review": "This paper empirically examines an interesting relationship between mode connectivity and matching sparse subnetworks (lottery ticket hypothesis). \n\nBy mode connectivity, the paper refers to a specific instance where the final trained SGD solutions are connected by a linear interpolation path without loss in test accuracy. When networks trained with SGD reliably find solutions which can be linearly interpolated without loss in test accuracy despite different data ordering, the paper refers to these networks as \u2018stable.\u2019 \n\nMatching sparse subnetworks refer to subnetworks within a full dense network that match the test accuracy of the full network when trained in isolation. \n\nThe paper introduces a novel improvement on the existing iterative magnitude pruning (IMP) technique that is able to find matching subnetworks even after initialization by rewinding the weights. This allowed the authors to find matching subnetworks for deeper networks and in cases where it could not be done without some intervention in the learning schedule. 
\n\nThe paper then finds that the subnetworks become matching subnetworks only when they become stable.\n\u2014\u2014\u2014\n\nAlthough finding a connection between two seemingly distinct phenomena is novel and interesting, I would recommend a weak reject for the following two reasons: \n1) The scope of the experiment is limited to a quite specific setting, \n2) there are unsupported strong claims which need to be clarified.\n\u2014\u2014\u2014\n\n1)\nIn the abstract the paper claims that sparse subnetworks are matching subnetworks only when they are stable, but the results are shown in a limited setting only at a very high sparsity. \nThey tested stability on the highest sparsity level at which there was evidence that matching subnetworks existed, but how would the result generalize to other sparsity levels?\nWith lower sparsity level (if weights are pruned less), is stability easier to achieve? \n\nThe paper also focused on cases where matching subnetworks were found by IMP, but matching subnetworks can also be found by other pruning methods. \nAs acknowledged in the limitations section, other relationships may exist between stability and matching subnetworks found by other pruning methods, or in different sparsity levels,\nwhich could be quite different from this paper\u2019s claim.\n\nIn order to address this concern, I think the paper needs to show how the same relationship might generalize to different sparsity levels, \nor alternatively modify the claim (to what it actually shows) and highlight the significance of the connection between matching subnetworks and stability in this highly sparse subnetwork regime.\n\n2) \nAs addressed above, in the Abstract and Introduction, the paper\u2019s claims are very general about mode connectivity and sparsity, claiming in the sparse regime, \u201ca subnetwork is matching if and only if it is stable.\u201d However, the experiments only show it is true in a limited setting, focusing on specific pruning method and at a specific sparsity level.\nFurthermore, the statement is contradicted in Footnote 7: \u201cfor the sparsity levels we studied on VGG (low), the IMP subnetwork is stable but does not quite qualify as matching\u201c\n\nThere are also a few other areas where there are unsupported claims.\n\n\u201cNamely, whenever IMP finds a matching subnetwork, test error does not increase when linearly interpolating between duplicates, meaning the subnetwork is stable.\u201d \n-> Stability was tested only at one specific sparsity level, and it is not obvious it would be stable at all lower sparsity levels where IMP found matching subnetworks.\n\n\u201cThis result extends Nagarajan & Kolter\u2019s observation about linear interpolation beyond MNIST to matching subnetworks found by IMP at initialization on our CIFAR10 networks\u201d \n-> Nagarajan & Kolter\u2019s observation about linear interpolation was on a completely different setup: using same duplicate network but training on disjoint subset of data, whereas in this paper it uses different subnetworks and trains it on full dataset with different data order. \n\nRelated to the first issue, I think some of these stronger claims can be modified to describe what the experiments actually show. 
\nThe relationship found between stability and matching subnetworks in the high sparsity regime is a valuable insight that I believe should be conveyed correctly in this paper.\n\n\u2014\u2014\u2014\n\nI also have some minor clarification questions and suggestions for improvement. \n\nHow was the sparsity level (30%) of Resnet-50 and Inception-v3 chosen in Table 1? (which was later used in Figure 5)\n\n\u2014 In Figure 3 and 5, the y-axis \u201cStability(%)\u201d is unclear and not explained how this is computed. I first thought higher amount of stability(%) was good but it doesn't seem to be true.\n\n\u2014 The ordering of methods for plots could be more consistent. In some figures VGG-19 come first and then Resnet-20 while for others it was the other way around, which was confusing to read. (Also same for Resnet-50 and Inception-v3)\n\n\u2014 There are same lines in multiple graphs, but the labeling is inconsistent, potentially confusing readers:", "figure_1": "(Original Init, Standard) is the same as Figure 4: (Reset),", "and_figure_1": "(Random Reinit, Standard) is the same as Figure 4: (Reset, Random Reinit)"}", "{"experience_assessment": "I have read many papers in this area.", "rating": "6: Weak Accept", "review_assessment": "_checking_correctness_of_derivations_and_theory: N/A", "title": "Official Blind Review #2", "review": "This paper empirically presents a very interesting connection between two also very interesting phenomena (mode connectivity and lottery ticket hypothesis), while removing a previous limitation of the lottery ticket hypothesis on larger networks. Through a good amount of experiments, the authors empirically showed these two phenomena co-occur together (i.e. matching networks are stable) and have positive correlation (i.e. the more \u201cmatching\u201d the network the more \u201cstable\u201d), under different network architectures and datasets.\n\nThough it is unclear from the paper what are the immediate / straightforward applications, the findings do present interesting contributions. Several things I found that can further improve this paper.\n\nFirst, this paper lacks a structured literature review. It is suggested that the findings may provide insights into our understanding of how SGD works in neural networks. Laying some proper background knowledge on this area is needed in the literature review. \n\nThere are several experiments that I'm curious to see. Though I must say the existing amount of experiments sufficiently validates the existence of the connection the authors put forth and hence they are not required.\n\na) Provide some metrics on how \u201cfar\u201d are the two final weights upon which mode connectivity (or stability) is explored. For the sake of comparison, distance between the initial weights and final weights can be added. \n\nb) First off, the introduction mentions connectivity was previously observed using \u201cdisjoint subsets\u201d of data, whereas later in the paper only different orders of the same data are explored. I wonder if this is a typo. Regardless, exploring if the findings still apply on disjoint data and/or varying amount of data, besides different data orders, is helpful. \n\nc) Does the full network have the property of mode connectivity (when trained using different data orders), or this only occurs under sparsity.\n\nLastly, the writing of the paper doesn't interfere with understanding, but can definitely use more work. Abstract can be tightened. 
Several typos throughout the paper:\n- in the abstract, \"with the no change\" -> remove \"the\"\n- bottom of page 1, subt -> sub\n- second bullet point under \"contributions\": remove \":\"?\n- page 3, paragraph starting with \"stability\": \"the increase in worst-case increase\" -> \"the worst-case increase\"?"}", "{"rating": "3: Weak Reject", "experience_assessment": "I have read many papers in this area.", "review_assessment": "_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.", "title": "Official Blind Review #3", "review": "This paper works on empirically demonstrating the connection between mode connectivity and the lottery ticket hypothesis, which are individually explored in the literature. Here mode connectivity refers to the fact that SGD produces different solutions (from the randomness, such as data ordering) that are connected through model parameter transition paths of approximately equal loss/accuracy. The lottery ticket hypothesis states that there exist sparse subnetworks of the corresponding full dense network which can attain as strong loss / accuracy as the full dense network.\n\nAs the primary contribution, the authors demonstrated that the following two observations often emerge together: 1) A sparse subnetwork can match the performance of the corresponding full dense network; 2) Running SGD on this sparse subnetwork produces solutions which are connected via a linear model parameter transition path of similar loss/accuracy; this is observed across both small tasks (using CIFAR10) and ImageNet-level tasks. Another contribution I can see besides the primary one is that the lottery ticket hypothesis still holds on large tasks, which is against the conventional wisdom demonstrated in recent papers (e.g. Gale et al., 2019); the authors show that one needs to rewind to the weights reached after a short period of training instead of rewinding to the initialized weights in Iterative Magnitude Pruning to produce the \"lottery ticket\" in large tasks (such as CNN for ImageNet). \n\nI think the primary contribution on the connection between mode connectivity and lottery ticket hypothesis is an interesting observation, but the content is poorly presented for me to fully appreciate the importance and practical implications of this work. Thus I give weak reject. The major concerns and questions are the following:\n\n1. From the paper, I don't understand why the connection between mode connectivity and lottery ticket hypothesis is an important one to reveal. Is it important because it implies some practical approaches / heuristics to figure out performant sparse subnetworks? Is it intrinsically interesting because it validates some hypothesis in the training dynamics of SGD? These are not clear to me.\n\n2. I think the current presentation of the content is only limited to the empirical demonstration. And I can not extract useful intuitions/messages from the demonstration here on why this happens. These messages should provide intuitions on why this connection exists. E.g., these messages can be extracted from SGD on simple (toy) non-convex models with multiple local minimum regions.", "minor_comments_for_improving_the_paper": "1. At the end of line 1 in algorithm one, it is not clear what 1^|W0| means.\n\n2. The terms in the figure legends need to be properly defined to enable clear reading. 
Currently, words such as \"reset\" are not mentioned in the text but appear in the legend of figure 4, etc.\"}" ] }
HyePberFvH
Monte Carlo Deep Neural Network Arithmetic
[ "Julian Faraone", "Philip Leong" ]
Quantization is a crucial technique for achieving low-power, low latency and high throughput hardware implementations of Deep Neural Networks. Quantized floating point representations have received recent interest due to their hardware efficiency benefits and ability to represent a higher dynamic range than fixed point representations, leading to improvements in accuracy. We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss. The method makes no assumptions regarding the underlying parameter distributions. We evaluate our method on pre-trained image classification models on the CIFAR10 and ImageNet datasets. For the same network topology and dataset, we demonstrate the ability to gain the equivalent of bits of precision by simply choosing weight parameter sets which demonstrate a lower loss of significance from the Monte Carlo trials. Additionally, we can apply MCA to compare the sensitivity of different network topologies to quantization effects.
[ "deep learning", "quantization", "floating point", "monte carlo methods" ]
Reject
https://openreview.net/pdf?id=HyePberFvH
https://openreview.net/forum?id=HyePberFvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "gDoYF23FX", "SygYoBI2oS", "B1xsDNhPjS", "rJeM1WhwiS", "S1gD9l3wiS", "rkgOn7jaYS", "S1xgzttTFH", "rJxUTXU9tS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741519, 1573836192959, 1573532771357, 1573531866349, 1573531791153, 1571824559771, 1571817735986, 1571607486256 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2140/Authors" ], [ "ICLR.cc/2020/Conference/Paper2140/Authors" ], [ "ICLR.cc/2020/Conference/Paper2140/Authors" ], [ "ICLR.cc/2020/Conference/Paper2140/Authors" ], [ "ICLR.cc/2020/Conference/Paper2140/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2140/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2140/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper studies the impact of rounding errors on deep neural networks. The\\nauthors apply Monte Carlos arithmetics to standard DNN operations. \\nTheir results indeed show catastrophic cancellation in DNNs and that the resulting loss of \\nsignificance in the number representation correlates with decrease in validation \\nperformance, indicating that DNN performances are sensitive to rounding errors. \\n \\nAlthough recognizing that the paper addresses an important problem (quantized / \\nfinite precision neural networks), the reviewers point out the contribution of \\nthe paper is somewhat incremental. \\nDuring the rebuttal, the authors made an effort to improve the manuscript based \\non reviewer suggestions, however review scores were not increased. \\n \\nThe paper is slightly below acceptance threshold, based on reviews and my own \\nreading, as the method is mostly restricted to diagnostics and cannot yet be used \\nto help training low-precision neural networks.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary Of Changes Made\", \"comment\": \"We thank all the reviewers for the constructive comments and valuable suggestions. We have uploaded a revised version of our paper following the suggestions. In the revised paper, we have highlighted the main changes in blue/aqua. The changes for each section can be summarized as follows:\\n\\nIn Section 1, we updated Figure 1 to show a more insightful operator comparison. We now directly compare fixed and floating point of the same precision width of 8bits. This more clearly shows that a drop of a single bit of precision in floating point, from 8->7, can have significant hardware benefits and shows improvement over the 8-bit fixed point operator. We also make a clear distinguishment between our work 'MCDA' and previous work 'MCA'. \\n\\nIn section 2, we further discuss the literature and contrast our work to others on Bayesian NNs.\\n\\nIn section 3, we fix some of the notation and wording for correctness and add an equation to more clearly describe how we calculated the loss of significance value, K.\\n\\nIn section 4, we fix some of the notation and the wording for the issues of applying traditional MCA to DNNs. We also remove an equation which we deemed unnecessary. \\n\\nIn section 5, we provide much greater detail on how we ran MCDA experiments, including a reference to the MCALIB repository which we used to produce regression plots for calculating K and t_min. We also fix the scale and labelling of the axes in Figure 3 and we order the networks in Figure 5 based on their respective K values. 
This makes the figures clearer and more coherent.\n\nWe also added an appendix which describes how MCALIB calculates K and t_min specifically. Additionally, we add more detail on the quantization function used to produce our quantized floating point representations.\"}", "{\"title\": \"Response To Reviewer 3\", \"comment\": \"Thank you for your review. We have provided an updated version of the paper with the main changes highlighted to address all reviewers' comments.\", \"contribution\": \"Deeper theory and making connections with Bayesian networks are indeed interesting lines of research, but we believe they are beyond the scope of this paper. We hope that other researchers can take our work in these and other directions. \n\nAlthough related, our work is complementary to work using variational Bayes or Monte Carlo methods to estimate a posterior. We have updated Section 2 with a brief review of key papers and a statement regarding the new aspects of the present work.\", \"originality\": \"We agree that our references to \u201cMCA\u201d are misleading to the reader in regards to where the novelty of the paper lies. We have altered Paragraph 4 of the introduction. This allows us to distinguish between our method \u201cMCDA\u201d and the known method \u201cMCA\u201d, which we have now updated throughout the paper.\", \"writing_quality\": \"We thank you for your suggestions on what to improve; they have made the updated paper clearer and more concise.\", \"technical_quality\": \"We have added more detail to the experiments (Section 5) on how the experiments and analysis were performed.\", \"in_answer_to_your_particular_questions\": \"-Each Monte Carlo trial is done on the same batch of images in the training set. This has now been stated explicitly in Section 5.\n-In sections 5.1 and 5.2, linear regression analysis is used just as in 5.3. Different t are combined for all our experiments by averaging all the K values when t>t_min. Calculating t_min and K was done using the MCALIB library [1]. We have now added equation (7) and a more detailed description in Section 3.3 and Section 5 to make this more clear. \n\nPreviously we had only cited the MCALIB paper; we have now also cited the paper and github repository in Section 5. In the appendix we have also added the linear regression curves for all models used in Sections 5.1 and 5.2 and we have explained explicitly in detail how MCALIB calculates K and t_min. This is all in the updated version of the paper. \n-We have now explicitly stated which of train and validation accuracy we are referring to at each instance.\n-We have now added an explanation of the quantization with stochastic rounding in Appendix A.3 and we reference this in Section 5.\n-In 5.2, the model chosen as the baseline was whichever achieved the highest single-precision (unquantized) validation accuracy. We have stated this more clearly in the title of Table 1 and the text in 5.2.\", \"specific_suggestions_for_improvement\": \"-The citation format has now been fixed using \\citep\n\n-We have replaced the Fixed(2,8) results with Fixed(8,8) results in Figure 1 as we believe this helps provide clarity for our paper's motivation. We have also modified the comments on this in paragraph 3 of the Introduction. \n\n-We have fixed the grammatical errors in Section 2 and in other parts of the paper.\n\n-We have now added more papers to the related work section and have modified some of our descriptions of previous literature to be more specific. 
Namely, we have cited 5 more papers in relation to Monte Carlo methods for Bayesian Neural Networks, 3 of them relating specifically to quantization/compression methods. As mentioned, we have updated Section 2 with a brief review of key papers and a statement regarding the new aspects of the present work.\n\n-Section 3:\n\u2014We have now clarified this.\n\u2014We have now fixed this\n\u2014We have now fixed this\n\u2014Yes, it should be \ud835\udeff; we have now fixed this\n\u2014we have removed the wording \u201cfrom numerical analysis literature\u201d and have reworded the sentence.\nThe significand in equation (4) is now represented as (1+m) for clarity and consistency with equation (1). When subbing equation (1) into (4), it must be remembered that the sign bit (-1)^s is not relevant to the random variable which we have defined as U~(-0.5,0.5) and hence it is discarded. Once this is removed, the substitution does yield the same result.\n\u2014we have removed this sentence \n\u2014In IEEE floating point arithmetic, operators are implemented such that the error must be bounded by \ud835\udeff. The inequality needed to be flipped; this has now been done.\n\u2014We have now fixed this.\n\u2014This is correct; we have now fixed this.\n\u2014We have now fixed this.\n\nSection 4.1\n\u2014We have now fixed this notation error\n\u2014We have fixed equation 8 by using |X| to represent the size. We also changed the \u201cL\u201d to \u201closs\u201d in the right hand part of the equation and in the text so that the total network loss and loss function are distinguishable.\n\u2014This grammatical error has now been fixed.\n\nOur explanation in the second bullet point of 4.1 was incorrect. As the reviewer points out, the reason for MCA not working well is that the output is discrete. This means that, for high values of t, Monte Carlo results across different t become indistinguishable. This has now been corrected.\n\nAll relevant figures and grammatical errors have been updated to address reviewers' concerns.\"}", "{\"title\": \"Response To Reviewer 2\", \"comment\": \"Thank you for your review. We have provided an updated version of the paper with the main changes highlighted to address all reviewers' comments.\", \"weaknesses\": \"- The straightforward application of MCA to NN inference on conventional GPU/CPU machines precludes most optimizations used in GEMM, and would result in a significant performance loss (explained in Section 4.1 and 4.2). Thus a key idea of this paper is to provide a technique which demonstrates how to overcome these issues for applying MCA to NN (which we call MCDA). In doing so, we gain useful insights into the sensitivity of the NN to rounding, and demonstrate that we can rank different networks, and choose good ones among those with the same loss score.\n\n-We hope our work is a precursor to further research in areas such as end-to-end low precision NN training, informative priors for Bayesian NN, training methods for weight-insensitive inference, etc. We have added additional papers on Bayesian Neural Networks to Section 2.\", \"other_comments\": \"-As pointed out by reviewer 3, a better explanation of the issue in the second bullet point of 4.1 is that the output was a discrete value. To overcome this, we use the loss function as the output instead, which is a continuous value. Thus for high values of t (small perturbations), it will still present an observable change. 
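As a minimal sketch of this measurement (placeholder names; `mca_loss` denotes one inference pass on a batch under MCA-perturbed arithmetic at virtual precision t, not our actual implementation):

```python
import numpy as np

def rsd_of_loss(mca_loss, batch, t, n_trials=1000):
    # Repeat inference on the same batch; only the random rounding
    # perturbations differ between trials.
    losses = np.array([mca_loss(batch, t, seed=i) for i in range(n_trials)])
    return losses.std() / abs(losses.mean())  # relative standard deviation
```

These RSD values are then summarised per precision; we assume here, for illustration only, the usual MCA shape K_t = t + log2(RSD_t), averaged over t > t_min (the exact form we use is equation (7) in the paper).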
We have explicitly stated this by modifying the second bullet point in 4.1 and the first paragraph of 4.2 in the updated version.\n\n-The statistical methods used in MCALIB (the MCA library used for our regression analysis) assume that results are normally distributed to provide the summary statistics from which K and t_min are computed. Within MCALIB, a normality test is applied, and failing indicates that changes in rounding lead to unusual changes in the (scalar) output and t_min is strictly larger than the precision for which this occurs. No assumptions are made regarding the distribution of the inputs, the computational graph, or the loss metric.\n\n-Different t does give different K. We now use K_t to describe K for each different t in equation (6) to make this more clear. The K for a given network is reported throughout our experiment section as the average of K values for t > t_min. We have now added equation (7) to make this more clear. \n\nFor each model in Figure 3, using MCDA, we ran 1000 trials for all t in {1,2,3\u2026,16} and calculated the relative standard deviation (RSD). We have now explicitly stated this detail in the experiments (Beginning of Section 5). We have added to the Appendix all the linear regression plots for all the models in Figure 3 which demonstrate this analytically. Additionally, we have added further explanations and equations to the Appendix regarding how MCALIB calculates t_min and K in the regression analysis.\n\n[1] Michael Frechtling and Philip H. W. Leong. MCALIB: Measuring sensitivity to rounding error with Monte Carlo programming. ACM Trans. Program. Lang. Syst., 37(2):5:1\u20135:25, April 2015. ISSN 0164-0925. doi:10.1145/2665073. URL http://doi.acm.org/10.1145/2665073\"}", "{\"title\": \"Response To Reviewer 1\", \"comment\": \"Thank you for your review. We have provided an updated version of the paper with the main changes highlighted to address all reviewers' comments.\n\nThe idea of combining MCA with Bayesian Neural Networks is indeed interesting, but beyond the scope of this paper. We hope that this paper will provide an initial direction for further research in MCA for NNs which enables deeper understanding of quantization in inference and training. In response to your comments and those of the other reviewers, we have cited an additional 5 papers concerning Bayesian Neural Networks in Section 2.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The premise of this paper is that quantization plays an important role in the deployment of deep neural networks; i.e., in the inference stage. However, errors due to quantization affect different neural architectures differently. It would be useful if we could predict ahead of time which models are more amenable to quantization. I think this is a very interesting premise and the paper is very well motivated.\n\nThe paper is also very clear and well written, making the claims precise and backing these up with experiments.\n\nAt the heart of the paper is the replacement of floating point numbers with inexact values, which are treated as random variables and defined precisely in equation 4. 
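For readers unfamiliar with Monte Carlo Arithmetic, this amounts to a perturbation of roughly the following form (my own sketch; I assume the scaling used in the paper, where a uniform variable on (-0.5, 0.5) keeps the relative error within 2^-t at virtual precision t):

```python
import random

def inexact(x, t):
    u = random.random() - 0.5               # U ~ (-0.5, 0.5)
    return x * (1.0 + u * 2.0 ** (1 - t))   # |relative error| <= 2^-t
```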
This definition enables the authors to apply Monte Carlo methods to obtain network predictions as shown in equation (10) and figure 2, and subsequently carry out sensitivity analysis. The experiments show that a measure of sensitivity (K) is indeed a good augmentation to cross-validation for model selection for the purpose of trading off accuracy and resource consumption when launching deep neural networks with floating point rounding errors.\n\nOne question I have for the authors is the following: There has been a large body of literature on Monte Carlo methods for Bayesian neural networks. Could those works have something to say in addressing some of the challenges posed in Section 4.1?\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a scalable method based on Monte Carlo arithmetic for quantifying the sensitivity of trained neural networks to floating point rounding errors. They demonstrate that the loss of significance metric K estimated from the process can be used for selecting networks that are more robust to quantization, and compare popular architectures (AlexNet, ResNet, etc.) for their varying sensitivities.\", \"strengths\": [\"The paper tackles an important problem of analyzing sensitivity of networks to quantization and offers a well-correlated metric that can be computed without actually running models in quantized mode\", \"Experiments cover a wide range of architectures in image recognition\"], \"weaknesses\": [\"The proposed method in Section 4.2 appears to be a straightforward modification to MCA for NN\", \"Experiments only demonstrate model selection and evaluating trained networks. Can this metric be used in optimization? For example, can you optimize for lowering K (say with fixed t) during training, so you can find a well-performing weight that also is robust to quantization? 1000 random samples interleaved in training may be slow, but perhaps you can use coarse approximation. This could significantly improve the impact of the paper. Some of the Bayesian NN literature may be relevant (dropout, SGLD, etc).\"], \"other_comments\": [\"How is the second bullet point in Section 4.1 addressed in the proposed method?\", \"Can you make this metric task-agnostic or input-distribution-agnostic (e.g. just based on variance in predictions over some input datasets)? (e.g. you may pick a different loss function or different test distribution to evaluate afterwards)\", \"Does different t give different K? If so, what\u2019s the K reported? (are those points on Figure 3)?\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\n\nThe paper studies the sensitivity of a neural network with respect to quantizing its weights and activations. The idea is to use Monte Carlo Arithmetic (MCA) in order to calculate the number of significant bits in the training loss (e.g. cross entropy) that are lost due to floating-point arithmetic. 
The results show that the number of significant bits lost correlates with the reduction in classification accuracy when quantizing the weights and activations of the neural network.\", \"decision\": \"Overall, this is an interesting paper with interesting results. However, I think there is considerable room for improvement, and that more details are needed in order to assess the significance of the results, as I detail in the rest of my review. For these reasons, I recommend weak reject for now, but I encourage the authors to continue working on improving the paper and to provide more details in the updated version.\", \"contribution\": \"The paper considers an important problem, that of quantizing the weights and activations of a neural network in order to reduce computational and memory cost, while maintaining the machine-learning performance as high as possible. \\n\\nIn my opinion, the main contribution of the paper is the experimental findings, and in particular that the sensitivity of the training loss with respect to the precision of the weights and activations correlates with the accuracy of the network. It seems to me that these results may relate to work on Bayesian neural networks, sharp vs flat minima, and minimum-description length approaches to variational inference. Work on these areas has also shown that sensitivity of the training loss with respect to the precision of the weights (which intuitively happens when the network is at a \\\"sharp\\\" local minimum vs a \\\"flat\\\" one) is related to poor generalization performance, and vice versa. I would encourage the authors to explore the potential relationship of their work with these areas, and possibly discuss them in an updated version of the paper.\", \"originality\": \"The paper describes a method for assessing the sensitivity of a neural network with respect to the precision of the weights and activations. The method is a straightforward application of Monte Carlo Arithmetic (MCA) to neural networks. I believe that the application of MCA to neural networks for this particular purpose is novel, and that the results are original. However, the introduction of the paper gives the impression that the proposed method is brand new, and even uses the acronym MCA to refer to the proposed method, which can be confusing to readers. I would suggest to the authors to rewrite the introduction so as to reflect more accurately that the contribution is not a brand-new method, but rather the application of an existing method (MCA) in a novel way.\", \"writing_quality\": \"The paper is generally easy to read, but there is considerable room for improvement. There are mistakes, and often the writing is sloppy and imprecise. I give some more specific suggestions on what to improve later on.\", \"technical_quality\": [\"The method is well motivated and the experiments seem reasonable. However, there is very little detail on the experiments, which makes it hard to assess their correctness/significance. I would suggest to the authors to rewrite the experimental section with full detail, or put more details in an appendix. In particular:\", \"Is each Monte Carlo trial done on the same batch of training images or a different one? If different, how are the trials averaged, and does that mean that the standard deviation over trials also includes a contribution due to different batches?\", \"In sections 5.1 and 5.2, how were the results for different t combined/aggregated? 
Did you use linear-regression analysis as in section 5.3?\", \"When you say \\\"accuracy\\\", do you mean accuracy on the training set, validation set, or test set? This is particularly important for assessing the significance of the results, and is something that is currently missing from the description of the experiments.\", \"How was the quantization of the neural networks performed? It would be good to explain this at least on a high level, in addition to citing Wang et al., (2018).\", \"In section 5.2, how was the model selection for each method performed exactly? In the baseline method, was the model to be quantized selected based on validation performance before quantization or after quantization?\"], \"specific_suggestions_for_improvement\": \"The citation format, i.e. (Smith et al., (2019)), is unusual and uses unnecessarily many parentheses. Use \\\\citep for (Smith et al., 2019), and \\\\citet for Smith et al. (2019).\\n\\nThe illustration of fig. 1 is not fully convincing as a motivation for floating-point arithmetic. Even though it makes the case that Float(7, 7) is more efficient than Float(8, 8) and Float (9, 9), the comparison between Float(7, 7) and Fixed(12, 12) is hard to interpret, as we can't conclude whether the efficiency gain is due to reducing the number of bits or to switching from fixed-point to floating-point arithmetic. A more convincing illustration would compare fixed-point with floating-point arithmetic using the same number of bits.\\n\\nIt would be better if fig. 1 were 2D, as 3D doesn't add anything but makes it harder to compare sizes visually.\\n\\nPlease avoid exaggerations, such as \\\"exquisitely sensitive\\\" or \\\"extremely sensitive\\\", when \\\"sensitive\\\" would suffice.\", \"section_2_is_grammatically_sloppy\": [\"arihtmetic --> arithmetic\", \"Last line of page 2 seems to be missing a verb.\", \"this has lead --> this has led\", \"The related-work section is too short and in many cases it doesn't explain what previous work has actually done. For example, \\\"rounding of inexact values to their nearest FP approximation has been studied in several publications\\\" is vague: what exactly these publication have done? This lack of detail makes it hard to assess the originality of the current paper, and how it differs from existing work.\"], \"section_3_is_often_unclear_with_imprecise_mathematical_notation\": [\"\\\"e is the base-2 exponent of x in binary floating point arithmetic\\\": surely, the exponent is represented as an integer?\", \"(bs, be1, be2, ..., bex, bm1, bm2, ..., bmx) is sloppy, as it indicates that the indices run from 1 to x.\", \"Bx = sx + ex + mx is also sloppy; what is meant here is the number of bits to represent sx, ex, mx and not the values themselves.\", \"F(x) = x(1 + \\u03b8), shouldn't \\u03b8 be \\u03b4?\", \"\\\"which is typically the cause of horrific numerical inaccuracy from numerical analysis literature\\\", the phrase \\\"from numerical analysis literature\\\" doesn't make much sense here.\", \"In eq. (4), substituting the expression for x from eq. (1) doesn't yield the same result.\", \"\\\"The number of trials is an important consideration because [...] it can produce adverse effects on results\\\". What is meant by \\\"adverse effects\\\"? Do you mean that with few trials Monte Carlo doesn't give accurate results? Please be more specific.\", \"\\\"we can determine the expected number of significant binary digits available from a p-digit FP system as p \\u2265 \\u2212log2(\\u03b4)\\\". 
I'm unable to follow this statement, please explain further. Also, from applying logs to \\u03b4 \\u2264 2^{\\u2212p} one gets an inequality that doesn't match the one in p \\u2265 \\u2212log2(\\u03b4).\", \"\\\"The relative error of an MCA operation is, for virtual precision t, is \\u03b4 \\u2264 2^-t\\\", \\\"is\\\" is used twice here.\", \"\\\"the expected number of significant binary digits in a t-digit MCA operations is at least t\\\", operations --> operation. Also, shouldn't it be at most t, otherwise K becomes negative?\", \"is discussed in the section --> is discussed in the next section.\", \"Some mistakes in section 4.1:\", \"y = (x; w) --> y = f(x; w)\", \"Eq. (8) is sloppy, it uses X for both the set and its size. Use |X| or something similar for the size.\", \"In the caption of fig. 2, baes --> base\", \"I'm not convinced by the second bullet point in section 4.1, that the averaging over many images used to obtain the accuracy is the reason why MCA doesn't work well. Surely, the training loss (cross entropy) is also an average over many images? To me it would seem more plausible that the main reason MCA works with training loss but not accuracy is because accuracy is discrete, whereas training loss is continuous.\", \"Fig. 3 would be much easier to read if the axes were labelled, and if the axes had the same range (so that different plots can be compared visually).\", \"Fig. 5 would be easier to read if the networks were sorted with respect to K.\", \"In section 5, CIFAR-10 is sometimes written as CIFAR10.\", \"Appendix A is empty, so it should be removed.\"]}" ] }
r1xI-gHFDH
How can we generalise learning distributed representations of graphs?
[ "Paul M Scherer", "Pietro Lio" ]
We propose a general framework for constructing unsupervised models capable of learning distributed representations of discrete structures such as graphs, based on R-Convolution kernels and distributed semantics research. Our framework combines the insights and observations of Deep Graph Kernels and Graph2Vec towards a unified methodology for performing similarity learning on graphs of arbitrary size. This is exemplified by our own instance, G2DR, which extends Graph2Vec from labelled graphs to unlabelled graphs and tackles issues of diagonal dominance through pruning of the subgraph vocabulary composing the graphs. These changes produce new state-of-the-art results in the downstream application of G2DR embeddings to graph classification tasks, ranging from binary classification on datasets of small labelled graphs to multi-class classification on large unlabelled graphs, using an off-the-shelf support vector machine.
[ "graphs", "distributed representations", "similarity learning" ]
Reject
https://openreview.net/pdf?id=r1xI-gHFDH
https://openreview.net/forum?id=r1xI-gHFDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "UiUyNvGyv7", "H1lWHaNhir", "rJgHTiE2sr", "H1lnJeRqiH", "H1eWRE9ciS", "rylK_u2bsH", "rkxdXdnbor", "Sylfcv2bjS", "SyxdvPnZjr", "BkxnW6eCFS", "ryexkwlRKr", "S1xV-886FS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741489, 1573829945449, 1573829564723, 1573736420247, 1573721289405, 1573140593341, 1573140511901, 1573140362040, 1573140320136, 1571847427553, 1571845847693, 1571804667610 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2137/Authors" ], [ "ICLR.cc/2020/Conference/Paper2137/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2137/Authors" ], [ "ICLR.cc/2020/Conference/Paper2137/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2137/Authors" ], [ "ICLR.cc/2020/Conference/Paper2137/Authors" ], [ "ICLR.cc/2020/Conference/Paper2137/Authors" ], [ "ICLR.cc/2020/Conference/Paper2137/Authors" ], [ "ICLR.cc/2020/Conference/Paper2137/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2137/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2137/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposed a general framework to construct unsupervised models for representation learning of discrete structures. The reviewers feel that the approach is taken directly from graph kernels, and the novelty is not high enough.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Uploaded Revision\", \"comment\": \"We would like to thank all of the reviewers for reading our work and providing feedback to improve our work and correct mistakes.\\n\\nWe have taken these into consideration and uploaded a revision. On top of including as many of the pointers and promised revisions as possible, we have changed parts of the presentation to be clearer on the different contributions.\", \"main_points_of_revision\": [\"We have updated the abstract to be clearer about the specific contributions of this work. (unfortunately we cannot update the abstract on this webpage.)\", \"We have included suggested related work from Reviewer #3 and made comments within the paper, as well as including their results on our selection of benchmark datasets.\", \"We have expanded on our heuristic to prune the subgraph pattern vocabulary to handle the indirect influence of diagonal dominance as suggested by Reviewer #2 within Section 4.2.1. We point out how this helps the skipgram model learn a more useful representations (for the downstream graph classification task).\", \"We have attempted to better separate the description of the framework for building models which can learn distributed representations of graphs, and the presentation of an extended version of Graph2Vec, described using this framework.\", \"Changes to the section titles and seperation of some previous subsections into their own to reflect above point, and make the content clearer.\", \"Removal of figure 1 and Algorithm 2, replaced with equation of the objective function to be optimised in the learning phase to save space and include revisions promised elsewhere.\", \"Updated the results tables, as we previously presented the standard deviation of the downstream SVM instead of multiple iterations of the entire system as pointed out by Reviewer #1. 
We have also included the suggested models of Reviewer #3.\", \"We have updated sections 5.2 and 5.3 to give a better discussion of the results, a comparison with other approaches, and a comment on the PTC dataset and the challenge it poses to graph classification systems.\", \"Once again we would like to thank all who have read our work and provided pointers for improvement, both for the revision and for future work.\"]}", "{\"title\": \"Building blocks\", \"comment\": \"In general I am OK with the authors' comments.\\n\\nHowever, given the many papers on this topic, I think that the general structure of this type of algorithm, i.e., what kind of building blocks we should use, is rather obvious. I think that the main remaining issue here is to improve some of the blocks and prove under which conditions those modifications can have some impact on theoretical properties, e.g. the ability to solve graph isomorphism.\\n\\nTherefore, I do not think I can increase my grade significantly.\"}", "{\"title\": \"Thank you for your comment\", \"comment\": \"Thank you for your comment.\\n\\n\\\"- as we already discussed, the approach is the mix of several existing approaches\\\"\\n\\nAs in the first response, the paper does not describe a single approach or single model exclusively; we are highlighting a common framework/workflow utilised by other models which learn distributed representations of graphs (deep graph kernels, graph2vec, and anonymous walk embeddings can all be described within this framework). The second portion presents an extended version of Graph2Vec described using this framework as an example, which we call G2DR for easier reference. We will revise the abstract to make this clearer from the start.\\n\\n\\\"-one of the main differences of the proposed approach is that the authors use some other method (which is already known) to construct a vocabulary of subtree patterns for large graphs\\\"\\n\\nYes, one extension to the current implementation of Graph2Vec is to use the more general WL node relabeling algorithm described by Shervashidze et al. We don't believe the use of a good algorithm described by a well-known paper in graph kernels is a bad thing. \\n\\n\\\"- the computational complexity of the proposed approach is high due to high computational complexity of constructing the vocabulary. Is it worth using this approach due to its high computational cost?\\\"\\n\\nThe construction of the substructure vocabulary is O(|G|dm), where |G| is the number of graphs in the dataset, d is the highest degree of subtree pattern we wish to extract, and m is the highest number of edges in a graph within the dataset. An advantage of using the distributed approach is that clear associations can be drawn between graphs that are deemed similar, since the different subgraph patterns are recorded as contexts and can be presented directly. This can be useful for subsequent analysis of the motifs present in different classes of graphs.\\n\\n\\\"- the experimental section is rather weak. Since the approach is based on using different building blocks, it is necessary to know which of building blocks is the most important and provides the most contribution to increase in accuracy. Is it due to the new vocabulary? Or doc2vec? or what?\\\"\\n\\nThe presented results stem from the pruned vocabulary. This is the only operation differing from Graph2Vec that directly affects the learning model (skipgram). 
\\n\\nThe other \\\"building blocks\\\" such as using Shershavidze et al's WL-relabeling algorithm and suggestion on labeling nodes by degree allows application of the model to learn distributed representations of unlabelled graphs; hence the results on the Reddit datasets. Otherwise the WL algorithm produces exactly the same subtrees as the WL algorithm described in Graph2Vec in the labeled case; which then get pruned based on their frequency. \\n\\nOnce again thank you for your comment. Hopefully this addresses some of the new questions.\"}", "{\"title\": \"Still I am not convinced\", \"comment\": [\"as we already discussed, the approach is the mix of several existing approaches\", \"one of the main differences of the proposed approach is that the authors use some other method (which is already known) to construct a vocabulary of subtree patterns for large graphs\", \"the computational complexity of the proposed approach is high due to high computational complexity of constructing the vocabulary. Is it worth using this approach due to its high computational cost?\", \"the experimental section is rather weak. Since the approach is based on using different building blocks, it is necessary to know which of building blocks is the most important and provides the most contribution to increase in accuracy. Is it due to the new vocabulary? Or doc2vec? or what?\", \"as a results still I am not convinced in the provided comments.\"]}", "{\"title\": \"Response for Blind Review #1\", \"comment\": \"First of all thank you very much for your review of this work, we will attempt to address some of the comments and questions individually\\n\\n\\u201cThis paper studied unsupervised graph representation learning. The authors combined the techniques for Deep Graph Kernels and Graph2Vec, which essential extract substructures as words and the whole graph as documents and use doc2vec for learning the representations of both graphs and substructures.\\u201d\\n\\nYou are correct in this summary, however we may have not adequately stressed that the approach more generally highlights that graphs may be represented distributively via its internal substructure patterns (such as walks, nodes, induced subgraphs, subtrees, etc. as highlighted in Deep Graph Kernels [DGK]) as context. This allows a variety of embedding methods which exploit the distributive hypothesis to be applied on the graph-subpattern context pairs to learn vector representations of graphs (skipgram, cbow, GLOVE, pmi) etc. The revision will try to make this distinction clearer in the introduction. The intended contribution is an acknowledgement of the observation that many kernels fall under the R-Convolution framework (Haussler 1999) in DGK (Yanardag and Vishwanathan 2015); and generalisation beyond building representation with edit distance matrices and word2vec towards all embedding methods which exploit the distributive hypothesis.\\n\\nThe 2nd half of this work presents G2DR (which is a straight-forward extension of the Graph2Vec model) to exemplify an instance of this framework using a decomposition of graphs to subtrees using the WL algorithm and building distributed representations with a skipgram model. 
This is just one possible instance of the approach above, and we chose to extend Graph2Vec (it could have been called Graph2Vec2, but we felt G2DR was more appropriate whilst acknowledging the previous work) as it could be modified to be utilised on a wider set of graph types.\\n\\n\\u201cHowever, the novelty of the proposed method is very marginal. Comparing to the Deep Graph kernel methods, the authors simply changed from the word2vec style methods to doc2vec style methods.\\u201c\\n\\nYou are correct that Graph2Vec extends deep graph kernels through the use of WL subtree contexts followed by a skipgram architecture posed in the form of doc2vec. The contribution of G2DR is to replace the implementation of the WL subtree decomposition with that in the WL Kernel (Shervashidze et al., 2011) and take on their suggestion of relabeling unlabelled graphs by degree to allow building representations of unlabelled graphs (REDDIT graphs, for example). Furthermore, we also attempt to lessen the problem of diagonal dominance in DGK/Graph2Vec by pruning the vocabulary of context subgraph patterns to improve downstream classification performance.\", \"onto_some_of_the_questions\": \"\\u201c(1) The data sets used in this paper are too small. For unsupervised pretraining methods, much larger data sets are expected. \\u201c\\nIndeed, it is difficult to find good large/public/popular datasets for comparative analysis with related works. As reported in the work, we sourced our datasets from https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets (Kersting et al., 2016) around 2018. The REDDIT datasets are still the (2nd?) largest on this list and are regularly used in the related literature, which motivates their use in our work (as well as the fact that they have unlabelled nodes).\\n\\n\\u201c(2) The results in Table 1 are really weird. Why do the performance of your method have a much lower standard deviation? It is really hard to believe the proposed methods have much stable performance compare to other methods. Can you explain this?\\u201d\\n\\nThank you for this observation! You are the only one that noticed this in the results table. The authors sincerely apologise for this mistake: the presented standard deviation comes from the 10-fold SVM CV on the same embeddings output by G2DR, through an old development file that reused pretrained embeddings for experiments. The revision will present the standard deviation of the 10-fold SVM CV run on 10 trained embedding outputs of G2DR. This should give a more realistic picture of expected performance on the benchmarks. \\n\\nWe hope that this clarifies some points; we thank the reviewer for the constructive feedback and would be happy to discuss more.\"}", "{\"title\": \"Response for Blind Review #2\", \"comment\": \"Thank you very much for reading this work and providing feedback. We will attempt to address each point individually.\\n\\n\\u201c1. The main issue with this method is the computational complexity due to exponential growth of vocabulary of subtree patterns size for large graphs. Particularly , for experiments with unlabeled graphs, the performance is significantly worse than CNN based models. How would the performance be on unlabeled small graphs? For example, have you verified the performance on small graphs of section 4.2 when labels are ignored? 
(downstream clustering task)\\u201d\\n\\nIndeed, the computational complexity of this approach is high in the embedding learning stage due to the exponential growth of the subtree patterns extracted as the graphs get larger and more heterogeneous in terms of node labels. However, we believe it is nonetheless interesting to look at alternative inductive biases (such as a distributive one, with various definitions of context) to learn representations of graphs. We believe intelligent definitions of \\u201ccontext\\u201d or vocabulary pruning can help significantly in this regard.\\n\\nWe have not tried applying this to small unlabelled graphs. If time permits, this will be in the revision (within an appendix) using labeled datasets such as Mutag. Thank you for this suggestion.\\n\\n\\u201c 2. The neural language models rely on the concept of context in documents. How the concept of context defined for subtree patterns extracted by Weisfeiler-Lehman algorithm?\\u201d\\n\\nYes, defining the context is very important for learning useful distributive representations, and there are many different ways this can be done in natural language processing. For learning whole-graph representations, the context for a graph was its induced subtree patterns (which are extracted using the WL algorithm).\\n\\n\\u201c3. The issue of diagonal dominance should be clarified. How does the pruning tackles this issue?\\u201d\\n\\nWe will attempt to describe the issue of diagonal dominance more concretely in the revision. Essentially, diagonal dominance is related to the explosive increase in unique induced subgraph patterns when building our vocabularies. An example of this can be seen in our work: using the WL relabeling algorithm on the NCI1 dataset, the first iteration yields 267 subtrees, the second 4033, and the third iteration 22923 subtree patterns within the graphs of NCI1. Consequently, as the number of features (vocabulary size) grows, we run into the sparsity problem, where only a few substructures will be common across the graphs. This leads to the phenomenon known as diagonal dominance, where graphs become more similar to themselves but more distant from other graphs in the dataset. The naive pruning directly tackles this issue by removing the dimensions corresponding to vocabulary instances that only appear a few times. Smarter ways of reducing this effect would lead to better distributed representations, as we lightly touch upon in the discussion of the final results. We will try to make this more apparent in the revision. Thank you for this comment.\\n\\nWe thank the reviewer for reading our work and the constructive feedback; we will work to integrate some of the comments into our revision.\"}", "{\"title\": \"Response for Blind Review #3 Part 2 (of 2)\", \"comment\": \"\\u201cAlso, the Figure 1. is taken from the original paper of WL kernel. The algorithms 1 and 2 are taken from the original papers with slight modifications. \\u201c\\n\\nAs stated previously, we explicitly say that we use the algorithms from their respective papers, with acknowledgement, to aid the description of the G2DR and Graph2Vec approaches, with notational changes for consistency in the explanations. \\n\\n-For algorithm 1 \\u201cWL-Relabel\\u201d in section 3.1.1. <<... This is achieved as a byproduct of WL test\\u2019s node relabeling (Shervashidze et al., 2011) and is fully described in algorithm 1 ...\\u201d>>\\n\\n-For algorithm 2 \\u201cTrain-Graph2Vec\\u201d in section 3.1.3 <<... 
We follow Graph2Vec (Narayanan et al., 2017) and use a PV-DBOW\\u2026 as outlined in algorithm 2>>\\n\\nWe think presenting the algorithms helps the reader refer to details within the paper itself, with notation that is consistent within this work, to get more exposition if they wish to do so. However, together with the comment that PV-DBOW is an ill-suited name for the method, we may remove algorithm 2 and replace it with the objective function of the word2vec/graph2vec algorithm to save space and address other points. Similarly for figure 1, which was used for explanatory purposes (it actually has an additional node number 5 in comparison to the figure in the WL Kernel, so the extracted subtree is accordingly different as well): we may remove this in favour of addressing some of the other points in the reviews, or make a showcase of the subtree extraction on a more obviously different graph. Thank you for the comment; we will revise the document as necessary.\\n\\n\\u201cThere is no discussion of [1], which uses CBOW framework, has theoretical properties, and produces good results in experiments. There is no comparison with GNN models such as [2]. \\u201c\\n\\nThank you for introducing us to [1] (AWE). We simply had not come across this work while working on this project. This is a very nice paper with clear parallels to this work, as it has to DGK and Graph2Vec as well. In fact, the style of this work is very similar to Graph2Vec, with the usage of anonymous walks instead of subtree patterns as input into context-based language models. In one sense, AWE is another method that can fall under the framework described in the introduction and section 3, alongside DGK, Graph2Vec, and G2DR. This is very neat; thank you for pointing this out, and we will try to include it in the revision and results tables.\\n\\nThank you for pointing out [2]. Our original comparison for GNN/convolution-based methods was between DiffPool and PATCHY-SAN, as described in the paper. Because DiffPool had only published results, without standard deviations, for one of the datasets within our selection, we did not include it, in favour of PATCHY-SAN, which did cover all the datasets with appropriate standard deviations. The GIN in [2] seems to have results for almost all of the datasets, so we will include it in the results table of the revision. To help clarify relations with the GNN-based methods, we will retitle section 2.2 as \\u201cDeep learning approaches: GNNs and convolutional approaches\\u201d.\\n\\n\\u201cI would be more interested to see explanation of the obtained results for each particular dataset (e.g. why MUTAG has 92% accuracy and PTC 67%); what so different about dataset and whether we reached a limit on most commonly used datasets. \\u201c\\n\\nYes, thank you; this is an interesting question about what defines the difficulty of the classification tasks based on the properties of the dataset. From the point of view of building distributed representations, we think an interesting way to look at it would be the characterisation of the substructure pattern distributions for graphs of different classifications. In PTC there may be clear overlap between the distributions, which makes it hard to build representations that are easy to separate downstream. If time permits within the revision period, we will answer either directly in a comment or within an appendix section.\\n\\nWe thank the reviewer for reading our work and the constructive feedback. 
We will work to incorporate tips from the comments into the revision.\"}", "{\"title\": \"Response for blind review #3 Part 1\", \"comment\": \"First of all, thank you very much for your review of this work; we will attempt to address some of the comments and questions below.\\n\\n\\u201cDespite having good experimental results, the paper is not of the quality to be accepted to the conference yet. The approach is rather a mix of previous works and hence not novel.\\u201d \\nAnd\\n\\u201cIn particular, the algorithm for WL decomposition is almost fully taken from the original paper with a slight modification... \\u201c\\n\\nThis paper relies on previous models such as Deep Graph Kernels and Graph2Vec to extract and explicitly specify a general pipeline for building models capable of learning distributed representations of graphs. The pipeline is based on two parts: the decomposition of graphs into substructures (walks, subtrees, nodes, etc.) and the learning of distributed representations using such substructures, with different definitions of context and associated embedding methods (word2vec, GloVe, etc.).\\n\\nThe second half of the write-up focuses on G2DR (explicitly stated as an extension of Graph2Vec) as an instance of the pipeline described above. G2DR is a straightforward extension of Graph2Vec to more graph types (unlabelled graphs) through the adoption of Shervashidze et al.\\u2019s WL algorithm to find subtree patterns; we have put it in this work with minor modification for notation because otherwise it wouldn\\u2019t be the same WL algorithm. We believe in keeping the algorithm in the paper as it aids description of the specific implementation used, and it is correctly acknowledged as being the Shervashidze WL algorithm within the paper (section 3.1.1). We are afraid that simply pointing to Shervashidze et al.\\u2019s exact presentation would detract from the reading and flow of the paper, as different notation is used.\", \"to_summarise_we_can_garner_two_contributions_here\": \"Specification of a general pipeline for building models capable of learning distributed representations of graphs.\\nAn extended version of Graph2Vec, called G2DR, which is applicable to unlabelled graphs and is also more robust to diagonal dominance through pruning of the subgraph vocabularies. This makes it perform better on larger graphs/datasets. \\n\\n\\u201cAdvantage of using it for unlabeled data is poorly motivated as unlabeled graphs can easily take statistics such as degree as the node labels, which was shown well in practice.\\u201d\\n\\nWe explicitly state our use of Shervashidze et al.\\u2019s suggestion to label unlabelled nodes initially by their degree; otherwise, the WL algorithm cannot be run for unlabelled graphs such as the Reddit datasets. The contribution here is the application of this suggestion within another existing algorithm (Graph2Vec) to expand its applicability to more graph types and improve the performance of the GetSubgraph() algorithm (their rendition of the subtree decomposition algorithm) stated in Graph2Vec. \\n\\nOnce the unlabelled nodes are labelled by their degree, the motivation for using the WL algorithm rests on motivating the usage of the extracted rooted subtree patterns. We touch upon this in section 3.1.1, and it is potentially better covered in the WL Kernel and Graph2Vec works. 
Essentially, the motivation is that they are higher-order substructures (than nodes), that they capture a non-linear definition of the neighbourhood around a node (as compared to a random walk), and that the exhaustive decomposition into subtree patterns for every node in the graph is useful to characterise all the patterns (subtree patterns) within a given graph. Another pragmatic motivation is that the WL Kernel has been shown to work well in graph classification tasks. We will try to make these motivations clearer in the paper; thank you for this comment and suggestion. \\n\\n\\u201cModified PV-DBOW is in fact the same algorithm as the original CBOW model but applied to different context. It has been used in many papers, including Deep GK, graph2vec, anonymous walks. \\u201c\\n\\nYes, you are completely correct! We explicitly say that we are using the embedding method from Graph2Vec (hence the name of the algorithm also being TrainGraph2Vec). We kept the misleading Doc2Vec analogies used in Graph2Vec as it aided exposition of how one can think of a graph as a composition of substructures, like documents being compositions of words. As the contexts of the graphs are defined as the subtree patterns within them, it is actually more similar to training a word2vec model, as you mention. To make this clear, we will change the title of this section in the revision. Thank you for this comment.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an unsupervised method for graph embedding.\\n\\nDespite having good experimental results, the paper is not of the quality to be accepted to the conference yet. The approach is rather a mix of previous works and hence not novel. \\n\\nIn particular, the algorithm for WL decomposition is almost fully taken from the original paper with a slight modification. Advantage of using it for unlabeled data is poorly motivated as unlabeled graphs can easily take statistics such as degree as the node labels, which was shown well in practice. \\n\\nModified PV-DBOW is in fact the same algorithm as the original CBOW model but applied to different context. It has been used in many papers, including Deep GK, graph2vec, anonymous walks. \\n\\nAlso, the Figure 1. is taken from the original paper of WL kernel. The algorithms 1 and 2 are taken from the original papers with slight modifications. \\n\\nThere is no discussion of [1], which uses CBOW framework, has theoretical properties, and produces good results in experiments. There is no comparison with GNN models such as [2]. \\n\\nI would be more interested to see explanation of the obtained results for each particular dataset (e.g. why MUTAG has 92% accuracy and PTC 67%); what so different about dataset and whether we reached a limit on most commonly used datasets. \\n\\n[1] Anonymous Walk Embeddings, ICML 2018, Ivanov et al. \\n[2] How Powerful are Graph Neural Networks? ICLR 2019, Xu et 
al.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Strength:\\n-- The paper is well written and easy to follow\\n-- Learning the unsupervised graph representation learning is a very important problem\\n-- The proposed approach seems effective on some data sets.\", \"weakness\": \"-- The novelty of the proposed approach is very marginal\\n-- The experiments are very weak. \\n\\nThis paper studied unsupervised graph representation learning. The authors combined the techniques for Deep Graph Kernels and Graph2Vec, which essential extract substructures as words and the whole graph as documents and use doc2vec for learning the representations of both graphs and substructures. Experimental results on a few data sets prove the effectiveness of the proposed approach. \\n\\nOverall, the paper is well written and easy to follow. Learning unsupervised graph representation learning is a very important problem, especially for predicting the chemical properties of molecular structures. However, the novelty of the proposed method is very marginal. Comparing to the Deep Graph kernel methods, the authors simply changed from the word2vec style methods to doc2vec style methods. The paper could be better fit to a more applied conference. Moreover, I have some concerns on the experiments. \\n(1) The data sets used in this paper are too small. For unsupervised pretraining methods, much larger data sets are expected. \\n\\n(2) The results in Table 1 are really weird. Why do the performance of your method have a much lower standard deviation? It is really hard to believe the proposed methods have much stable performance compare to other methods. Can you explain this?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a framework for learning distributional representations of graphs in the following way: First, each graph is represented as a collection of subtree patterns. Second, the neural language model of doc2vec is applied to these collections of patterns to learn graph embeddings. These embeddings are then exploited in downstream analyses such as classification. Overall, the idea of formulating graph representation learning as a language model is interesting. The experiments show that it perform better than kernel methods. I have the following major comments:\\n\\n1. The main issue with this method is the computational complexity due to exponential growth of vocabulary of subtree patterns size for large graphs. Particularly , for experiments with unlabeled graphs, the performance is significantly worse than CNN based models. How would the performance be on unlabeled small graphs? For example, have you verified the performance on small graphs of section 4.2 when labels are ignored? (downstream clustering task)\\n\\n2. The neural language models rely on the concept of context in documents. How the concept of context defined for subtree patterns extracted by Weisfeiler-Lehman algorithm?\\n\\n3. The issue of diagonal dominance should be clarified. How does the pruning tackles this issue?\"}" ] }
BJl8ZlHFwr
Relation-based Generalized Zero-shot Classification with the Domain Discriminator on the shared representation
[ "Masahiro Suzuki", "Yutaka Matsuo" ]
Generalized zero-shot learning (GZSL) is the task of predicting a test image from seen or unseen classes using pre-defined class-attributes and images from the seen classes. Typical ZSL models assign the class corresponding to the most relevant attribute as the predicted label of the test image based on the learned relation between the attribute and the image. However, this relation-based approach presents a difficulty: many of the test images are predicted as biased to the seen domain, i.e., the \emph{domain bias problem}. Recently, many methods have addressed this difficulty using a synthesis-based approach that, however, requires generation of large amounts of high-quality unseen images after training and the additional training of a classifier given them. Therefore, for this study, we aim at alleviating this difficulty in the manner of the relation-based approach. First, we consider the requirements for good performance in a ZSL setting and introduce a new model based on a variational autoencoder that learns to embed attributes and images into the shared representation space which satisfies those requirements. Next, we assume that the domain bias problem in GZSL derives from a situation in which embedding of the unseen domain overlaps that of the seen one. We introduce a discriminator that distinguishes domains in a shared space and learns jointly with the above embedding model to prevent this situation. After training, we can obtain prior knowledge from the discriminator of which domain is more likely to be embedded at any location in the shared space. We propose combining this knowledge with the relation-based classification on the embedded shared space as a mixture model to compensate the class prediction. Experimentally obtained results confirm that the proposed method significantly alleviates the domain bias problem in relation-based settings and achieves almost equal accuracy to that of high-cost synthesis-based methods.
[ "classification", "difficulty", "domain bias problem", "domain discriminator", "representation", "test image", "images", "domain", "training", "requirements" ]
Reject
https://openreview.net/pdf?id=BJl8ZlHFwr
https://openreview.net/forum?id=BJl8ZlHFwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "5OPLk_z4sS", "HkeUU1sniH", "Skg-wJPsir", "BJgl-GG5jB", "ByxAKp-coS", "BygJU3-ciB", "H1lg2tWcjr", "rJgIEz285r", "rJg9dq0kqr", "S1lrWRQvtH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741458, 1573855054028, 1573773144678, 1573687800132, 1573686661568, 1573686342638, 1573685672497, 1572418094061, 1571969650107, 1571401212973 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2136/Authors" ], [ "ICLR.cc/2020/Conference/Paper2136/Authors" ], [ "ICLR.cc/2020/Conference/Paper2136/Authors" ], [ "ICLR.cc/2020/Conference/Paper2136/Authors" ], [ "ICLR.cc/2020/Conference/Paper2136/Authors" ], [ "ICLR.cc/2020/Conference/Paper2136/Authors" ], [ "ICLR.cc/2020/Conference/Paper2136/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2136/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2136/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a relation-based model that extends VAE to explicitly alleviate the domain bias problem between seen and unseen classes in the setting of generalized zero-shot learning.\\n\\nReviewers and AC think that the studied problem is interesting, the reported experimental results are strong, and the writing is clear, but the proposed model and its scientific reasoning for convincing why the proposed method is valuable is somewhat limited. Thus the authors are encouraged to further improve in these directions. In particular:\\n\\n- The idea of using a variant of the widely-used domain discriminator to make seen and unseen classes distinguishable is somewhat contradicted to the basic principle of zero-shot learning. How to trade off the balance between seen and unseen classes has been an important problem in generalized ZSL. These problems need further elaboration.\\n\\n- The proposed model itself is not a real \\\"VAE\\\", making the value of an extensive derivation based on variational inference less prominent. \\n\\n- There is also the need to compare with the baselines mentioned by the reviewers. \\n\\nOverall, this is a borderline paper. Since the above concerns were not addressed convincingly in the rebuttal, I am leaning towards rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Update our paper\", \"comment\": \"We have corrected some more errors and typographical errors. In particular, the description of the hyperparameter that adjusts the variance of the inference model in the computation of the objective function of the domain discriminator was missing, so we mentioned this in the appendix.\\n\\nThank you.\"}", "{\"title\": \"Update our paper\", \"comment\": \"We thank all reviewers for their insightful and detailed comments. We updated our paper according to your feedback. In addition to the parts pointed out by the reviewers, we also fixed the following:\\n\\n- Simplification and clarification of descriptions of related studies and experiments. We also moved some parts (explaining CADA-VAE and domain discriminator learning) to the appendix.\\n\\n- Fixed some errors and mistakes. 
In particular, we corrected Eq.8 because it was wrong.\\n\\n- We redrew some figures, such as Fig.1, to make them easier to see.\\n\\nWe emphasize that these revisions do not harm the overall contribution of this paper.\\nMoreover, we are sorry to Reviewer #3 that we could not reduce the main text to 8 pages, but we did reduce it from 10 pages to 9 pages.\\n\\nThank you.\"}", "{\"title\": \"Response to Review #3 (2/2)\", \"comment\": \">> Besides, in Tab.2 there lacks of necessary comparisons with recent relation-based approaches e.g.[r3][r4], which makes the evaluation less sufficient.\\n\\nThank you for presenting related works. [r4] does not report GZSL results, so we will not list it in Table 2.\\nOn the other hand, [r3] is a study on relation-based GZSL, which is related to our method. However, we have doubts about the results of this paper. This is because the paper does not state after how many epochs training was stopped. We ran the authors' implementation of this paper [1] and confirmed that its performance overfits as the number of epochs increases, becoming much lower than the results of the original paper. \\nIn this implementation, the model at the epoch with the best *test* performance is stored as `\\\"Best_model_GZSL_H_X_S_X_U_X.tar\\\"`. From this, we suspect that the accuracy reported in that paper was selected at the epoch with the best test accuracy, which is not a valid evaluation protocol. Therefore, we have doubts about this result, so we will not compare against it in Table 2. \\nWe are also considering presenting the evaluation of the proposed method at the epoch when test performance is best in the appendix for reference.\\n[1] http://vipl.ict.ac.cn/resources/codes\\n\\n> 5.2) How is the class separation formulated in the framework?\\nThe class separation means the degree to which the representation $z$ inferred by $q(z|x)$ and $q(z|a)$ is separated in the representation space for each class.\\n\\n> 5.3) In Sec.3.2, why is the log-likelihood of the generative models can be obtained by the L1 loss?\\nThis was our mistake. To be correct, we use a Laplace distribution with a fixed scale as the generative model. Therefore, the log-likelihood is the *negative* absolute-difference loss. When sampling from this generative model, only a mean parameter is output deterministically, as with many VAE methods. \\nWe will fix these in the revised version later.\\n\\n> 2. Incomplete reference: for Probabilistic semantic embedding (PSE), the reference should add the conference information.\\n\\nThis research was rejected at ICLR last year and is only available on OpenReview (not uploaded to arXiv). Therefore, we cannot add conference information, so we will add the OpenReview link of this paper to the reference instead.\\n\\n> 5.1) The formulation of GZSL is incorrect. Y= union(y_s y_u), but not intersection(y_s y_u)\\n> 1. Better use vectorgraphs for clear view (especially for Figure 3 and 4). \\n> 3. Grammar and spelling mistakes: \\n> 4. The color bar for the contours at the rightmost of Figure 3 is not clear (not the standard way to draw a color bar, better refer to what a color bar is usually drawn).\\n\\nThank you. We will fix these later.\\n\\n>> 5. If possible, better reduce the main text to 8 pages as recommended by the submission instructions (e.g. 
some content of the method part can be moved to the appendix?).\\n\\nWe think that all explanations of the proposed method are important and it is difficult to move them to the appendix, but in the revised version we will try to reduce the number of pages as much as possible.\\n\\nAgain, we appreciate all of your comments.\"}", "{\"title\": \"Response to Review #3 (1/2)\", \"comment\": \"Thank you very much for providing detailed comments.\\nWe will answer your questions. In addition, we will upload a revised version reflecting your comments later.\\n\\n> 1. Although the author claims that the proposed method is a relation-based method, it is strange that the proposed method is called xxVAE but in Table 2 it doesn't fall into synthesis-based methods (as CVAE-ZSL and CADA-VAE do). Although it is derived from VAE, the current method doesn't seem to be called a VAE any more (some of the regularizations of the VAE are relaxed). Also, are the two terms -- relation-based and synthesis-based -- first proposed by the author? Is there a clear boundary between those two groups of methods?\\n\\nAs you pointed out, the terms \\\"relation-based\\\" and \\\"synthesis-based\\\" were proposed by us, but we believe that the difference between them is clear. \\nThe relation-based method learns a compatibility function during training and uses it to predict labels during testing. This method does not require an explicit classifier but classifies labels according to Eq.1. Therefore, this method has the advantage that it can be easily extended to any number of classes if the compatibility function is generalized to the unseen domain.\\nThe synthesis-based method, on the other hand, learns a generative model from attributes. However, since it cannot perform classification including unseen classes itself, it is necessary to prepare and learn a classifier that predicts (both seen and unseen) class labels from images generated from the generative model. In other words, synthesis-based is a framework that requires the training of a classifier given the synthesized data.\\n\\nTherefore, the difference between relation-based and synthesis-based is whether we need to synthesize data from a model and learn a classifier. Hence, it is not always consistent with the use of deep generative models. \\nFor example, CVAE-ZSL generates images from a decoder after learning VAE and classifies the labels of them using a classifier such as SVM. On the other hand, although our proposed model is a deep generative model, it learns the compatibility function (Eq.2) and makes predictions using it. Therefore, our proposed model does not require both data synthesis and training of another classifier, so it can be regarded as a relation-based approach.\\n\\nIn addition, as you pointed out, the proposed model is no longer a VAE, so we will change the name of the proposed method to MCMAE (Modality-invariant and Class-separable Multimodal AutoEncoder) in the modified version.\\n\\n> 2. It is recommended that an additional figure that depicts the framework is added (similar to Figure 2 in CADA-VAE) to promote better understanding. \\nCurrently, the method part only contains formulas with many parameters, making it difficult to grasp the idea of the whole framework at first glance.\\n\\nI understand. As you pointed out, we will include an additional figure of the framework in the revised version.\\n\\n>> 3. The novelty of this paper is somewhat limited while missing some relevant works, e.g.[r1, r2]. 
[r1] learns a latent space where the compactness within class and separateness between classes are considered. [r2] uses a two-stage prediction for GZSL.\\n\\nThank you for introducing relevant works. We will refer to them in the related work section and discuss the differences from the proposed method. In particular, we would like to list [r2] as a synthesis-based GZSL method in Table 2.\\n\\n> 4. It is a question whether the seen and unseen classes can be separated (Whether a two stage process is correct?). The key for ZSL is knowledge transfer and the base is that seen and unseen classes are related [r3]. If they are separated, can one use the model trained on seen classes to recognize the unseen classes? This is quite problematic. Besides, in Tab.2 there lacks of necessary comparisons with recent relation-based approaches e.g.[r3][r4], which makes the evaluation less sufficient.\\n\\nWe conducted an experiment in which we trained MCMAE-D on the training data (i.e., the seen domain) and then classified the test data using the domain discriminator. As a result, the domain classification performance of MCMAE-D is higher than that of PSE-D and CADA-VAE-D (relation-based). The evaluation by AUROC is as follows.\", \"pse_d\": \"0.78\\nCADA-VAE-D (relation-based) : 0.77\", \"mcmae_d\": \"0.89\\n\\nFrom this result, it was confirmed that the proposed model can discriminate the unseen classes from the test set (these results will be included in the revised version later).\\nFurthermore, from the result of acc_u of MCMAE-D, it can be seen that the unseen classes can be classified with almost the same performance as acc_s. \\nTherefore, we can conclude that the representation learning of the proposed method generalizes (transfers) to the unseen domain to some extent.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We would like to thank the reviewer for providing comments.\\nWe will answer your questions. In addition, we will upload a revised version reflecting your comments later.\\n\\n---\\n> 1.The proposed MCMVAE is No-longer a VAE but an AE with attribute matching loss. Except that a new theory of MCMVAE is proposed, it is not rigorous to relate MCMVAE to VAE.\\n\\n\\nMCMVAE is proposed based on VAE, but as you pointed out, it is no longer a VAE. Therefore, we will change the name of the proposed method from MCMVAE to MCMAE (Modality-invariant and Class-separable Multimodal AutoEncoder).\\n\\n\\n> 2.Add results using synthetic architecture to get a better result will make this method more reliable.\\n\\nAlthough it is an interesting proposal, the purpose of this study is to propose a high-performance model with a relation-based approach. Therefore, we think that building a synthesis-based architecture on top of the proposed model is out of the scope of this study.\\n\\n\\n> 3.Why discriminator is harmful for PSE method?\\n\\nAs explained in section 3.3, when learning the domain discriminator, we learn the inference and generative models in Eq.7 end-to-end so that representations can be distinguished well between domains. \\nAt this time, in order for this domain discriminator to work well on test data, different modalities corresponding to the same example need to be embedded in the same latent space by each inference model; that is, they must be modality invariant.\\nHowever, in PSE, there is no term that guarantees modality invariance like MCMAE's Eq.6. 
Therefore, the modality-invariant inference model is not learned in PSE, and as a result, the performance becomes worse due to the correction of the discriminator.\\nIn addition, we newly verified whether the test data domain can be correctly classified by the domain discriminator. As a result, we confirmed that the proposed method is able to classify the domain most appropriately. This means that the proposed method has obtained the most modality-invariant representation. These results will be added to the revised version later.\\n---\\n\\nAgain, we appreciate all of your comments.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"Thank you very much for providing comments.\\nWe will answer your questions. We will also upload a revision based on your comments later.\\n\\n---\\n> 1) Does the test set has some labels? How do you know your method works well? I can not find where you have defined a kind of loss so that we can compare the predicted labels \\\\hat{y}_j ? (In Section 2.)\\n\\nYes, all test set examples have true labels, but of course, they cannot be confirmed during training. To see if the model works well in the test phase, we compare these true labels with the labels predicted in Eq.1. In the experiment of section 5.2, the performance of the model is evaluated by the average per-class top-1 accuracy on both the seen and unseen classes, and the harmonic mean of them. As pointed out, it is a little difficult to understand for now, so I will revise it later.\\n\\n\\n> 2) How do you learn the \\\"replaced prior\\\" in equation (4) ?\\n\\nLike other parameterized probability distributions, $q_{\\\\phi_{a}}(z|a_y)$ is learned by maximizing the objective function with respect to $\\\\phi_{a}$.\\n\\n\\n> 3) It is not enough detail on how do you optimize the objective (8) ? a detail explain algorithm would make the paper significant, indeed.\\n\\nWe learn all models end-to-end by maximizing Equation 8 for all these parameters $\\\\theta_{x,a}, \\\\phi_{x,a}, \\\\beta$. We didn't write about this, so we'll add it later.\\nFor optimization, we used Adam optimizer, which is described in section 5.1.\\n\\n\\n> 4) In Table1, would MCMVAE in the last row be MCMVAE-D ?\\n\\nThis is as you pointed out. I will fix it later.\\n\\n\\n> Final, I expect the authors will make their codes available for the readers.\\n\\nWe are going to share our codes, but we may not be able to release them within the rebuttal period because we need to rearrange them.\\n---\\n\\nAgain, we appreciate all of your comments.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The main topic of this paper is generalized zero-shot learning. This paper modifies traditional VAE method with attribute matching prior to release the hidden features from original regularization. This paper also proposes a domain discriminator to enhance class-separability of learned features to avoid unseen classes to be covered by seen classes. Experiment results show their efficiency under relation-based setting.\", \"pros\": \"1.This paper proposes an important insight that in generalized ZSL, the unseen classes may be dominated by seen classes in the feature space.\\n2.An easy but efficient domain discriminator method is proposed to separate different classes to avoid domination. 
\\n3.Even without large synthetic learning architecture, the proposed method gets comparable results.\", \"comments\": \"1.The proposed MCMVAE is No-longer a VAE but an AE with attribute matching loss. Except that a new theory of MCMVAE is proposed, it is not rigorous to relate MCMVAE to VAE.\\n2.Add results using synthetic architecture to get a better result will make this method more reliable.\\n3.Why discriminator is harmful for PSE method?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper proposes a relation-based ZSL model which can effectively alleviate the domain bias problem. To this end, first, the paper claims that a good relation-based ZSL model should consider two requirements -- modality invariance and class separability. And the paper designed Modality-invariant and Class-separable Multimodal VAE (MCMVAE) based on VAEs to meet the two aforementioned requirements. Next, the paper hypothesizes that the domain bias problem is due to the overlap between seen and unseen classes in the shared space, and explicitly introduced a discriminator to separate the two domains. The paper performs experiments on ZSL benchmark datasets and shows that the proposed method outperforms other relation-based methods. Besides, the domain discriminator which can be applied to other models demonstrates its effectiveness in reducing domain bias given the experimental results.\\n\\n+Strengths:\\n1. Clear writing logic. The author clearly depicts how to get the final loss of the method step-to-step and the relationship with existing methods.\\n2. The version without the domain discriminator (i.e. MCMVAE) is similar to PSE and CADA-VAE as the author acknowledges. However, the domain discriminator has certain novelty and can be applied to other methods. The overlap among seen and unseen classes is an important problem (domain bias problem named by the author) and the add of the domain discriminator to distinguish whether a sample is from seen classes or unseen classes is reasonable, which can provide better class separability (among seen and unseen classes).\\n\\n-Weaknesses:\\n1. Although the author claims that the proposed method is a relation-based method, it is strange that the proposed method is called xxVAE but in Table 2 it doesn't fall into synthesis-based methods (as CVAE-ZSL and CADA-VAE do). Although it is derived from VAE, the current method doesn't seem to be called a VAE any more (some of the regularizations of the VAE are relaxed). Also, are the two terms -- relation-based and synthesis-based -- first proposed by the author? Is there a clear boundary between those two groups of methods?\\n2. It is recommended that an additional figure that depicts the framework is added (similar to Figure 2 in CADA-VAE) to promote better understanding. Currently, the method part only contains formulas with many parameters, making it difficult to grasp the idea of the whole framework at first glance.\\n3. The novelty of this paper is somewhat limited while missing some relevant works, e.g.[r1, r2]. [r1] learns a latent space where the compactness within class and separateness between classes are considered. [r2] uses a two-stage prediction for GZSL.\\n[r1] Jiang et al. Learning Discriminative Latent Attributes for Zero-Shot Classification. 
In IEEE ICCV 2017.\\n[r2] Zhang et al. Model Selection for Generalized Zero-shot Learning. In arXiv 2018.\\n4. It is a question whether the seen and unseen classes can be separated (Whether a two stage process is correct?). The key for ZSL is knowledge transfer and the base is that seen and unseen classes are related [r3]. If they are separated, can one use the model trained on seen classes to recognize the unseen classes? This is quite problematic. Besides, in Tab.2 there lacks of necessary comparisons with recent relation-based approaches e.g.[r3][r4], which makes the evaluation less sufficient.\\n[r3] Jiang et al. Transferable Contrastive Network for Generalized Zero-Shot Learning. In IEEE ICCV 2019.\\n[r4] Li et al. Discriminative Learning of Latent Features For Zero-Shot Recognition. In IEEE CVPR 2018.\\n5. Some unclear/incorrect descriptions of the method:\\n5.1) The formulation of GZSL is incorrect. Y= union(y_s y_u), but not intersection(y_s y_u)\\n5.2) How is the class separation formulated in the framework?\\n5.3) In Sec.3.2, why is the log-likelihood of the generative models can be obtained by the L1 loss?\", \"minor_issues\": \"1. Better use vectorgraphs for clear view (especially for Figure 3 and 4). \\n2. Incomplete reference: for Probabilistic semantic embedding (PSE), the reference should add the conference information.\\n3. Grammar and spelling mistakes: \\n[1] Content in Figure 2 (not caption): unseen class -> unseen classes\\n[2] Last line in 4.1: MCVAE-D -> MCMVAE-D\\n[3] Last paragraph in 4.2: close -> stay close\\n[4] Last model name in Table 1: MCMVAE -> MCMVAE-D\\n4. The color bar for the contours at the rightmost of Figure 3 is not clear (not the standard way to draw a color bar, better refer to what a color bar is usually drawn).\\n5. If possible, better reduce the main text to 8 pages as recommended by the submission instructions (e.g. some content of the method part can be moved to the appendix?).\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a novel approach for (generalized) Zero-shot learning (GZSL). As showing in the numerical experiments on some real data, the method demonstrates the significant improvement on the accuracy of prediction comparing to some state-of-the-art methods.\\nThe main key of the method is using Variational Inference, variational autoencoders. The authors have taken into account the modality of the data through reparametrize the distributions, especially the inside class invariant modality and class separability. Moreover, the authors also propose to take into account a kind of biasness domain into the learning procedure, which details in adding a regularization of the domain discriminator into the objective function.\\n\\nThe paper is nicely written, espcially with a clear formal introduction to the problem of GZSL.\\n\\nHowever, I have some questions:\\n1) Does the test set has some labels? How do you know your method works well? I can not find where you have defined a kind of loss so that we can compare the predicted labels \\\\hat{y}_j ? (In Section 2.)\\n2) How do you learn the \\\"replaced prior\\\" in equation (4) ?\\n3) It is not enough detail on how do you optimize the objective (8) ? 
a detailed explanation of the algorithm would make the paper significantly stronger.\\n4) In Table 1, should MCMVAE in the last row be MCMVAE-D?\\n\\nFinally, I expect the authors to make their code available to readers.\"}" ] }
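To make the seen/unseen domain-discriminator idea debated in these reviews concrete, here is a minimal sketch of such a regularizer; it is illustrative only, and `latent_dim`, the network sizes, and the names are assumptions rather than the paper's actual choices.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration; the paper's dimensions may differ.
latent_dim = 64
disc = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def domain_regularizer(z, is_seen):
    # z: (batch, latent_dim) latent codes; is_seen: (batch,) 1 for seen-class samples.
    # Training the discriminator to tell seen from unseen latent codes, and adding
    # this term to the objective, pushes the two domains apart in the shared space,
    # which is the claimed mechanism for reducing overlap (domain bias).
    return bce(disc(z).squeeze(-1), is_seen.float())
```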
BJxSWeSYPB
Self-supervised Training of Proposal-based Segmentation via Background Prediction
[ "Isinsu Katircioglu", "Helge Rhodin", "Victor Constantin", "Jörg Spörri", "Mathieu Salzmann", "Pascal Fua" ]
While supervised object detection and segmentation methods achieve impressive accuracy, they generalize poorly to images whose appearance significantly differs from the data they have been trained on. To address this in scenarios where annotating data is prohibitively expensive, we introduce a self-supervised approach to detection and segmentation, able to work with monocular images captured with a moving camera. At the heart of our approach lie the observations that object segmentation and background reconstruction are linked tasks, and that, for structured scenes, background regions can be re-synthesized from their surroundings, whereas regions depicting the object cannot. We encode this intuition as a self-supervised loss function that we exploit to train a proposal-based segmentation network. To account for the discrete nature of the proposals, we develop a Monte Carlo-based training strategy that allows the algorithm to explore the large space of object proposals. We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks, achieving competitive results compared to the few existing self-supervised methods and approaching the accuracy of supervised ones that exploit large annotated datasets.
[ "segmentation", "training", "background prediction", "images", "data", "object detection", "segmentation methods", "impressive accuracy", "appearance", "differs" ]
Reject
https://openreview.net/pdf?id=BJxSWeSYPB
https://openreview.net/forum?id=BJxSWeSYPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "FbM9YFgsS1", "B1lr_hNniH", "S1lT7LJYoS", "BygQUXkKoH", "H1gAXMJFjB", "HyloanZEcr", "HklAE1a2Fr", "BylC3XOcYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741425, 1573829741160, 1573611044570, 1573610315386, 1573610021529, 1572244675110, 1571766070311, 1571615670222 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2135/Authors" ], [ "ICLR.cc/2020/Conference/Paper2135/Authors" ], [ "ICLR.cc/2020/Conference/Paper2135/Authors" ], [ "ICLR.cc/2020/Conference/Paper2135/Authors" ], [ "ICLR.cc/2020/Conference/Paper2135/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2135/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2135/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This work proposes a self-supervised segmentation method: building upon Crawford and Pineau 2019, this work adds a Monte-Carlo based training strategy to explore object proposals.\\nReviewers found the method interesting and clever, but shared concerns about the lack of a better comparison to Crawford and Pineau, as well as generally a lack of care in comparisons to others, which were not satisfactorily addressed by authors response.\\nFor these reasons, we recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Categorical reparameterization vs importance sampling\", \"comment\": \"We tried replacing the importance sampling part in our method with the categorical reparameterization used in (Crawford and Pineau 2019). Since both strategies approximate the same objective, they should lead to very similar outcomes with a possible difference in the convergence speed. To this end we used Gumbel-Softmax distribution and tried this estimator with several different temperature values. Our experiments show that Gumbel-Softmax based categorical reparameterization didn\\u2019t lead to faster convergence (see page 18 in SectionA7, Fig. 13). This might be partly due to the high value of Gumbel noise added to the log probabilities. However, to have a more solid claim, a principled grid search for hyperparameters such as the temperature is necessary. We will provide the exact comparison in our final version after conducting an extensive hyperparameter search.\\n\\nOverall our importance sampling approach is simpler compared to their approach and has the advantage of being an unbiased estimator. In addition to that it does not need custom layers that behave differently in the forward and backwards passes during optimization, which is the case for the Gumbel-Softmax categorical reparameterization.\"}", "{\"title\": \"Initial Response to Review #3\", \"comment\": \"Thank you for the constructive feedback. We are glad that you find the idea original. We address your concerns below.\\n\\nThe changes in the main text are highlighted in red.\", \"reviewer_3\": \"Training of multiple images of the same scene\\n\\nThis comment is related to Reviewer 1\\u2019s: \\u201c\\u2026 assumption that the background follows relatively consistent textures\\u201d. Therefore, we restate our response here.\\n\\nIndeed, using an off-the-shelf inpainting network performed poorly because our images differ significantly from those in the datasets it had been trained on. \\nTherefore, we trained our own inpainting model (see Page 11, Section A1 Implementation Details, The inpainting network). 
It can deal with diverse backgrounds as well as non-consistent ones given enough data. Although in the Ski-PTZ dataset the scene looks homogeneous, in HandHeld190k the scene is cluttered with differently textured objects such as houses, fences and trees. It requires multiple images of the same scene, or videos. Note that we can leverage the input videos directly; we don\\u2019t need additional background images without persons. In the revised version we added a more detailed discussion about the inpainting network.\"}", "{\"title\": \"Initial Response to Review #2\", \"comment\": \"Thank you for your constructive comments. We address your concerns in detail below.\\n\\nThe changes in the main text are highlighted in red.\", \"Reviewer 2\": \"Comparison to [R2] \\u201cUnsupervised learning of depth and ego-motion from video\\u201d and using motion as the supervision.\\n\\nThis comment is related to Reviewer 1\\u2019s: \\u201coptical flow and boundary detection, which I thought are OK cues to be used\\u201d. Therefore, we restate our response here.\\n\\nWhile one can use optical flow and other motion cues from unsupervised techniques to boost performance, the independence of such cues makes our proposed approach applicable to single images and, most importantly, avoids failure cases of optical flow. Motion-based approaches are prone to failure when there is no motion information, or motion that is too complex, to separate the foreground and background, and in textureless areas. These failure cases occur in all our scenarios, as shown in the attached optical flow images (see page 17, section A7 Additional Comparisons to Related Work, Fig. 11) generated from our datasets using FlowNet2.0. Depth and ego-motion prediction suffers from similar problems. There are always multiple ways of addressing the same problem, and only in retrospect does the preferred strategy become clear. In this study, we investigated how far (quite far!) one can get without motion cues. Nevertheless, as we state in the outlook section, we will combine the complementary merits of motion-based strategies in the future; using [R2] will be a viable addition.\"}", "{\"title\": \"Initial Response to Review #1\", \"comment\": \"Thank you for your insightful comments. We address your concerns in detail below.\\n\\nThe changes in the main text are highlighted in red.\", \"Reviewer 1\": \"Typo in Section 3.2 page 5\\n\\nThanks for pointing out the wrong equation number. It should be \\u201cthe foreground objective O of Eq. (4)\\u201d.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper provides a new self-supervised proposal-based approach for object detection and segmentation. The author introduces a Monte Carlo-based optimization to address the inefficiency of the discrete proposal-based forward process defined in (Crawford and Pineau 2019). Also, the paper redefines the decoder objective for self-supervision, from minimizing a reconstruction loss with a background segmentation to maximizing the reconstruction error while learning a foreground segmentation. 
The method is then verified with a suite of experiments for people detection on video datasets.\\n\\nThe main benefit over many previous unsupervised object detection/segmentation approaches is that it does not make use of optical flow or other readily available cues during training. However, the framework comes directly from (Crawford & Pineau 2019), and the only change is from variational inference to an importance-sampling (MC) approach. This would be fine if it were verified in experiments; however, the experiments did not show any comparison w.r.t. (Crawford & Pineau 2019), hence we have no way of understanding the relative performance w.r.t. that baseline approach.\\n\\nBesides, in all the experiments a single object is in the view. How does the method perform in images where multiple objects are in the view?\\n\\nA little bit of a philosophical question is whether this is a problem worth pursuing as well. For self-supervised motion estimation (e.g. optical flow), it is clear why we want to do that. However, the current type of algorithm depends on the assumption that the background follows relatively consistent textures; this may not necessarily be true in practice, and hence the application could be quite limited. Many previous unsupervised video object segmentation methods make use of optical flow and boundary detection, which I thought are OK cues to be used, especially when both can be learned in a self-supervised manner. This is not entirely related to the assessment, but I would still like to hear what the authors think.\", \"minor\": \"In the paragraph after 'Training strategy' (Section 3.2, Page 5), is it 'the foreground objective O of Eq. (2)' or 'Eq. (4)'?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces a method to train a model for object detection/segmentation in a self-supervised manner. The idea is that the background is easy to reconstruct while the foreground/object is hard. Experiments demonstrate the effectiveness of the proposed method.\\n\\nHere are some high-level concerns.\\n\\n1. As mentioned in the \\\"Implementation details\\\", 'naive end-to-end training is difficult... we use ImageNet-trained weights for initialization'. This makes it hard to justify the effectiveness: it may be that the ImageNet-trained model has already captured salient objects. To justify the claims and effectiveness of the method, it should include a comparison with [R1], which demonstrates the possibility of doing detection with a pretrained model. Other work along this line would also be a good reference.\\n\\n2. As a moving camera is available, it is also possible to segment the background from frames through a 6DoF prediction of the camera rotation and translation, e.g., [R2]. The supervision signal comes from frame reconstruction through learning to predict both camera pose and pixel-level depth. This is also self-supervised learning. 
At least such a self-supervised trained model could act as an initialization.\\n\\nConsidering the above points, the paper does not appear compelling, due to a lack of either careful claims or justification.\\n\\n\\n[R1] Learning deep features for discriminative localization\\n[R2] Unsupervised learning of depth and ego-motion from video\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This submission proposes a self-supervised segmentation method that learns from single-object videos by finding the region where it can segment an object, removing the entire bounding box around it, inpainting it, then finally putting the object back. The loss is a balance between a reconstruction error and a negative inpainting error (high error means an object is probably present, due to the weak correlation with the background).\\n\\nMy decision is Weak Accept. I like the method very much and think it\\u2019s a clever and well-executed algorithm. The reason for the Weak is that the experimental evidence could be stronger, especially comparing with Croitoru et al. and Rhodin et al. The paper leaves some open problems, inviting future work to be built on top of it (i.e. leveraging time more and handling multiple objects better). I think it should be accepted to promote such future work.\", \"method\": \"The method is interesting and clever. Similar efforts have been made, such as Bielski & Favaro and Crawford & Pineau. However, key contributions relax some of the contrived requirements of these past methods (e.g. simple foreground translation over background; requirement of a plain background). Thanks to the inpainter, the importance sampler, and the avoidance of collapsing into trivial solutions, this paper is able to put together a method that works on moving cameras and fairly complex scene semantics (still limited to one object though). Actually getting this to converge with only two loss terms balanced against each other is impressive. Having certain heads of the network trained only on one of the two terms seems to be a key contribution.\", \"experimental_results\": \"The comparison with Rhodin et al. on H36M is characterized as \\u201cslightly\\u201d lower, but I would call 71% vs. 58% a significant difference. Of course, I understand that Rhodin et al. relies on the static background, so this is not a fair comparison.\\n\\nThe Ski-PTZ-Dataset should then offer a better comparison, but here the method struggles to compete with Croitoru et al. 2019, which has an 11-point higher F-measure. Handheld190k, the dataset proposed in this paper, is where the method finally shines, but even there it only offers a 1-point F-measure improvement over Croitoru et al. That being said, considering how different the methods are, and how Croitoru et al. requires two-stage training, there are many benefits to this method. Croitoru et al. also relies on video to extract the object features, and this requirement is not as explicit in this work. Actually, that brings me to one question I had. The paper states that \\u201cas long as videos or picture collections of a single object in front of the same scene are available.\\u201d I didn\\u2019t quite understand why this must be trained on multiple images of the same scene. 
If the inpainter is general to any background scenery, couldn\\u2019t it work on single images as well? The conclusion even says you do not use temporal cues.\", \"other\": \"I don\\u2019t think it\\u2019s that meaningful to include precision/recall in the tables. It is also not that meaningful to point out that your method\\u2019s precision is higher than that of Croitoru et al. when the F-measure is shy by 11 points. The reason is that many points on the precision/recall curve could be constructed simply by applying, for instance, a gamma curve to the segmentation predictions. The high precision is clearly at the cost of a low recall, and another point on this tradeoff curve could be presented. This is why F-measure and average precision are much better.\"}" ] }
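A schematic of the two balanced loss terms that Review #3 describes above (reconstruction error versus negative inpainting error). The function signature, the squared-error choice, and the weight `beta` are assumptions for illustration, not the paper's implementation; tensors with elementwise operations (e.g. torch or numpy) are assumed.

```python
def self_supervised_objective(image, mask, inpainted, composite, beta=1.0):
    # `mask`: predicted foreground mask; `inpainted`: background re-synthesised
    # after removing the proposal box; `composite`: the object pasted back on it.
    # Reconstruction should be easy everywhere...
    recon_err = ((composite - image) ** 2).mean()
    # ...while inpainting should fail inside a true object region, so a high
    # masked inpainting error is rewarded (subtracted) rather than penalised.
    inpaint_err = (((inpainted - image) ** 2) * mask).sum() / (mask.sum() + 1e-8)
    return recon_err - beta * inpaint_err
```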
S1lNWertDr
Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses
[ "Asier Mujika", "Felix Weissenberger", "Angelika Steger" ]
Learning long-term dependencies is a key long-standing challenge of recurrent neural networks (RNNs). Hierarchical recurrent neural networks (HRNNs) have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy. Yet, the memory requirements of Truncated Backpropagation Through Time (TBPTT) still prevent training them on very long sequences. In this paper, we empirically show that in (deep) HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks. This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.
[ "computable losses", "dependencies", "hrnns", "hierarchy", "memory requirements", "tbptt", "key", "challenge", "recurrent neural networks" ]
Reject
https://openreview.net/pdf?id=S1lNWertDr
https://openreview.net/forum?id=S1lNWertDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "wd-EqSBWJJ", "BkgDnZ4noH", "BJge242NjH", "SJxfjMU9YB", "HyeYxYAs_H" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741395, 1573826991179, 1573336231648, 1571607194230, 1570658545095 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2132/Authors" ], [ "ICLR.cc/2020/Conference/Paper2132/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2132/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2132/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"All reviewers gave this paper a score of 1.\\nThe AC recommends rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply\", \"comment\": \"We thank the reviewers for their thoughtful comments. Due to the low scores, we decided to not update our manuscript but we will still include the useful feedback into future revisions of the paper.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Claim: Backpropagation of gradients from a higher to lower level in a HRNN can be removed and replaced with auxiliary losses predicting input tokens at the lower level without affecting performance.\", \"significance\": \"The significance of the claim hinges on whether HRNNs are more effective than other methods designed to help RNNs capture long-term dependencies (e.g. stacking RNNs or using different architectures). I think the authors could make a more substantive argument why this would be the case in the introduction, but they do a nice job of situating their work in the context of the present literature.\", \"novelty\": \"The proposed method is not very original, since augmenting RNNs with auxiliary losses in order to better capture long-term dependencies has been used in many previous papers. The authors mention some of these papers in the related work section.\", \"clarity\": \"The paper's description of the proposed method is well-written. Some parts of the experiment section could be made clearer.\\n-- I encourage the authors to invent a new acronym to refer to \\\"our model\\\" (perhaps aux-HRNN?). In the description of the mr-HRNN (pg. 5), I find the sentence \\\"trained using only as much memory as our model requires for training\\\" confusing. I initially thought our model referred to the mr-HRNN in the setence.\\n-- Training settings (e.g. the number of ticks of the upper RNN) should be described at the beginning of each section. \\n-- A seeming contradiction is made when discussing the results in 4.3. First, it said that because short term dependencies dominate long term dependencies it is expected that the proposed method will suffer greatly (pg. 6, bottom). In the next paragraph, it is claimed that all three models perform similarly due to the same reason. Which is it?\", \"supporting_evidence\": \"The claim is empirical and the supporting evidence is experimental. As such, I find the comprehensiveness of the experiments wanting. 
There are several ways the experiments could be improved.\\n-- Results for each \\\\beta value should be included, to see how placing increasing significance on the auxiliary loss impacts the results.\\n-- Include all relevant details necessary to reproduce the results, such as the length of training or the stopping criterion used. \\n-- Additional results when varying the number of ticks.\\n-- More results with deeper hierarchies, since the ability to capture salient information at different levels of coarseness is a key selling point of HRNNs. \\n-- Results on larger-scale tasks besides character-level language modelling on Penn TreeBank.\", \"other_comments\": \"-- In the intro, I think some mention of parallel architectures such as transformers or convolutional architectures is warranted here, since parallelizability of training is a significant reason why these architectures are becoming preferred over RNNs.\\n-- Citations are mishandled throughout the paper. Citations should be enclosed in parentheses unless used as a subject in the sentence (e.g. \\\"Sordoni et al. make the case that...\\\"). There is no need to refer to a citation twice in a sentence, like you do in \\\"More recently, Koutnik et al. introduced the Clockwork RNN Koutnik et al. (2014)...\\\"\\n-- I don't understand why the permuted accuracy of the gr-HRNN is so much higher than the non-permuted accuracy. One possible explanation is that the important pixels ended up at the end in each of the three trials, hence the gr-HRNN did not have to remember much information from the past. This should be addressed in the paper. \\n-- I would welcome some theoretical analysis as to why replacing the gradient path with this particular auxiliary loss does not impact results. I also think some discussion of what this implies about what HRNNs are actually doing might be nice as well.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The proposed work investigates the problem of learning hierarchy in RNNs. The authors note that different layers of the hierarchy are trained in \\\"sync\\\". The paper suggests decoupling the different layers of the hierarchy using auxiliary losses. The auxiliary losses used in the paper take the form of local losses, where a decoder is used to decode past inputs to each level from the hidden state that is sent up the hierarchy, thereby forcing this hidden state to contain all relevant information.\\n\\nClarity of the paper: The paper is clearly written.\\n\\nMethod: The proposed method ignores the gradients from higher to lower levels in the backward pass (because of this, the authors can also save some memory). In order to compensate for the lost gradients, the authors propose to use local losses, introducing an auxiliary loss term to force this hidden state to contain all information about the last k inputs. The authors note that the hidden state from the lower level (to the higher level) should contain a summary of the past, and hence use a decoder network (which is simply parameterized as a feedforward network) to decode a \\\"past\\\" hidden state.\\n\\nRelated work section: The related work section is nicely written. The authors have covered mostly everything. These 3 papers may still be relevant. 
(a), (b), (c). (b) could be relevant for mitigating the parameter update lock problem, as mentioned by the authors in the introduction of the paper. (c) is also relevant, as its authors also consider using auxiliary losses for learning long-term dependencies.\\n(a) SkipRNN: https://arxiv.org/abs/1708.06834\\n(b) Sparse Attentive Backtracking: http://papers.nips.cc/paper/7991-sparse-attentive-backtracking-temporal-credit-assignment-through-reminding\\n(c) Learning long term dependencies in RNNs using auxiliary losses: https://arxiv.org/abs/1803.00144\", \"experiment_section\": \"In order to validate the proposed method, the authors evaluate it on the copying task, pixel MNIST classification, permuted pixel MNIST classification, and character-level language modeling.\\na) Copying: results show that the decoder network is essential to achieve decent results. This task, though, does not show the strength of the proposed method, as the baseline also solves the problem completely. It might be interesting to scale the \\\"gap\\\" time in the copying task to something larger, like T = 1000.\\nb) Pixel MNIST classification: The authors use the pixel-by-pixel classification task to test the proposed method. Here, the proposed method performs comparably to the hierarchical RNN (but without using too much memory).\\nc) Character-level modelling: The authors demonstrate the performance of the proposed method on a language modelling task (PTB). These results are not particularly interesting, as the performance gain is very marginal. Also, using other language modelling datasets like WikiText-103 or Text8 might be more useful. As for the results, even an unregularized LSTM performs better than the baseline in this paper. (For reference, see https://arxiv.org/abs/1606.01305)\", \"what_authors_can_do_to_improve_paper\": [\"The problem considered in the proposed paper is very interesting to me, though the results are not (yet) convincing. It might be interesting to think about a task with really long-term dependencies, like reading a CIFAR10 image pixel by pixel and then doing classification, where the authors can actually show the promise of the proposed method.\", \"It might also be interesting to know how the original training objective is weighed against the auxiliary loss. Have the authors tried any search over what kind of auxiliary loss performs well?\"]}
The authors mentioned that TBPTT is not memory efficient; this is not very clear to me, as it only needs to keep the number of truncation steps that it backprops through and is hence much more memory efficient compared to full BPTT.\\n\\n3. It is not clear to me what the benefit of gr-HMRNN is. It is not clear why cutting off the gradients from the higher level to the lower level would help.\\n\\n4. It is surprising to me that HMRNN could only solve the copy task up to a length of 108. \\n\\n5. I would also suggest another copy task from Hochreiter, Sepp and Schmidhuber, J\\u00fcrgen. Long short-term memory. Neural computation, 9(8): 1735\\u20131780, 1997.\\n\\nIn general, the paper seems to have been written in a rush. I would recommend that the paper be revised.\"}" ] }
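To fix ideas on the locally computable loss discussed throughout these reviews, here is a hedged sketch of a feedforward decoder that reconstructs the last k inputs from the hidden state a lower level sends up the hierarchy; the dimensions and the mean-squared-error choice are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class LocalAuxLoss(nn.Module):
    # Decodes the last k inputs from the hidden state passed to the next
    # hierarchy level, so the state must summarise the recent past even
    # though gradients from higher levels are not propagated back down.
    def __init__(self, hidden_dim, input_dim, k):
        super().__init__()
        self.k, self.input_dim = k, input_dim
        self.decoder = nn.Linear(hidden_dim, k * input_dim)

    def forward(self, hidden, past_inputs):
        # hidden: (batch, hidden_dim); past_inputs: (batch, k, input_dim)
        pred = self.decoder(hidden).view(-1, self.k, self.input_dim)
        return ((pred - past_inputs) ** 2).mean()
```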
HJe7bxBYvr
Avoiding Negative Side-Effects and Promoting Safe Exploration with Imaginative Planning
[ "Dhruv Ramani", "Benjamin Eysenbach" ]
With the recent proliferation of the usage of reinforcement learning (RL) agents for solving real-world tasks, safety emerges as a necessary ingredient for their successful application. In this paper, we focus on ensuring the safety of the agent while making sure that the agent does not cause any unnecessary disruptions to its environment. The current approaches to this problem, such as manually constraining the agent or adding a safety penalty to the reward function, can introduce bad incentives. In complex domains, these approaches are simply intractable, as they require knowing apriori all the possible unsafe scenarios an agent could encounter. We propose a model-based approach to safety that allows the agent to look into the future and be aware of the future consequences of its actions. We learn the transition dynamics of the environment and generate a directed graph called the imaginative module. This graph encapsulates all possible trajectories that can be followed by the agent, allowing the agent to efficiently traverse through the imagined environment without ever taking any action in reality. A baseline state, which can either represent a safe or an unsafe state (based on whichever is easier to define) is taken as a human input, and the imaginative module is used to predict whether the current actions of the agent can cause it to end up in dangerous states in the future. Our imaginative module can be seen as a ``plug-and-play'' approach to ensuring safety, as it is compatible with any existing RL algorithm and any task with discrete action space. Our method induces the agent to act safely while learning to solve the task. We experimentally validate our proposal on two gridworld environments and a self-driving car simulator, demonstrating that our approach to safety visits unsafe states significantly less frequently than a baseline.
[ "Reinforcement Learning", "AI-Safety", "Model-Based Reinforcement Learning", "Safe-Exploration" ]
Reject
https://openreview.net/pdf?id=HJe7bxBYvr
https://openreview.net/forum?id=HJe7bxBYvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "p4aynpOYq-", "rygzKGNTtH", "HkxTiwWTYB", "HkgkkpsmtB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741363, 1571795578469, 1571784612930, 1571171543138 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2130/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2130/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2130/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper tackles the problem of safe exploration in RL. The proposed approach uses an imaginative module to construct a connectivity graph between all states using forward predictions. The idea then consists in using this graph to plan a trajectory which avoids states labelled as \\\"unsafe\\\".\\n\\nSeveral concerns were raised and the authors did not provide any rebuttal. A major point is that the assumption that the approach has access to what are unsafe states, which is either unreasonable in practice or makes the problem much simpler. Another major point is the uniform data collection about every state-action pairs. This can be really unsafe and defeats the purpose of safe exploration following this phase. These questions may be due to a miscomprehension, indicating that the paper should be clarified, as demanded by reviewers. Finally, the experiments would benefit from additional details in order to be correctly understood.\\n\\nAll reviewers agree that this paper should be rejected. Hence, I recommend reject.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to use learned transition models to do two separate things: (i) avoid unsafe states and (ii) allow an alternative channel for task reward specification. The idea is to create a comprehensive connectivity graph of the states in the environment. Once done, an agent can avoid unsafe states by avoiding states that are unconnected to a specified safe state. A practitioner might also specify safe/unsafe states as an additional source of information about the reward.\\n\\nThis paper suffers from poor and loose writing, incomplete specification of its experiments, unrealistic assumptions during evaluation (Sec 5.3 \\\"we create the graph using rollouts from the actual environment\\\" to avoid errors from learning a transition model).\", \"the_paper_does_not_address_basic_concerns_with_its_approach\": \"how is the model to be learned at all, if it is to be comprehensive in the way that is necessary for the connectivity graph (which this paper calls an \\\"imaginative module\\\")? The authors say this is done through multiple agents performing random actions in the environment, in which case, isn't this extremely unsafe training time by the paper's own definition of safe exploration?\\n\\nFurther, creating a complete connectivity graph is unrealistic even for fully known transition models in most reasonably complex settings, such as, say, Go or Chess. \\n\\nIf the transition model is fully known as in the car racing setting, why not directly use that to plan and solve the game?\\n\\nExperiments show fewer \\\"unsafe\\\" states for the paper's approach compared to a method that has no way to know that those states are unsafe. 
How is this a reasonable validation, especially when the transition model is fully known? Also, this is an insufficient metric by itself as it says nothing about whether the method actually performed well at the task.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a model-based approach to safety in RL, where the agent uses a transition model to plan ahead to avoid actions that can lead it to unsafe states. They call the planning component an imaginative module. The agent takes a baseline state as input, which can be used to define either a safe or an unsafe state and is used in the planning component. The authors claim that using these two techniques they can tackle both safe exploration (not violating safety constraints during learning) and irreversible side-effects (unintended irreversible behavior due to a poorly designed reward function). They validate their approach on two grid world environments and a self-driving car simulator.\\n \\nThis paper should be rejected because the assumptions it makes go against the very task it is trying to solve; in essence, the task is trivial given these assumptions. \\n\\n1) The inconsistent assumption regarding the access to trajectories to learn a model. \\nThe authors start with the assumption that the agent does not have access to the model (Sec 3), and they explicitly learn the model. However, in the very next section (Sec 4), the authors assume that they can deploy a number of agents that interact with the environment randomly and collect that data to learn a complete transition model. Note that this assumption is wrong because:\\nIf the random data agents are \\u201csafe\\u201d, i.e., don\\u2019t violate any safety constraint or cause any harmful behavior in the environment, then it is equivalent to assuming the agent has access to all the data to learn the model. This is a very big assumption that essentially says the agent has access to the model, which defeats the purpose of the safe-exploration problem. \\nIf the random agents are \\u201cunsafe\\u201d, i.e., they can violate the safety constraint, then it goes against the very claim made about their method being able to respect the constraints throughout the learning process. \\n\\n2) The assumption about the baseline state(s). \\nThis is also a pretty big assumption to have, which is not acknowledged in the paper. If the agent already has the set of all the states it needs to avoid (or the set of states that are safe), then along with the assumption regarding access to the model, solving reversibility is a significantly easier task than the general safe exploration problem [1, 2].\\n\\n3) The results reported in Figure 4 are not statistically significant. The experiments are only run over 3 random seeds [3]. \\n\\n4) Can you give a few more details about the assumptions, in terms of how realistic they are or how essential they are to the method?\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"- The negative side-effects problem that this work addresses is based only on a reversibility criterion. \\n- The claim that learning the dynamics model is sample-efficient is unsupported.\", \"references\": \"[1] Berkenkamp, Felix, et al. 
\\\"Safe model-based reinforcement learning with stability guarantees.\\\" Advances in neural information processing systems. 2017.\\n\\n[2] Dalal, Gal, et al. \\\"Safe exploration in continuous action spaces.\\\" arXiv preprint arXiv:1801.08757 (2018).\\n\\n[3] Henderson, Peter, et al. \\\"Deep reinforcement learning that matters.\\\" Thirty-Second AAAI Conference on Artificial Intelligence. 2018.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Thie paper proposes using an \\\"imagination\\\" module to provide safe exploration during RL learning. The imagination module is used to perform forward predictions, constructing a graph between possible states. If any action would lead to a \\\"base state\\\" that is an unsafe state that action will not be executed and another \\\"safe\\\" action is selected from the policy.\", \"i_have_a_number_of_comments_and_questions_after_reading_the_paper\": [\"How do you get the forward model to be usably accurate? You do say that the model is a CNN model and is shared to learn the reward function as well. In the paper it says your method will lead to the agent never reaching an unsafe state, do you train the network in some way to make sure it does not make an inaccurate prediction around the unsafe state?\", \"There is a lot of repetitive content in the paper that can be discarded to condense down the paper and make it more readable.\", \"It would be nice to see more tasks or at least one that was more realistic... The tasks used in the paper appear to be common ones but they still feel rather artificial. Also, it seems in the paper there are only learning curves for 2 of the 3 tasks.\", \"In the related work you say \\\"However, these methods are difficult to quantify and depend a lot on the diversity of settings in which they are performed.\\\" Can you expand on this? Why do they depend on these issues so much? The motivation for your method stems from these methods not being good enough, so further detail on this facet is important.\", \"If you are using environments with discrete actions and performing prediction I am not sure if that can be called imaginative. Rather it should be called sampling. The forward model does not even appear to be stochastic.\", \"The description of the imagination module training is not very clear. Is it trained on 5000 tuples or is the network trained for 5000 updates? There needs to be much more detail on this process.\", \"Overall the method seems interesting but does not appear to be a significant improvement.\"]}" ] }
Hkem-lrtvH
BayesOpt Adversarial Attack
[ "Binxin Ru", "Adam Cobb", "Arno Blaas", "Yarin Gal" ]
Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input. Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries. Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost. We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction. We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.
[ "Black-box Adversarial Attack", "Bayesian Optimisation", "Gaussian Process" ]
Accept (Poster)
https://openreview.net/pdf?id=Hkem-lrtvH
https://openreview.net/forum?id=Hkem-lrtvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "P4HBDZALQ", "SJlXnJtvjS", "BJep-hOPjB", "BJg3u7uDiS", "S1lJ4g_voS", "Bkxnc7g79r", "rye_muURtr", "HJx2OHC2tS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741333, 1573519275218, 1573518341499, 1573516148502, 1573515303228, 1572172692177, 1571870752105, 1571771764420 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2129/Authors" ], [ "ICLR.cc/2020/Conference/Paper2129/Authors" ], [ "ICLR.cc/2020/Conference/Paper2129/Authors" ], [ "ICLR.cc/2020/Conference/Paper2129/Authors" ], [ "ICLR.cc/2020/Conference/Paper2129/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2129/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2129/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a query-efficient black-box attack that uses Bayesian optimization in combination with Bayesian model selection to optimize over the adversarial perturbation and the optimal degree of search space dimension reduction. The method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks. The paper should be further improved in the final version (e.g., including more results on ImageNet data).\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply to Reviewer #3\", \"comment\": \"We thank the reviewer for his insightful comments. We address the concerns below:\\n\\n1. \\\"Need a discussion on the surprising phenomenon that vanilla GP-BO can work at all for such problem of extraordinarily high dimensionality.\\\"\\nPlease refer to our reply to all Reviewers for the detailed discussion.\\n\\n2. \\\"a lack of results on ImageNet images, which is much harder for general purpose blackbox optimizers given the initial dimensionality of those images is ~150000\\\"\\nTo verify the feasibility/applicability of using our BayesOpt methods to perform \\\"targeted\\\" attacks on ImageNet, we select 50 correctly classified images from the ImageNet test data and perform random targeted attacks with a query budget of 2000. We found that direct application of the BayesOpt attacks on the ImageNet image to do targeted attack rarely work due to the extremely high dimensionality of search space. However, we experimented a hierarchical decoding process: \\n a) first performance BayesOpt (ADDGP-BO) on a reduced dimension of d^r_1=48x48x3 or perform GP-BO-auto-dr by learning the optimal reduced dimension in the range up to d^r_1=48x48x3 and then \\n 2) decode the adversarial perturbation found in d^r_1 to d^r_2=96x96x3 via bilinear upsampling. \\n 3) This is followed by another bilinear decoder projecting the adversarial perturbation in d^r_2 back to image dimension of d=299x299x3. \\nSuch hierarchical decoding leads to an ASR of 60% by ADD-GP-BO and an ASR of 32% by GP-BO-auto-dr, which are higher than the ASR of 12% by GenAttack on the same image-target pairs with the same upsampling.\\n\\nWe conduct further experiments on for our ADDGP-BO and GenAttack. ADDGP-BO achieves 60% ASR within 1985 queries but GenAttack takes 4711 (2.4 times more) queries to achieve the same ASR (See Section E in the Appendix). We will update the paper with more experimental results on ImageNet. 
\\n\\nIn addition, another ICLR 2020 submission titled \\\"Black-box Adversarial Attacks with Bayesian Optimization\\\" has empirically demonstrated the superior query efficiency of vanilla GP-BO on the ImageNet dataset in the \\\"untargeted\\\" attack setting.\\n\\n3. \\\"\\u2026 missing related literature studies including the QL Attack (Ilyas et al., 2018), Bandits-TD (Ilyas et al., 2019)...\\\"\\nWe thank the reviewer for the additional references. Bandits-TD (Ilyas et al., 2019) focuses on the simpler case of untargeted attacks, and another ICLR 2020 submission titled Black-box Adversarial Attacks with Bayesian Optimization has demonstrated on the ImageNet dataset that their simple GP-based BO attack, together with upsampling (which is almost the same as the GP-BO baseline in our paper), can achieve a higher attack success rate than Bandits-TD and the Parsimonious attack (Moon et al., 2019) under a small query budget of 200 for the \\\"untargeted\\\" attack setting. Du et al. (2019) (https://arxiv.org/abs/1906.02398) have shown that our baseline method, AutoZOOM, is more query efficient than Bandits-TD for MNIST, CIFAR10 and tiny ImageNet. We didn\\u2019t compare against QL Attack (Ilyas et al., 2018) because two of our baseline methods, GenAttack and AutoZOOM, had been shown to be more query efficient than QL Attack in (Alzantot et al., 2018). In addition, both QL Attack and Bandits-TD require gradient estimation while our proposed method doesn\\u2019t.\\n\\n4. \\\"\\u2026 additional details on learning the decomposition for the additive GP surrogate.... the additive structure in Kandasamy et al., 2015 usually needs to be learned through Metropolis-Hastings or Gibbs sampling.\\\"\\nWe follow the approach proposed in (Kandasamy et al., 2015) to treat the decomposition as an additional hyperparameter and learn the optimal decomposition by maximising the marginal likelihood. However, exhaustive search over all possible (d!/(M!(d_s!)^M)) decompositions (i.e. decomposing the d-dimensional space into M subspaces of d_s dimensions) is expensive. We adopt a computationally cheap alternative by randomly selecting 20 decompositions and choosing the one with the largest marginal likelihood. The decomposition learning procedure is repeated every 40 BO iterations. As the reviewer mentioned, sophisticated sampling procedures can also be used to learn the decomposition and usually lead to better performance. However, they are computationally much more expensive than the maximum marginal likelihood approach.\\n\\nWe verified the effectiveness of our way of learning the decomposition by testing another alternative way to learn the decomposition; pixels are grouped together if the magnitudes of the changes in their pixel values over iterations are close. This is similar to importance sampling in ZOO. Such pixel-value-change-based decomposition learning gives a lower attack success rate than our approach of learning the decomposition via marginal likelihood. We have added this comparison as Section D in the Appendix.\"}", "{\"title\": \"Reply to Reviewer #1\", \"comment\": \"We thank the reviewer for his helpful comments. We address the concerns below:\\n\\n1. \\u201cDifference to https://arxiv.org/pdf/1907.11684.pdf\\u201d\\nWe thank the reviewer for pointing out this additional reference (Zhao et al., 2019). We have cited it and highlighted the differences in Section 2. 
The method proposed in that work, BO-ADMM, is effectively similar to our vanilla GP-BO but without the use of a decoder to reduce the search space dimension. We propose improvements tailored to the commonly high-dimensional nature of adversarial attack problems, thus further enhancing the efficiency of BayesOpt attacks for such applications. Specifically, the key differences between our work and their work are: \\n a) BO-ADMM applies GP-based BO directly in the image-dimensional space to minimise the joint objective of attack loss and distortion loss. This makes the problem much harder for BO and leads to low-quality adversarial examples (mean L_{\\\\inf}=0.62 for CIFAR10). Our BayesOpt attacks minimise the attack loss under the L_{\\\\inf}-norm constraint, and use a decoder to reduce the search to a low-dimensional latent space, making the problem more amenable to GP-based BO. As a result, even our vanilla GP-BO can find adversarial examples with much smaller distortion (mean L_{\\\\inf} = 0.028 for CIFAR10).\\n b) The method proposed in their work, BO-ADMM, only uses the simple Gaussian process as the BO surrogate. However, we propose a method to explicitly handle the high-dimensional search space effectively and build query-efficient attacks by using an additive GP surrogate. This allows us to further decompose the reduced latent search space into low-dimensional subspaces. As shown in Table 2 of Section 5.2, our ADDGP-BO attack achieves a much higher success rate (14% higher) given the same query budget compared to previous data-efficient approaches (GP-BO).\\n c) We further propose the use of a Bayesian model selection method to learn the optimal dimensionality of the latent space in the process of optimisation/on the fly. Such Bayesian learning of the reduced dimensionality integrates naturally with a BayesOpt attack by employing the statistical surrogate but can also be applied independently with other adversarial attack methods to decide the reduced dimensionality in a principled way. We further demonstrate its effectiveness in our experiments. As shown in Table 2 of Section 5.2, it leads to a 15% increase in ASR for GP-BO.\\n\\n2. \\u201cWhat\\u2019s the acquisition function and its hyperparameter? What decoder is used? The importance of the decoder?\\u201d\\nWe use the UCB and set the exploration-exploitation parameter to be a constant of 2, which is adopted in BO packages such as GPyOpt. We briefly explored the use of different acquisition functions such as EI and found similar performance. We have added these clarifications in our paper. \\n\\nWe adopt bilinear interpolation as the decoder. Please refer to Response 3 to Reviewer #2 for more details.\\n\\nThe use of the decoder is essential for the query efficiency of BayesOpt attacks because it helps reduce the BO search space significantly. For example, as shown in Table 3 in Section B of the Appendix for MNIST, the use of the decoder reduces the median query count for GP-BO from 201 (d=28x28x1) to 53 (d^r=14x14x1) to achieve a comparable attack success rate.\\n\\n3. \\u201cthe number of tested images is not sufficient\\u201d\\nWe conducted more experiments on 50 random CIFAR-10 images. Please refer to Response 2 to Reviewer #2.\\n\\n4. \\u201cIn Table 1, what does 0,0,0, mean in ZOO?\\u201d\\nSorry for the confusion. 
This means ZOO succeeds in attacking the 2 simplest image-target pairs at its first batch (batch size of 128) of adversarial perturbations but fails to make successful attacks on the other cases under the budget constraints.\\n\\n5. \\u201cBO is not computationally efficient\\u201d \\nWe update the GP hyperparameters every 5 iterations and relearn the reduced dimension or the additive decomposition every 40 iterations to reduce the computational cost. BO algorithms are indeed computationally more expensive than most adversarial methods. That\\u2019s why, as highlighted in the introduction, we focus on the adversarial setting where the cost of evaluating the target model, be it the monetary costs, computational costs or the risk of being detected, is much higher than the computational cost of the BO algorithm itself and thus query efficiency is highly prioritised. \\n\\n6. \\u201cplot the attack loss value vs BO iterations... Clarification on Figure 3.\\u201d\\nWe have added plots of the objective value (the negative of the loss) against BayesOpt iterations (query count) for various BayesOpt methods in Section C of the Appendix. They show the case of attacking a CIFAR10 image of label class 9 and how our proposed modifications (additive GP and Bayesian learning of d^r) can lead to better convergence. Figure 3 in the paper shows the ASR up to the current query counts. We have clarified this in the Figure caption.\"}", "{\"title\": \"Reply to Reviewer #2\", \"comment\": \"We thank the reviewer for the positive feedback and would like to address the issues raised.\\n\\n1. \\\"Given that the typical dimensionality for BayesOpt is d <= 20, how are the experiments with dimensions up to 14x14x3 provided for GP-BO and GP-BO-auto-dr performed?\\\"\\nWe use a GP kernel without ARD and learn the GP hyperparameters every 5 BO iterations. The optimal reduced dimension d^r is updated every 40 iterations. Please refer to our reply to all Reviewers for a more detailed discussion.\\n\\n2. \\\"The image selection protocol does not correspond to the Tu et al. protocol which selects 50 random images from CIFAR-10.\\\"\\nWe conducted more experiments by selecting 50 random images from CIFAR-10 and attacking each image on the other 9 classes except its original class. We have updated Table 2 and Figure 3 in Section 5.2 with new CIFAR10 results. Note that the relative ranking among different methods is the same as the original results in the paper and the magnitude of improvement in query efficiency and L_2 norm by BayesOpt attacks over competing methods also remains highly similar to, if not the same as, the original results presented.\\n\\n3. \\\"The experiments lack some details: which is the decoder used for dimensionality reduction?\\\"\\nAs stated in the first paragraph of Section 4.1, we adopt bilinear interpolation as the decoder, which is used in GenAttack (Alzantot et al., 2018) and Auto-ZOOM (Tu et al., 2018). This is to ensure fair comparison. However, the approach can be combined with different decoder types.\"}", "{\"title\": \"Reply to all Reviewers\", \"comment\": \"We thank all the reviewers for their valuable comments and hope our responses address the issues raised. 
Following the reviewers\\u2019 suggestion, we have added results on 50 random CIFAR10 images (Response 2 to Reviewer#2, Table 2 and Figure 3 in our paper Section 5.2), and further verified the feasibility of our proposed BayesOpt attacks on ImageNet data (Response 2 to Reviewer#3, Section E in the Appendix of our paper).\\n\\nBoth Reviewer#2 and #3 find it surprising that Bayesian optimisation (BO) based on Gaussian processes (GPs) can handle such a high-dimensional (d^r=14x14x3 even after dimensionality reduction) adversarial attack application. We\\u2019d like to take this space to elaborate on our thoughts on this: \\n\\nIn general, there is no strong reason why GP-based BO cannot handle high-dimensional problems. The performance of GPs depends on the kernel used and the complexity of the objective function. The 'high dimension' itself doesn't really mean much because for many applications, the effective dimension may be small (only a small number of input dimensions have a significant impact on the objective function) (Chen et al., 2012; Wang et al., 2016b; Munteanu and Nayebi et al., 2019). The issue arises when there are complex dependencies among all the dimensions. Then simple kernels like RBF will fail while more complicated kernels such as deep neural network kernels (Wilson et al., 2016) will be costly to optimise and/or require more query data to train. The performances of GP-BOs in our paper, as well as in BO-ADMM (Zhao et al., 2019) kindly raised by Reviewer #1, suggest that the objective function in the adversarial problem, even though it is very high-dimensional, probably doesn't depend on a complex combination of the input dimensions. \\n\\nMoreover, the reasons why GP-based BO is considered undesirable for high-dimensional problems are usually:\\n a) GP hyperparameter optimisation becomes high-dimensional and thus challenging, if we use automatic relevance determination (ARD). In our work, we turn off the ARD for our GP kernel. Although this compromises the expressiveness of our surrogate, it significantly eases the GP hyperparameter learning. As for our ADDGP-BO variant, we decompose the original search space of d^r=14x14x3 into 12 subspaces of d_s=49 and assign a different length-scale for each subspace. This contributes to more expressive modelling of the underlying function and thus a higher attack success rate than vanilla GP-BO. \\n b) For high-dimensional search spaces, we usually require a large number of query data (N) to build an accurate model. Both of our contributions, the ADDGP-BO attack and the Bayesian learning of the optimal reduced dimension, aim to alleviate this effect. By assuming the additive structure $y = \\\\sum_{j=1}^{M} y^j$ and decomposing the latent search space into disjoint subspaces, we can model each subspace with a separate GP and better model the overall objective y with fewer query data. As for the online learning of the optimal reduced dimension, it gives us the chance to discover an effective latent space of dimension lower than 14x14x3 in the process of optimisation so that we can get a more accurate GP model given a limited query number. \\n c) The computational complexity of GPs scales cubically with N. In our paper, we focus on the attack query-efficiency and limit the query counts to a reasonable level of 1000, which standard GP can still handle. However, if we want to extend to the case with an excessively large query budget, we need to resort to sparse GP methods to achieve the scalability. Moreover, we thank Reviewer #3 for the suggestion of using GPU acceleration (e.g. 
GPyTorch) to reduce the computation expense of GPs and sparse GPs.\\n\\nAnother possible reason why the GP-BO attack works is that the GP surrogate allows us to better exploit all the previous query information to infer the attack patterns. To briefly assess the quality of the GP surrogate, we actually explored another BO surrogate option, kernel density estimator/tree parzen estimators (Bergstra et al, 2013), which is supposed to better handle high-dimensional data and scales linearly with N. However, we found its performance is significantly worse than that of GP. We hypothesise that the good uncertainty estimation provided by GP is quite useful to finding the adversarial example.\\n\\nFinally, it could be that the adversarial problem is actually a not-too-difficult optimisation problem as it doesn\\u2019t necessitate a global optimum to make a successful attack. In most of cases, a good local optimum is enough. As we can see that many gradient-based adversarial attacks can still get high attack success rates, despite that their gradient estimation may not be accurate and gradient descent can easily lead to a local optimum. Therefore, GP-BO, whose optimisation ability may be compromised by the high-dimensionality of the search space, is still able to find the successful adversarial perturbations.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper proposes a black-box attack method that optimises both the adversarial perturbation and the optimal dimensionaity reduction in a Bayesian Optimization framework. The formulation seem sound and the experiments show improvements wrt competitors in terms of performance and query efficiency with comparable attack success rates.\", \"In section 4.3 the authors claim that the additive surrogate makes the GP-based BO able to deal with the problem of high dimensionality. Given that the typical dimensionality for BayesOpt is d <= 20, how are the experiments with dimensions up to 14x14x3 provided for GP-BO and GP-BO-auto-dT performed?\", \"The image selection protocol seems arbitrary and it does not correspond to the Tu et al. protocol which selects 50 random images from CIFAR-100 and MNIST.\", \"I feel the experiments lack some details: which is the decoder used for dimensionality reduction?\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studied the problem of black-box adversarial attack generation by leveraging Bayesian optimization (BO).\", \"merits_of_this_paper\": \"1) The combination of BO and dimension reduction, which makes BO more efficient under a low-dimensional space. \\n2) Good experiment results.\\n\\nComments/questions about this paper:\\n\\n1) Comment on \\\"Finally, to the best of our knowledge, the only prior work that uses Bayesian optimisation is a workshop paper by...\\\". BO was also used for generating black-box adversarial examples at https://arxiv.org/pdf/1907.11684.pdf\\nThis is a missing related work, and please elaborate on the differences. \\n\\n2) The presentation of the proposed algorithm is not clear. 
Please explicitly state the acquisition function. And how to tune the hyperparameter in the acquisition function? What decoder is used in experiments? Have authors tested the sensitivity of the decoder (not reduced dimension)? \\n\\n3) In experiments, the authors mentioned \\\"we randomly select 3 correctly classified images for each class from CIFAR10 test data which sums up to 27 CIFAR10 images, and randomly select 7 correctly classified images from MNIST test data.\\\"\\nI feel that the number of tested images is not sufficient. How about conducting experiments on a large number of tested images for untargeted attack?\\n\\n4) In Table 1, what does 0,0,0, mean in ZOO? \\n\\n5) It is known that BO has itself parameter to tune, and is not computationally efficient. It might be good to show the computation efficiency of BO for different reduced dimensions together with the corresponding attack performance. \\n\\n6) The convergence of BO is usually not stable. However, Figure 3 shows that BO converges very smoothly in terms of ASR. Could authors also show the loss value of using BO-attacks against iteration numbers? \\nMeanwhile, in Figure 3 is the best ASR (up to the current query counts) reported or the ASR at the current query number?\\n\\n\\nBased on the aforementioned questions, my initial rating is weak reject. \\n\\n\\n############## Post-feedback ################\\nThanks for the response. Most of my questions have been addressed. Thus, I increased my score to 6. \\nI suggested to have a clearer presentation on the possible pros and cons of BO in attack generation, e.g., making a comparison between BO and other methods in both query efficiency and computation efficiency.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors propose to use Bayesian optimization with a GP surrogate for adversarial image generation. In addition to the standard BayesOpt algorithm, the authors use a variant that exploits additive structure, as well as a variant that uses Bayesian model selection to determine an optimal dimensionality reduction.\\n\\nFor the experimental results, I find it extremely surprising that vanilla GP-BO works at all, even downsampling e.g. to d=588 (Table 2). This is extraordinarily high dimensionality for vanilla BayesOpt, and conventional wisdom suggests that this should not work at all. I'd like to see a discussion of this, particularly as I've seen unsuccessful attempts at this in the past. What differentiating factors lead to it working here? The set of images considered is quite small, presumably because of the rather extreme wall clock expense of running hundreds of sequential BayesOpt iterations without GPU acceleration. This is particularly true for methods that require Bayesian model selection and therefore training multiple GPs in each iteration of BayesOpt.\\n\\nAlong the same lines of dimensionality concerns, I would view a lack of results on ImageNet images as a significant weakness, particularly as these are probably much harder for general purpose blackbox optimizers, as the initial dimensionality of those images is ~150000. 
A decent amount of missing related literature studies transformations of ImageNet images, including the QL Attack (Ilyas et al., 2018), Bandits-TD (Ilyas et al., 2019) and others. These papers also focus specifically on query budget, so it would be hard to claim that BayesOpt is SOTA if it can't scale to images this large.\\n\\nCan you provide additional details on the learning mechanism for the additive decomposition? Are you learning kernel outputscales for different predefined additive components as in Duvenaud et al., 2011? Note that this is a fairly different structure than considered in Kandasamy et al., 2015 (despite both being called \\\"additive GPs\\\") -- the type of additive structure in Kandasamy et al., 2015 usually needs to be learned through approximate model selection mechanisms (usually via Metropolis-Hastings or Gibbs sampling).\"}" ] }
SyeMblBtwr
CrossNorm: On Normalization for Off-Policy Reinforcement Learning
[ "Aditya Bhatt", "Max Argus", "Artemij Amiranashvili", "Thomas Brox" ]
Off-policy temporal difference (TD) methods are a powerful class of reinforcement learning (RL) algorithms. Intriguingly, deep off-policy TD algorithms are not commonly used in combination with feature normalization techniques, despite positive effects of normalization in other domains. We show that naive application of existing normalization techniques is indeed not effective, but that well-designed normalization improves optimization stability and removes the necessity of target networks. In particular, we introduce a normalization based on a mixture of on- and off-policy transitions, which we call cross-normalization. It can be regarded as an extension of batch normalization that re-centers data for two different distributions, as present in off-policy learning. Applied to DDPG and TD3, cross-normalization improves over the state of the art across a range of MuJoCo benchmark tasks.
[ "RL", "Normalization" ]
Reject
https://openreview.net/pdf?id=SyeMblBtwr
https://openreview.net/forum?id=SyeMblBtwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "lDXI0VGDVg", "HkeMrgMqoS", "rJlRvAZcir", "S1ghBRZ5or", "B1gHVaZ5iH", "Skg1CMy0tS", "SylC_-5aFB", "BJemg8D6Yr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741303, 1573687353994, 1573686886109, 1573686851935, 1573686573322, 1571840711161, 1571819894061, 1571808747340 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2128/Authors" ], [ "ICLR.cc/2020/Conference/Paper2128/Authors" ], [ "ICLR.cc/2020/Conference/Paper2128/Authors" ], [ "ICLR.cc/2020/Conference/Paper2128/Authors" ], [ "ICLR.cc/2020/Conference/Paper2128/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2128/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2128/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This is certainly a boarderline paper. The reviewers agreed this paper provides a good explanation and empirical justification of why popular normalization schemes don't help in DRL. The paper then proposes a simple scheme and demonstrates how it improves learning in several domains. The main concerns are the nature of these gains and how broadly useful the new approach is. In many cases there appear to be somewhat clear wins in the middle of the learning curves, but by the end of each experiment the errorbars overlap. The most clear results are those with TD3. There are some oddities here: using half SD error bars and smoothing---both underline the concern about significance.\", \"the_reviewers_requested_more_experiments_and_the_authors_provided_three_more_domains\": \"two in which their method appears better. These are not widely used benchmarks and it was hard to compare the performance of the baselines with fan et al (different setup) to evaluate the claims. The paper nicely provides lots of insight and empirical wisdom in the appendix, explaining how they got the algorithms to perform well.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their valuable comments and useful suggestions.\\n\\n> This paper only shows benefit 4 tasks in the MoJoCo domain.\\n\\nThis is a valid concern; therefore, we have added more experiments to the paper.\\n\\n In addition to the four MuJoCo tasks and our experiments with linear function approximators we have also successfully tested CrossNorm on non-locomotion SURREAL/robotsuite tasks, and the much harder gym humanoid task. These are described in the \\\"Additional Experiments\\\" section added to the appendix. Further, we also successfully applied CrossNorm to achieve top-10 placement in the NeurIPS 2018 AI for Prosthetics challenge. (Not specified further to avoid breaking anonymity.)\\n\\n\\n> As the solution is relatively straightforward, with very restrictive applicable settings (particular normalization trick with particular function approximator). It\\u2019s less clear to me whether this provides enough contribution and inspiration to other work as a conference paper.\\n\\nWe regard the simplicity of the method as an advantage. The method is applicable to all DPG-style approaches using deep neural networks as function approximators, which is a major area of continuous-control Deep RL research. \\n\\nUsing Target Networks (TNs) in Deep RL is an empirically motivated algorithmic fix. 
It changes the original TD learning algorithm into a two-timescale algorithm; TD learning methods do not generally have polyak averaging of parameters, and it is not understood how using a TN provides stability. In fact, there is evidence that TNs do not avoid divergence, but merely delay it [1], \\n\\nPrevention of divergence without target networks has been an open problem that has seen much attention [2][3][4]. All of these approaches, while valuable and interesting, are *new algorithms* with special loss functions and update rules that do not perform the original DPG-style off-policy actor-critic training. Our demonstration, on the other hand, is significant because it is the simplest approach of them all - simply augmenting the function approximator class with BatchNorm/LayerNorm - and it works just fine with the old algorithms. We believe this is an interesting and surprising result. \\n\\nBatch normalization is not a trick but widespread in supervised learning. Being rarely used in RL, it is important to show how to apply normalization properly, as it provides substantial performance improvements. Moreover, we demonstrate that LayerNorm, which is being used by several DDPG implementations [e.g. OpenAI Baselines], allows stable training without target networks, which has not been shown before.\\n\\n[1] https://arxiv.org/abs/1812.02648\\n[2] https://arxiv.org/abs/1903.08894\\n[3] https://openreview.net/forum?id=Bk-ofQZRb \\n[4] https://www.ijcai.org/proceedings/2019/0379.pdf\\n\\n\\n> The dilemma is totally caused by that BatchNorm... It needs not to be a weakness for the algorithm itself as we appreciate simple but effective algorithm. However this makes the problem itself more like a design weakness of BatchNorm and a simple patch to fix it. I doubt how much algorithmic insight this paper could contribute, to inspire related research ...\\n\\nJoining two batches and doing a joint forward pass through BatchNorm is how we implement the alpha = 0.5 case (we describe it in the paper). However, we found that TD3 is particularly sensitive to the details of normalization (shown in Figure 3) and empirical results for alpha = .99 were better with renormalization.\", \"we_think_that_our_paper_yields_the_following_insight\": \"in the set of deep function approximators, there could exist function classes (such as those containing LayerNorm / BatchNorm layers) with which off-policy TD learning is stable under certain additional assumptions. While a theoretical proof may be difficult, this insight gives others another tool to improve stability.\\n\\n> As the paper pointed out \\u201cusing the training mode in the target calculation would result in different mean subtractions of the Q function and its target.\\u201d This means it will have a systematic difference in Q function used to compute Q(s, a) and Q(s\\u2019, a\\u2019), ... So why target network will have no problem but this will. In general, I\\u2019d like to see a more clear analysis about the dilemma of BatchNorm in off-policy data, and why the two simple ways won\\u2019t work.\\n\\nIt is true that for target networks there is a systematic difference in the Q function used to compute Q(s, a) and Q(s\\u2019, a\\u2019). This does not prevent training because this is part of the algorithm\\u2019s design. There is also a difference in Q functions as result of normalizing with different data statistics (s,a), (s\\u2019,a\\u2019), which causes more problems. 
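\\n\\nFor concreteness, the joint computation of the mixed statistics can be sketched in a few lines of NumPy; this is a minimal illustration of the idea (assuming equal batch sizes and per-feature statistics), and the function and variable names below are ours, not the exact code of the paper:\\n\\nimport numpy as np\\n\\ndef cross_norm(h, h_target, alpha=0.5, eps=1e-5):\\n    # Convex-combine the per-feature means of the (s, a) and (s', a') activations;\\n    # with equal batch sizes, alpha = 0.5 equals a joint forward pass over the concatenated batch.\\n    mu = alpha * h.mean(0) + (1 - alpha) * h_target.mean(0)\\n    # The variance is computed over the concatenated minibatch as a whole.\\n    var = np.concatenate([h, h_target], 0).var(0)\\n    norm = lambda x: (x - mu) / np.sqrt(var + eps)\\n    return norm(h), norm(h_target)\\n\\nBecause both branches are normalized with the same statistics, the normalization itself introduces no systematic offset between Q(s, a) and Q(s', a'). 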
It is difficult to compare these differences to one another because these methods are also effectively different algorithms; in fact, the Q functions *are required* to be different as part of the design of the TN algorithm.\\n\\n\\nWe hope this has addressed all of your concerns.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the positive feedback on our work.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the constructive feedback, and for pointing out that our \\u201cresults are surprisingly good when the simplicity of the algorithm is considered\\u201d.\\n\\n\\n> I think it would be much better if the paper develops some theory behind the normalization\\n\\nThere are theoretical studies and empirical studies. Both can be valuable. This paper is an empirical study. In fact, we tried to find a theoretical proof for stability in the simpler case of linear function approximators. However, we found counterexamples, which proves that there cannot be a proof for stability without additional assumptions. Nonetheless, the empirical study shows clear benefits of the approach, which makes it valuable. In deep learning, the mechanism by which techniques like batch normalization and layer normalization improve optimization is still an open area of investigation [1][2]. \\n[1] https://papers.nips.cc/paper/7996-understanding-batch-normalization.pdf\\n[2] https://papers.nips.cc/paper/7515-how-does-batch-normalization-help-optimization.pdf\\n\\n\\n> In the introduction, the paper says that the paper investigates convergence: where is the convergence investigation?\\n\\nThis sentence was talking about the linear function approximator case; it has been changed to \\u201cempirically evaluates stability\\u201d. \\n\\n\\n> Below eq (1), it is written as \\\"the second order moments of the variance\\\". Is it the second moment, or the variance? How do you convex combinate variances? Is it OK to do so?\\n\\nThank you for pointing this out! That was a mistake in our writing and we have removed it. We do not convex-combinate the variances: the variance of the concatenated minibatch is computed as a whole (this happens automatically via the joint forward pass, with no special code on our part).\\n\\n\\n> While it is claimed in the paper that TD3 + CrossRenorm (alpha=0.99) performs well, it is not really justified why (alpha=0.99) is crucial. While it is written in the paper that \\\"As the distribution of the off-policy actions from the experience replay changes considerably slower than the action distribution of the constantly changing current policy,\\\" such a point can also be applied to DDPG, and it does not explain why alpha=0.99 is needed only for TD3.\\n\\nIt is not \\u201ccrucial\\u201d to use alpha=0.99. As shown by the comparison with the alpha=0.5 runs, we found that TD3 is particularly sensitive to the details of normalization in a way that DDPG is not. This is why we focused our effort on TD3 and chose to tune the mixing with alpha and empirically found .99 to be appropriate with renormalization. \\n\\n> The paper also lacks experiments about BatchRenorms on DDPG and TD3, which would be a fair comparison against CrossRenorm.\\n\\nWe evaluated BatchRenorm on DDPG and TD3 and have included these results in the supplemental material. Neither of these variants worked consistently better than the baselines already included. TD3 + BatchRenorm was often not stable.\\n\\n\\n> Why do we need Figure 4? Is it only for the comparison against SAC?\\n\\nFig. 
4 compares our modifications against the author-reported original hyperparameters of TD3 (and also SAC). Also, the original hyperparameters of TD3 (batch size=100) are different compared to Fig. 3 (batch size=256), giving it slightly better performance.\\n\\n> Below eq (2), the paper says about big \\\\Phi, but it is never defined and not used anymore. What is it about?\\n\\nThis describes how we normalize in the linear function approximator case; it has been clarified in the paper by replacing with \\\\phi.\\n\\n\\n> The stability improvement analysis implies that the mean-only crossnorm is sufficient for stabilization. Why do we need variance normalization then?\\n\\nThe mean-only crossnorm is only required to avert divergence; however, the additional variance normalization significantly improves the training speed. \\n\\n\\nWe hope that these additional experiments and clarifications have improved the paper.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for the feedback and the detailed reviews. We have incorporated these suggestions to improve our paper.\", \"to_address_the_main_point_of_concern\": \"the applicability of the method to a diverse set of environments, we did additional experiments with the gym humanoid environment and two SURREAL/robosuite robotic manipulation tasks, both now shown in the supplemental material. Further, we added a comparison with standard batch renormalization to the supplemental material.\\n\\nWe address the questions and comments of the reviewers below.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper introduces a new normalization scheme, cross normalization, that stabilizes the off-policy reinforcement learning algorithm. The results show that by simply performing batch-normalization, where the mean and variance statistics are computed with both behavior and target action samples, it can increase the performance of the DDPG and TD3 algorithms consistently. The paper also shows that it prevents the algorithm from diverging even when the target network is removed, showing the source of stabilization.\\n\\nThe results are surprisingly good when the simplicity of the algorithm is considered. Nevertheless, I think the paper is not providing enough theoretical backing for the claimed algorithm, and it prevents me from being completely convinced. Also, the paper does not seem to be a complete draft - there are many points that seem to be incomplete. I think it would be much better if the paper develops some theory behind the normalization, referring to some previous results such as (Liu, Yao, et al. \\\"Representation balancing mdps for off-policy policy evaluation.\\\" Advances in Neural Information Processing Systems. 2018.). For now, I feel that the paper is not ready for publication.\", \"here_are_some_problems_with_the_paper_i_found\": \"1. In the introduction, the paper says that the paper investigates convergence: where is the convergence investigation?\\n\\n2. At the top of page 4, the sentence is not complete\\n\\n3. Below eq (1), it is written as \\\"the second order moments of the variance\\\". Is it the second moment, or the variance? How do you convex combinate variances? Is it OK to do so?\\n\\n4. 
While it is claimed in the paper that TD3 + CrossRenorm (alpha=0.99) performs well, it is not really justified why (alpha=0.99) is crucial. While it is written in the paper that \\\"As the distribution of the off-policy actions from the experience replay changes considerably slower than the action distribution of the constantly changing current policy,\\\" such a point can also be applied to DDPG, and it does not explain why alpha=0.99 is needed only for TD3. The paper also lacks experiments about BatchRenorms on DDPG and TD3, which would be a fair comparison against CrossRenorm.\\n\\n5. Why do we need Figure 4? Is it only for the comparison against SAC?\\n\\n6. Below eq (2), the paper says about big \\\\Phi, but it is never defined and not used anymore. What is it about?\\n\\n7. The stability improvement analysis implies that the mean-only crossnorm is sufficient for stabilization. Why do we need variance normalization then?\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes a novel normalization strategy for Off-Policy TD Reinforcement learning. Normally, Off-Policy TD RL is stabilized by the use of a target network, which has the disadvantage of slowing down the learning process. The paper first shows the effects of using existing normalization methods (batch and layer normalization) in the context of OPTD methods. Those approaches are shown to be inferior to target networks, because the data (actions in off-policy transitions and on-policy transitions) is coming from two different distributions. Experiments show that those normalization methods do not lead to consistent improvements over the benchmarks.\\nTo tackle this problem, the authors introduce a cross-normalization scheme that works across the two datasets in this context. Cross-normalization is achieved by calculating a mixture of the mean values of both on- and off-policy state-action pairs. The weight of the contribution of those distributions is handled by a hyperparameter, which defaults to 0.5; thus using a balanced influence on both distributions (CrossNorm). Since the mean features of the off-policy data are more stationary, two strategies are applied: First, the hyperparameter is set to 0.99, thus giving more weight to the off-policy data, which is less volatile. Second, means and variances are computed over several batches to increase stability (CrossRenorm). It is finally shown that the CrossRenorm approach is able to surpass state-of-the-art performance on the MuJoCo benchmark while having the benefit of not needing a target network. In further experiments, it is shown that CrossNorm stabilizes learning in most contexts, but does not guarantee convergence in all settings.\\n\\nOverall, the paper manages in a very clear and structured manner, (1) to show the current approaches for stabilizing learning and their downsides, (2) to show why common normalization methods fail and (3) formulates a possible solution for this problem. 
Furthermore, empirical results are not only shown, but also analysed.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of feature normalization in off-policy RL, more specifically, learning a Q function with continuous actions from off-policy data. It shows standard feature normalization methods in supervised learning are indeed not effective for RL settings, due to the fact that a and a\\u2019 are from very different distributions with different dynamics. Since the batches of a and a\\u2019 come to the model alternately, standard normalization methods suffer from this 2-periodic distribution shift. This paper proposes a normalization method, by merging a and a\\u2019 into a *single* update step to the batch normalization layer.\\n\\nThis paper does catch a problem and uses a straightforward but empirically effective solution, but I still have two main concerns. 1) This paper only shows benefits on 4 tasks in the MuJoCo domain. 2) As the solution is relatively straightforward, with very restrictive applicable settings (a particular normalization trick with a particular function approximator), it\\u2019s less clear to me whether this provides enough contribution and inspiration to other work as a conference paper. I tend to vote for reject at this time.\\n\\nI would like to first point out the pros of this paper from my perspective and then explain my main concerns point by point. This paper does a great job of capturing the dilemma of batch normalization in RL settings. My understanding is that the problem is caused by a periodic distribution shift between a and a\\u2019. Because we have to pass a and a\\u2019 in separate batches and BatchNorm performs an online update after each batch, we are in a dilemma, as the paper pointed out. If we don\\u2019t update BatchNorm in one of them (e.g. the target value), it will be biased and make BatchNorm ineffective, and if we do so we will have a systematic difference between Q(s, a) and Q(s\\u2019, a\\u2019).\", \"main_concerns\": \"1) This paper only shows benefits on 4 tasks in the MuJoCo domain. Given that the empirical result is pretty much the only support of the claim in this paper, the lack of more diverse experiments would weaken the contribution.\\n\\n2) The dilemma is totally caused by the fact that BatchNorm immediately performs an update according to a batch after it is input; a lazy update will cancel this: do a single update to the BatchNorm layers for the two batches of data (a, a\\u2019). 
I doubt how much algorithmic insight this paper could contribute to inspire related research.\", \"minor_point\": \"It also makes me doubt whether we really need an alpha or not.\\n\\n3) As the paper pointed out \\u201cusing the training mode in the target calculation would result in different mean subtractions of the Q function and its target.\\u201d This means it will have a systematic difference in the Q function used to compute Q(s, a) and Q(s\\u2019, a\\u2019), but isn\\u2019t this also true for the target network since we are using a different network to compute Q(s\\u2019, a\\u2019)? Eventually, if the policy converges, those differences will disappear. So why would the target network have no problem while this does? In general, I\\u2019d like to see a clearer analysis of the dilemma of BatchNorm in off-policy data, and why the two simple ways won\\u2019t work.\"}" ] }
BJeGZxrFvS
A Simple Technique to Enable Saliency Methods to Pass the Sanity Checks
[ "Arushi Gupta", "Sanjeev Arora" ]
{\em Saliency methods} attempt to explain a deep net's decision by assigning a {\em score} to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input. Recently \citet{adebayosan} questioned the validity of many of these methods since they do not pass simple {\em sanity checks}, which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for inputs. We propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call {\em competition for pixels}. This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map. Some theoretical justification is provided for it and its performance is empirically demonstrated on several popular methods.
[ "saliency", "attribution", "interpretability", "sanity checks" ]
Reject
https://openreview.net/pdf?id=BJeGZxrFvS
https://openreview.net/forum?id=BJeGZxrFvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "NeSqmWx34", "rJgOz7c2jH", "Bkgs3f93jr", "Hyg7GfcnjH", "SJx3NyisqS", "S1enNy8TFr", "BJg6iRPDYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741273, 1573851919783, 1573851827180, 1573851658993, 1572740915600, 1571802931609, 1571417765402 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2127/Authors" ], [ "ICLR.cc/2020/Conference/Paper2127/Authors" ], [ "ICLR.cc/2020/Conference/Paper2127/Authors" ], [ "ICLR.cc/2020/Conference/Paper2127/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2127/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2127/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This submission proposes a method to pass sanity checks on saliency methods for model explainability that were proposed in a prior work.\", \"pros\": \"-The method is simple, intuitive and does indeed pass the proposed checks.\", \"cons\": \"-The proposed method aims to pass the sanity checks, but is not well-evaluated on whether it provides good explanations. Passing these checks can be considered as necessary but not sufficient.\\n-All reviewers agreed that the evaluation could be improved and most reviewers found the evaluation insufficient.\\n\\nGiven the shortcomings, AC agrees with the majority recommendation to reject.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Official Blind Reviewer 2\", \"comment\": \"We thank reviewer 2 for their thoughtful review. We agree that the question of how to optimally apply competition among labels remains open.\"}", "{\"title\": \"Response to Official Blind Reviewer 1\", \"comment\": \"We thank reviewer 1 for their thoughtful review.\\n\\nWe apologize for the typos, and are correcting them in the revised version. As for the sentence beginning Section 4, we attempted to show, in Figure 1, that that although nearly all the pixels in the actual digit \\u20183\\u2019 are highlighted in the saliency map with chosen label 3, some of these pixels appear more relevant for other classes, for example in the map for logit 7 the top and backbone of the 3 can clearly be seen. This suggests that just because a pixel is present in the saliency map for logit \\u20183\\u2019, does not mean that is it primarily indicative of the label being \\u20183\\u2019, especially if it assigns a lower score to the label being \\u20183\\u2019 than to \\u20187\\u2019.\"}", "{\"title\": \"Response to Official Blind Reviewer 4\", \"comment\": \"We thank reviewer 4 for a thoughtful review.\\n\\n4. In Figure 1, the data was normalized before feeding it to the neural network, so the background values are not all zero. We would also like to note that it is possible that any anomalous values in the input image may propagate spuriously to the logit, whether they be edges or some other image feature. \\n6. Figure 1 shows the CGI map for chosen logit \\u20183\\u2019. \\n7. In order to compare across logits, either completeness or approximate completeness (meaning there is a correlation between the sum of saliency scores for each logit and the logit value) must be satisfied. LRP, gradient * input for ReLU nets with no bias, DeepLift for ReLU nets with no bias, all satisfy completeness. Gradient *input seems to satisfy approximate completeness even for ReLU networks with bias. \\n8. 
The authors do not mean to imply that such \\u2018shared features\\u2019 (those indicative for multiple classes) are irrelevant. However, our competition idea forces each pixel to choose one label (the one giving it the maximal score). While at first glance this seems to discard potentially relevant pixels, the theory suggests reasons why it doesn\\u2019t happen too often.\\n9. This may be seen in Figure 1 in the rightmost column. \\n10. Taking the gradient of the post-softmax probability with respect to the input does not satisfy completeness for the logit value for gradient * input, and we found it did not pass the sanity checks well. The referenced paper computes a different calculation, which is using the derivative of the post softmax probability with respect to the logit as the initial value for LRP. We did not test this method, as it is more specific to LRP and we were examining a more broad class of saliency methods.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\n\\nThe paper proposes a simple technique to address the problem introduced by Adebayo et al. that several saliency approaches do not pass sanity checks. The proposed approach computes the saliency maps for all the classes and removes the pixels that play a role in predicting several classes.\", \"strengths\": \"1. Simple and intuitive approach. \\n2. Well written and easy to read paper.\\n3. The introduced approach makes Grad.Input pass the sanity checks introduced by Adebayo et al.\", \"weaknesses\": \"1. For any interpretability technique, passing the sanity check is a must, but just because a saliency technique passes the sanity checks, it doesn\\u2019t mean that these maps explain the network\\u2019s decision well. \\n2. Lack of any quantitative evaluation (such as localization or pointing experiment) of their approach. \\n3. Failure to show if the resultant maps are class-discriminative. Show performance on images with multiple classes. \\n4. In fig 1, In Grad . Input, I see positive values or negative values even when the original pixels are not active. This doesn\\u2019t explain the presence of edges causing high values in the G.I map for such pixels, right?\\n5. In figure 1, These maps only assign values to the pixels that need to be removed to make a certain classification decision. The regions that need to be active but are not present are not highlighted. \\n6. In figure 1 the shown CGI Map is for which class?\\n7. So, is the approach only applicable to such systems where the completeness is true? Can the authors provide a list of approach that satisfy completeness:\\n8. Page 3 last paragraph: Consider the example in figure 1. Let's consider the maps for digit 3 and 5. For the top horizontal part of the digit, it plays a role in determining both 3 and 5. Assume that for one such pixel the value of h_5_i is greater thatn h_3_i (looking at the figure it is not unreasonable to expect that). Just because the g.input value of h_5_i is greater that h_3_i , are the authors saying that the top part is irrelevant?\\n9. How does CGI look for the original 3 on standard model?\\n10. Could the authors provide more intuition as the why the gradients of outputs from softmax layer doesn\\u2019t give good results? 
The proposed approach from https://arxiv.org/pdf/1908.04351.pdf suggests that computing gradients from last layer improves the class discriminative behaviour.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I Summary\\nThe paper directly answers two sanity checks for saliency maps proposed by Adebayo et al (2018): 1. randomizing the weights of a model to prove that the input's resulting saliency map is different from a trained model' saliency map. 2. randomizing the inputs' labels to make the same proof. The authors propose a \\\"competitive version of saliency method\\\" which uses the saliency scores of every pixel for each labels and zero out: positive scored pixels that would not be maximal for the predicted class and negative scored pixels that are not minimal for the worst predicted label.\\nOverall the method solves the aforementioned sanity checks, the authors claim it also generates more refined saliency maps.\\n\\nII Comments\\n\\n1. Content\\nThe paper can be hard to read, due to multiple writing mistakes, abrupt phrasing, not well-articulated sentences. However, the idea is easy to understand and interesting but the contribution does not seem strong enough in its actual state.\", \"my_main_concern_is_that_the_method_seems_to_be_designed_only_to_answer_the_sanity_checks\": \"the resulting saliency maps can hardly be seen as more informative as other existing methods (eg figure 1). Quantitative measures (ROAR & KAR, Hooker et al. 2018) or surveys to show that the newly obtained saliency maps are more refined or help to best localize regions of interest would be a big bonus.\\n\\n2. Writing\\nThe paper comports numerous typos, those do not impact the score of the review except if the sentence is not understandable. Please see the following points as support to improve the clarity of the paper.\\n- Abstract last sentence: \\\"Some theoretical justification is provided\\\" -> \\\"Some\\\" is vague and makes your claim less credible -> \\\"theoretical justifications are given in the last paragraph to support our method...\\\"\\n- Intro \\n paragraph 2 first sentence lack some words, l 2 product -> a product\\n \\\"See paper XX et al\\\" -> \\\"As in XX et al, we can see that\\\" or \\\"As stated in XX et al\\\", \\\"See\\\" is too familiar, formalizing the phrasing gives more credibility to your work\\n- Related work\\n \\\"To give an example, does the map change a lot if we blank out or modify a portion of the image that humans find insignificant Kim et al. (2019)? \\\" This is not very well articulated, \\\"a lot\\\" is vague and a little familiar, \\\"significantly\\\" could be used here. Moreover, the citation is a little abrupt \\\"as we can see in XX\\\" would work better \\nLittle typo on etc..\\n\\\"fare best\\\" -> far better? The wording is still vague, it would help to add a quantitative measure\\nthat's -> that is\\n- Section 3\\n\\\"This figure highlights\\\" -> which figure? (I think you just missed citing the fig here)\\n- Section 4\", \"first_sentence\": \"Why is it a good idea? 
The claim is a little abrupt and could be detailed a little more\\n\\\"destroy the saliency map\\\" -> destroy is a very strong word\\n\\\"These random variables are complicated.\\\" -> This statement seems a little out of place and abrupt\\n\\\"some constants\\\" -> \\\"constants\\\" (too vague otherwise as before)\\n- Subsection 4.1\\n\\\"randomly sampling a few such methods\\\" I believe there is a typo?\\n\\\"See figure 3\\\" is abrupt as a sentence itself \\\"as you can see in figure 3 bla bla\\\"\\nFigure 4 The image is small and hard to see on printed paper (same for the images in the appendix, they could be stacked over multiple lines instead of just one horizontal row)\\nDefinition 2 punctuation at the end\\n- Section 5\\n\\\"The available code for these maps is slow, and computing even gradient for all 1000 ImageNet labels can be rather slow.\\\" What is the aim of this sentence?\\n- subsection 5.3 \\nlables -> labels\\n\\nIII Conclusion\\nThe method itself is interesting, it would be interesting to see more qualitative results on the obtained saliency map itself: Does it produce more information? Is it more meaningful etc. Because as of now, it only seems to answer the two aforementioned sanity checks. As for the writing, it is not always clear and can impede the understanding of the paper. I would be glad to change my review if those points are addressed.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a strategy for visualizing activation in networks that corresponds to features in the input layer. It addresses a problem posed for existing methods for characterizing saliency in activation subject to sanity checks which measure the degree to which the activation (saliency) map changes subject to different randomization tests.\\nThe proposed solution involves a simple competition mechanism across saliency maps produced when different logits are considered such that small values are zeroed out in favor of larger values across the logits.\\nOverall, I find this paper to be interesting and to address a problem worthy of further consideration. While the mechanism for competition is very simple, the resulting activation maps subject to randomization tests are reasonably convincing.\\nIn the ideal case, I would have liked to see different strategies for eliciting competition explored to determine their relative merits. Nevertheless, I expect that such work will follow with this being an initial step in this direction.\"}" ] }
B1eWbxStPH
Directional Message Passing for Molecular Graphs
[ "Johannes Gasteiger", "Janek Groß", "Stephan Günnemann" ]
Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules. These models represent a molecule as a graph using only the distance between atoms (nodes). They do not, however, consider the spatial direction from one atom to another, despite directional information playing a central role in empirical potentials for molecules, e.g. in angular potentials. To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves. Each message is associated with a direction in coordinate space. These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule. We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them. Additionally, we use spherical Bessel functions and spherical harmonics to construct theoretically well-founded, orthogonal representations that achieve better performance than the currently prevalent Gaussian radial basis representations while using fewer than 1/4 of the parameters. We leverage these innovations to construct the directional message passing neural network (DimeNet). DimeNet outperforms previous GNNs on average by 76% on MD17 and by 31% on QM9. Our implementation is available online.
[ "GNN", "Graph neural network", "message passing", "graphs", "equivariance", "molecules" ]
Accept (Spotlight)
https://openreview.net/pdf?id=B1eWbxStPH
https://openreview.net/forum?id=B1eWbxStPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "2TUEz6rFkb", "B1xQUxXDsH", "HJl0NxmDiH", "r1lvxgXwjH", "H1eUT3mRtS", "Sklq1fBatH", "rJx3xOahYB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741241, 1573494859009, 1573494837906, 1573494766755, 1571859646150, 1571799522379, 1571768307680 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2126/Authors" ], [ "ICLR.cc/2020/Conference/Paper2126/Authors" ], [ "ICLR.cc/2020/Conference/Paper2126/Authors" ], [ "ICLR.cc/2020/Conference/Paper2126/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2126/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2126/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper studies Graph Neural Networks for quantum chemistry by incorporating a number of physics-informed innovations into the architecture. In particular, it considers directional edge information while preserving equivariance.\\n\\nReviewers were in agreement that this is an excellent paper with strong empirical results, great empirical evaluation and clear exposition. Despite some concerns about the limited novelty in terms of GNN methodology ( for instance, directional message passing has appeared in previous GNN papers, see e.g. https://openreview.net/forum?id=H1g0Z3A9Fm , in a different context). Ultimately, the AC believes this is a strong, high quality work that will be of broad interest, and thus recommends acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"DimeNet beyond molecules\", \"comment\": \"As you correctly pointed out, going beyond the graph and incorporating the underlying spatial data is one of the main ideas behind our model. In our work we focus on molecular prediction and leverage the characteristics of this problem. However, many of the ideas we propose in our paper should be applicable to other graphs with underlying geometric data as well, such as meshes in computer vision.\"}", "{\"title\": \"Improved experiments\", \"comment\": [\"During most of our model development process we used a single model to jointly predict all 12 targets. We only changed this for the final training runs. We have added the results of multi-target predictions in the updated appendix (both with shared and separate output blocks per target). Using a shared model increases std. MAE by 81%, even when using separate output blocks. Please note that competing models use single-target models as well.\", \"We have extended the description of the MD17 dataset to incorporate your suggestions.\", \"The finding that a more complex angular transformation helps while a more complex distance transformation does not is mainly empirical. An intuitive explanation for this might be the fact that a rotation in space is a more complex transformation than a translation (in terms of the required matrix operation).\", \"We only take over the architectural complexity from the model we are extending upon (PhysNet) and simplified where it was possible without sacrificing performance.\", \"We have added the suggested workshop paper to the related work section. 
Note, however, that their method of incorporating orientations requires an explicit order of atoms, which breaks permutation invariance and is therefore only possible for chain-like molecules like proteins.\"]}", "{\"title\": \"Performance improvements and directionality\", \"comment\": \"Performance improvements: After the submission deadline we noticed that competing models use the raw (non-standardized) data for training. To make the setup more consistent we changed this training detail, which made a surprisingly large difference. We achieved further small improvements by jointly representing interatomic distances and angles in a 2D basis and by increasing the number of interaction blocks from 4 to 6. We\\u2019ve updated the description in the paper accordingly (see Sec. 5). We now set the state of the art on 11 out of 12 targets.\", \"directionality_has_not_been_explored_in_gnns\": \"Our proposed directional message embeddings are indeed related to edge embeddings, as we point out in the related work section. However, normal edge embeddings do not incorporate any directional information. By using directional information we essentially go beyond the graph representation and leverage the fact that molecules are in fact embedded in 3D space. Also, note that our model outcompetes a state-of-the-art GNN with edge embeddings (MEGNet, published in April 2019) on average by 71%, while relying solely on message embeddings. MEGNet on the other hand uses a combination of node, edge and graph embeddings.\", \"implicit_angles\": \"Since SchNet only uses the distances between atoms it can not model angles (even implicitly) when it only passes messages between one-hop neighbors (see e.g. limitations of the Weisfeiler-Lehman test and our discussion in Section 4 and Appendix A). Models like SchNet can only use the full geometrical information when they use the full distance matrix, i.e. a fully connected graph and no cutoff.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Strength:\\n-- The paper is well written and easy to follow\\n-- The authors proposed a new approach called directional message passing to model the angles between atoms, which is missing in existing graph neural networks for molecule representation learning\\n-- The proposed approach are effective on some targets.\", \"weakness\": \"-- From the point of view of graph neural networks, the novelty of the proposed techniques is marginal\\n-- The performance of the proposed method are only better than existing methods on some of the targets. \\n\\nThis paper studied learning the graph representation of molecules by considering the angles between atoms. The authors proposed a specific type of graph neural network called directional message passing. Experimental results on the QM9 data set prove the effectiveness of the proposed approach over existing sota algorithms such as Schnet for some of the targets. \\n\\nOverall, the paper studies a very important problem, which aims to learn the representation of molecules. Modeling the angles of atoms is indeed less explored in existing literature. From the view of graph neural networks, the proposed technique is not that new since edge embedding has already been studied in existing literature. 
But for the technique could be particular useful for molecule representation learning, especially with the BESSEL FUNCTIONS. One question is that the schnet also leverages the coordinates of the atoms, which may also implicitly model the angles between the edges. In terms of the experiments, the proposed approach does not seem that strong, only achieving the best performance on 5 targets out of 12. \\n\\nOverall, I feel this paper is on the borderline. Now I will give weak accept and will revise my score according to other reviews and comments.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This is a sophisticated paper on predicting molecular properties at atom as well as molecular levels using physics inspired, extended GNN architectures. Two key extensions are provided above and beyond previous GNN models that operated on graphs derived from pairwise distances between atoms. First, the encoding of atom distances for use in neural messages is no longer done in terms of Gaussian radial basis function representations but in terms of spherical Bessel functions. The provide an orthogonal decomposition at resolutions controlled by associated frequencies. The orthogonality is though lost due to the use of an envelop function to ensure differentiability at cutoff distance defining the graph (essential for molecular simulations) but this appears not to affect performance. The second and the primary contribution of the paper, beyond the architecture itself, is the use of directional embeddings of messages where angles are transformed into cosine basis for neural mappings. In other words, the message sent from atom i to j aggregates messages from i's other neighbors in a manner that takes into account the angle formed by i, j, and k's respective atom positions. Since local directions are equivariant with an overall molecular rotation, the message passing architecture in this new representation remains invariant to rotations/translations. The added directional information, embedded in local basis representation, clearly makes the network more powerful (able to distinguish higher order structures).\", \"the authors suggest that the radial information can be transformed simply by element-wise multiplication while angular information requires more complex transformations in message calculations. Is there a physical insight to this or is this simply an empirical finding?\", \"there are many layers of transformations introduced from the atom embeddings before reaching the property of interest. Are so many layers really necessary?\", \"it seems models for QM9 data were trained separately for each different physical target. Is this really necessary? Given the many layers of transformations until the properties are predicted, couldn't the message passing component be largely shared?\", \"what exactly is the training data for the molecular simulation tests? The description in the paper is insufficient. A separate model is trained for each molecule, presumably based on samples resulting from physical simulations (?). 
What is provided to the models based on each \\\"sample\\\"?\", \"the ablation studies are helpful to assess the impact of the main differences (directionality, bessel vs Gaussian, node embeddings vs message) though I would wish to see what the degradation effect is on QM9 if one used a shared message passing architecture (just sharing the messages, resulting embeddings could be transformed separately for different predictions).\", \"There's a recent workshop paper also making use of directional information (local coordinate transformations along a protein backbone chain) in message passing/transformer architectures: Ingraham et al., Generative models for graph-based protein design, ICLR workshop 2019\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper beneficially incorporates directional information into graph\\nneural networks for molecular modeling while preserving equivariance.\\n\\nThis paper is a tour de force of architecture engineering. Continuous\\nequivariance, potential field representation, and bandwidth limited\\nbasis functions are synthesized in a compelling manner.\\n\\nI found the exposition extremely intelligible despite my lack of\\nfamiliarity with molecular modeling.\\n\\nThe contribution is clear, although applicability beyond the specific\\ndomain of molecular modeling is possibly limited. Being continuously\\nequivariant with respect to rotations is interesting, but seems to require\\nproblems where the input is encoded as a point cloud in a vector space;\\nI'm not familiar with such problems. Nonetheless, the domain of molecular\\nmodeling is sufficiently important in isolation.\\n\\nI recommend acceptance, because the contribution is strong, the writing\\nis excellent, the ideas are well-motivated, and the experiments support\\nthe empirical claims.\"}" ] }
HJe-blSYvH
Unsupervised Learning of Efficient and Robust Speech Representations
[ "Kazuya Kawakami", "Luyu Wang", "Chris Dyer", "Phil Blunsom", "Aaron van den Oord" ]
We present an unsupervised method for learning speech representations based on bidirectional contrastive predictive coding that implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals. The representations, which we learn from up to 8000 hours of publicly accessible speech data, are evaluated by looking at their impact on the behaviour of supervised speech recognition systems. First, across a variety of datasets, we find that the features learned from the largest and most diverse pretraining dataset result in significant improvements over standard audio features as well as over features learned from smaller amounts of pretraining data. Second, they significantly improve sample efficiency in low-data scenarios. Finally, the features confer significant robustness advantages to the resulting recognition systems: we see significant improvements in out-of-domain transfer relative to baseline feature sets, and the features likewise provide improvements in four different low-resource African language datasets.
[ "features", "efficient", "robust speech representations", "significant improvements", "unsupervised learning", "learning", "unsupervised", "speech representations", "phonetic structure" ]
Reject
https://openreview.net/pdf?id=HJe-blSYvH
https://openreview.net/forum?id=HJe-blSYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "MFRfzdtj4l", "OfRwYSTzXM", "r1xr4KzjsS", "SyxV5OzjsS", "SklFPOGiiS", "H1gpEOzsjH", "rkl6lH2for", "rJezAFHRFS", "HJlqbVhpYH", "rJlW3Szi_r" ], "note_type": [ "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1579672096232, 1576798741211, 1573755180531, 1573755020360, 1573754976719, 1573754933382, 1573205236545, 1571867082347, 1571828738089, 1570608552644 ], "note_signatures": [ [ "~Michael_Auli1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2125/Authors" ], [ "ICLR.cc/2020/Conference/Paper2125/Authors" ], [ "ICLR.cc/2020/Conference/Paper2125/Authors" ], [ "ICLR.cc/2020/Conference/Paper2125/Authors" ], [ "~Mark_Adams5" ], [ "ICLR.cc/2020/Conference/Paper2125/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2125/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2125/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"wav2vec does benefit from more data\", \"comment\": \">> large-scale pretraining reverse a trend toward worse performance with larger data that was hinted at in [2]\\nThis is a misinterpretation of our results. Almost all experiments show that training on more data helps, except for one instance where there was a 0.04 WER degradation on test when adding 8% more data - far from significant. Please fix this in the current draft as it misrepresents our findings.\"}", "{\"decision\": \"Reject\", \"comment\": \"The paper focuses on learning speech representations with contrastive predictive coding (CPC). As noted by reviewers, (i) novelty is too low (mostly making the model bidirectional) for ICLR (ii) comparison with existing work is missing.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to negative sampling description\", \"comment\": \"1. We observed sampling negatives from the same audio signal always provide better results. We will remove \\u201cand/or mini-batch\\u201d from the line.\\n\\n2. We will be able to release the pre-trained model after publication.\"}", "{\"title\": \"Response to review #3\", \"comment\": \"Thank you for the thorough review.\\n\\nWe will include more citations to the ZeroSpeech line of work and contextualize our method in terms of it. However, while both our paper and the ZeroSpeech work learn unsupervised acoustic representations, our semi-supervised evaluation is important since it may conceivably require a qualitatively different kind of acoustic features for optimal performance.\"}", "{\"title\": \"Response to review #1\", \"comment\": \"Thank you for the review.\\n\\nWe would like to clarify the novelty of our work and its relation to published work. The representation learning objectives and model architectures are indeed quite similar to [1, 2]; however, we did not intend to imply that either the architecture or training objective is what makes this paper novel, rather we contribute three significant results:\\n1. We demonstrate the feasibility and advantages of pre-training on large-scale and noisy data.\\n2. We demonstrate robustness on out-of-domain evaluation that large-scale pre-training provides. \\n3. We demonstrate that large-scale pre-training results in representations that are universal, as demonstrated by performance on low-resource languages.\\nAll three of these points are new compared to results shown in previous papers.\\n\\nMoreover, these are important aspects of representation learning. 
And, not only do we explore them systematically for the first time, but we also apparently find that large-scale pretraining reverses a trend toward worse performance with larger data that was hinted at in [2], where they show that increasing the amount of pre-training data did not lead to improved downstream performance. Also, in our replication, the models trained only on Librispeech data did not perform well in out-of-domain evaluations or in low-resource languages, demonstrating the importance of diverse kinds of pre-training data - again, a novel and important result. Those findings are of considerable interest since a major benefit of unsupervised learning is being able to improve the robustness of ASR systems, which is a long-standing challenge. The low-resource aspect has been investigated in the ZeroSpeech challenge. We will explain the connections in the final version of the paper.\\n\\nRegarding the effect of the LM on the WSJ task: again, the point of this paper is the robustness of the acoustic representations across datasets, domains, and languages. The results of our in-domain setup demonstrate that these representations are adequate for this setup, but exploring the in-domain setup in detail distracts from the point of the paper.\", \"regarding_the_minor_tweaks_to_the_model\": \"The improvements to the model (bidirectional context network and dense residual connections) certainly improved the performance to the level that our representations only need 10% of the labelled data to achieve the same result as the same model trained on spectrogram features using 100% of the training data.\\n\\nWe used exactly the same model architecture and learning rate schedule for different features. We fixed the model to DeepSpeech2 and tuned the learning rate and its scheduler for baseline features (not for our features). It is quite likely we could further improve the results with our features if we re-tuned the hyperparameters, but the scientific point we wished to make was already made.\\n\\n[1] Representation Learning with Contrastive Predictive Coding, van den Oord et al.\\n[2] Wav2vec: unsupervised pre-training for speech recognition, Schneider et al.\"}", "{\"title\": \"Response to review #2\", \"comment\": \"Thank you for the review.\", \"we_address_review_concerns_here\": \"1. Our focus in this paper is on completely unsupervised acoustic representations, as well as the properties they confer on the ASR systems that use them. The self-training approach you suggest can indeed be a good way to improve the performance of a system, but it is a different research question that is beyond the scope of this paper. We will identify this as a related strategy.\\n2. The switchboard data was upsampled to 16kHz; we will clarify in the paper.\\n3. We wanted to include TDNN because it is the state of the art in supervised ASR. However, we also wanted to be able to work with a simpler model class than is standard, and DeepSpeech2’s recurrent architecture (which is close to SOTA) meant it could be more easily shrunk without changing the receptive field (which would have happened had we removed layers from TDNN). An alternative would have been to reduce the capacity of the TDNN model without changing the receptive field size, but deep low-capacity layers are difficult to train, and we felt this would have led to higher-variance results.\\n4. Thanks for the suggestion. 
We will run these experiments in ongoing work.\"}", "{\"title\": \"Negative Sampling Description is vague\", \"comment\": \"Thank you for your submission! A couple of questions:\\n\\n1. The paper says \\\"the negatives are uniformly sampled from representations in the same audio signal (z) and/or mini-batch\\\". Could you clarify what this \\\"and/or\\\" means? Considering a batch size of 128 and k=12 steps, do you sample uniformly the 10 negatives out of the 128 * 12 representations? Or do you sample from the 128+11 representations that are the union of the current batch representations at a fixed step and the current audio signal at all steps? Does the method used to select negatives change from one experiment to the other? This point is more than just a detail: as the original CPC paper (van den Oord et al. 2018) showed, the method used to select the negative samples (same-speaker vs. uniform on dataset) had a significant impact on the quality of the produced embeddings.\\n\\n2. Do you plan to open source the code supporting the experiments of this paper?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper investigates an unsupervised learning approach based on bi-directional contrastive predictive coding (CPC) to learning speech representations. The speech representations learned using 1k and 8k hours of unlabeled data based on CPC are shown to be helpful in semi-supervised learning ASR tasks in terms of sample efficiency, WER and cross-domain robustness. The reported work is interesting and may have value to the speech community. Regarding the paper, I have the following concerns.\\n\\n1. In terms of semi-supervised learning ASR, I think any proposed approach should compare with the \\\"naive\\\" way of doing it. That is, use a high-performance ASR model to decode the unlabeled data and use the decoded pseudo-truth as the ground truth to train an acoustic model with an appropriate capacity. In my experience, many of the \\\"novel\\\" approaches cannot outperform this \\\"naive\\\" method. I would like to see this as a baseline for the semi-supervised learning experiments. \\n\\n2. In sec. 3.1 on the setting of unsupervised learning, the authors state that \\\"all audio signals have a sampling rate of 16KHz\\\". This is obviously not true for the Switchboard data in Table 6 in Appendix A, which has a sampling rate of 8KHz as they are telephony signals. The authors should clarify. \\n\\n3. It is not clear to me why the authors use two different ASR models (DeepSpeech2 small and TDNN). Why not stick to one architecture but adjust the model capacity? \\n\\n4. I wonder if the latent features learned by CPC can be complementary to the conventional features such as logmel? How does it perform if the two are simply concatenated as the input to the acoustic model? \\n\\nP.S. rebuttal read. 
I will stay with my score.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an unsupervised method for learning representations of speech signals using contrastive predictive coding.\\nThe authors provide results for the speech recognition task, in which they trained their model on up to 8000 hours of speech. The authors provide results on several English benchmark datasets in addition to four low-resource African language datasets. \\nThe authors compared their method to traditional signal processing representations and show that the proposed method is superior. \\n\\nMy main concern with this submission is its novelty.\\nThe proposed method was previously explored in [1] and presented similar results. If I understand it correctly, the main novelty in this work is the usage of bi-directional models together with more data. However, it is not clear what made the improvements. Considering the fact that such an approach was suggested recently by [1], a detailed comparison with uni-directional models is needed.\\nFor example, in Table 2, the authors provide results for the WSJ dataset, however, with no LM decoding. Can the authors provide experiments on WSJ while using an LM, similarly to [1]? Moreover, if the authors wanted to eliminate the effect of the LM as they stated in the paper, why not calculate Character Error Rates instead of or in addition to Word Error Rates? Again, as done in [1], and in many other papers in the field [2]. \\n\\nAdditionally, in Table 1 and Table 5, the error rates seem pretty high, especially for the baseline model; did the authors investigate different architectures/stronger ones for these tasks? Different representations such as LogFilterBanks / MFCCs?\\n\\nI'm willing to increase my score if the authors address my concerns. However, at the moment, I do not see much novelty in this paper compared to previous work. Additionally, the authors are missing an essential comparison to previous work so we could better understand the contribution of this paper.\", \"minor_comments\": \"\\\"using a simpler convolutional architecture than is common\\\" -> should be rephrased.\\n\\n\\n[1] Schneider, Steffen, et al. \\\"wav2vec: Unsupervised Pre-training for Speech Recognition.\\\" arXiv preprint arXiv:1904.05862 (2019).\\n\\n[2] Adi, Yossi, et al. \\\"To Reverse the Gradient or Not: an Empirical Comparison of Adversarial and Multi-task Learning in Speech Recognition.\\\" ICASSP, 2019.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"\", \"overview\": \"This work uses contrastive predictive coding (CPC) to learn unsupervised speech representations on large amounts of unlabelled speech data and then uses the resulting features in downstream speech recognition systems. Unlabelled data is obtained from several sources (spanning different languages). Supervised systems are then built on top of these features and sample-efficiency and cross-domain robustness are investigated using English data sets. 
Finally, the approach is applied to four African languages.\", \"strengths\": \"Firstly, the paper is very clearly written and motivated. Secondly, a very relevant problem is tackled in a systematic way; compared to transcribed resources, unlabelled resources are much easier to collect and more widely available. This paper shows that these unlabelled resources can be of great benefit in downstream tasks and on languages where few resources are available. Thirdly, the experiments are carried out very systematically to support the claims of the paper: that bidirectional CPC-based feature learning improves sample efficiency (they show that much less labelled data is required to achieve the same performance as when using more substantial labelled data with conventional features), and that it improves robustness to out-of-domain data. They perform these experiments on both English and truly low-resource languages.\", \"weaknesses\": \"There are two main weaknesses to the paper. Firstly, as the authors note themselves, unsupervised CPC-based speech feature learning was developed and considered in previous work, and has also been subsequently investigated by others. The main technical contribution is therefore only in changing the unidirectional architecture to bidirectional. Secondly, the paper does a very poor job of linking this work with previous work. The work in [1] is very related. In Section 5, the ZeroSpeech challenges are mentioned briefly (with a single citation), but over the last decade there has been substantial work in this community specifically looking at exactly the main problem addressed in this paper (unsupervised speech representation learning). It would be of great benefit to situate this work within that context, and I would recommend that the paper at least mention [2] to [9].\", \"overall_assessment\": [\"Although technical novelty is limited (first weakness), I think there is novelty in the paper's systematic experimental investigation, including ASR experiments on truly low-resource languages. The conclusions of this work also have practical implications for the ASR community. The second weakness can be addressed by amending Section 5. I therefore assign a \\\"Weak Accept\\\" to the paper.\", \"Questions, suggestions, typos, grammar and style:\", \"In Figure 1, it might be useful to indicate the autoregressive nature of the context vectors by adding arrows in-between the $c$ blocks on the top left. (In the text it says an RNN is used.)\", \"p.7: \\\"... are suitable for driving recognition different languages ...\\\". A typo or grammatically incorrect sentence.\", \"p. 9: \\\"Tts without t\\\" -> \\\"TTS without T\\\"\", \"p. 9: \\\"african\\\" -> \\\"African\\\" (check all citations for capitalization)\"], \"missing_references\": \"1. https://arxiv.org/abs/1904.03240\\n2. A. Jansen et al. A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition. ICASSP, 2013.\\n3. Badino, L., Canevari, C., Fadiga, L., & Metta, G. (2014). An auto-encoder based approach to unsupervised learning of subword units. In ICASSP.\\n4. Versteegh, M., Anguera, X., Jansen, A. & Dupoux, E. (2016). The Zero Resource Speech Challenge 2015: Proposed Approaches and Results. In SLTU-2016 Procedia Computer Science, 81, (pp. 67-72).\\n5. Renshaw, D., et al. (2015). A Comparison of Neural Network Methods for Unsupervised Representation Learning on the Zero Resource Speech Challenge. Interspeech.\\n6. R. Thiolliere et al. 
A hybrid dynamic time warping-deep neural network architecture for unsupervised acoustic modeling. Interspeech, 2015.\\n7. https://arxiv.org/abs/1811.08284\\n8. https://arxiv.org/abs/1702.01360\\n9. https://arxiv.org/abs/1709.07902\"}" ] }
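As background for the negative-sampling discussion in this thread, here is a minimal single-utterance sketch of the InfoNCE objective underlying CPC, with negatives drawn uniformly from the same audio signal. It is deliberately simplified (unidirectional, one prediction offset, uniform sampling) and is not the submission's exact bidirectional model; all names are this sketch's own.

```python
import torch
import torch.nn.functional as F

def cpc_infonce_loss(z, c, W, k=1, num_negatives=10):
    """InfoNCE loss for one prediction offset k on a single utterance.

    z: (T, D) latent encoder outputs; c: (T, D) context-network outputs;
    W: (D, D) step-specific prediction matrix. Negatives are sampled
    uniformly from the same utterance, one of the schemes debated above."""
    T = z.size(0) - k
    pred = c[:T] @ W                                    # predicted z_{t+k}
    pos = z[k:k + T]                                    # true z_{t+k}
    neg_idx = torch.randint(0, z.size(0), (T, num_negatives))
    neg = z[neg_idx]                                    # (T, N, D)
    pos_logit = (pred * pos).sum(-1, keepdim=True)      # (T, 1)
    neg_logit = torch.einsum('td,tnd->tn', pred, neg)   # (T, N)
    logits = torch.cat([pos_logit, neg_logit], dim=1)
    labels = torch.zeros(T, dtype=torch.long)           # positive sits at index 0
    return F.cross_entropy(logits, labels)
```

As the original CPC paper notes and Mark Adams's comment stresses, the negative-sampling distribution (same utterance, same speaker, or whole dataset) materially changes what the learned representations capture.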
BJx-ZeSKDB
Compositional Embeddings: Joint Perception and Comparison of Class Label Sets
[ "Zeqian Li", "Jacob Whitehill" ]
We explore the idea of compositional set embeddings that can be used to infer not just a single class, but the set of classes associated with the input data (e.g., image, video, audio signal). This can be useful, for example, in multi-object detection in images, or multi-speaker diarization (one-shot learning) in audio. In particular, we devise and implement two novel models consisting of (1) an embedding function f trained jointly with a “composite” function g that computes set union operations between the classes encoded in two embedding vectors; and (2) embedding f trained jointly with a “query” function h that computes whether the classes encoded in one embedding subsume the classes encoded in another embedding. In contrast to prior work, these models must both perceive the classes associated with the input examples, and also encode the relationships between different class label sets. In experiments conducted on simulated data, OmniGlot, and COCO datasets, the proposed composite embedding models outperform baselines based on traditional embedding approaches.
[ "Embedding", "One-shot Learning", "Compositional Representation" ]
Reject
https://openreview.net/pdf?id=BJx-ZeSKDB
https://openreview.net/forum?id=BJx-ZeSKDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Eq55j-Xan0", "HJg9EJB2oB", "HylvuIEnjB", "S1lrgINnoH", "ByxlAVN2jB", "Hkx7Kvvk5S", "H1edfYN0YS", "B1lxwbMTtS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741180, 1573830449940, 1573828206672, 1573828077177, 1573827783830, 1571940218771, 1571862799583, 1571787095719 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2124/Area_Chair1" ], [ "ICLR.cc/2020/Conference/Paper2124/Authors" ], [ "ICLR.cc/2020/Conference/Paper2124/Authors" ], [ "ICLR.cc/2020/Conference/Paper2124/Authors" ], [ "ICLR.cc/2020/Conference/Paper2124/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2124/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2124/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors propose a new type of compositional embedding (with two proposed variants) for performing tasks that involve set relationships between examples (say, images) containing sets of classes (say, objects). The setting is new and the reviewers are mostly in agreement (after discussion and revision) that the approach is interesting and the results encouraging. There is some concern, however, that the task setup may be too contrived, and that in any real task there could be a more obvious baseline that would do better. For example, one task setup requires that examples be represented via embeddings, and no reference can be made to the original inputs; this is justified in a setting where space is a constraint, but the combination of this setting with the specific set query tasks considered seems quite rare. The paper may be an example of a hammer in search of a nail. The ideas are interesting and the paper is written well, and so the authors can hopefully refine the proposed class of problems toward more practical settings.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reviewers, any comments on the author response?\", \"comment\": \"Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to know. The authors still have one more day to respond/revise further. Thanks!\"}", "{\"title\": \"More experiments have been conducted to answer the reviewer's concern\", \"comment\": \"\\u201chow was the exact neural architecture for f in section 3.2 chosen? It seems contrived. Is it possible to do some ablation studies?\\u201d -- In our updated paper we compare models with different numbers of layers (g_Lin, g_Lin+FC, g_DNN). We also add some more details about training in the appendix.\"}", "{\"title\": \"Paper updated with new experiments and clarification\", \"comment\": \"\\u201cDoes the proposal mean each embedding eventually corresponds to multiple classes/subclasses ie., one can learn something on-trivial about each class from these embeddings that is different from class-specific embedding?\\u201d \\u2014 Yes, that is the goal. 
The embedding computed by f can encode an entire *set* of classes, not just 1 class (as with traditional embeddings).\\n\\n“How do you avoid the trivial solution problem here, i.e., the embeddings are going to be the average of the class-specific embeddings” — Based on the reviewer’s suggestion, we added several more comparisons in Experiments 2, 3, and 4. In particular, we compared our proposed method to (1) “Mean”: Simply computing the mean of multiple embeddings from an embedding function f trained just on singletons. (2) “f & g_mean”: Computing the mean of multiple embeddings when the embedding f was trained *with the knowledge* that its outputs would be averaged together. In summary: we found evidence that the proposed method, based on f and a non-linear g, can deliver better performance than either of the two “mean” baselines.\\n\\n“‘... x_a containing objects in another image ‘ -- this statement is not making sense, is it objects in x_a also present in another image x_b?” -- Yes, that is correct. Objects in x_a are present in x_b.\\n\\n“Simpler models (like Symm(a,b,.) i.e., just the first layer of what is being used now) should be evaluated instead to get a better understanding of what is going on.” -- Thanks for the suggestion. We have implemented several new variants of g (and of h) in our updated paper. In some cases, a simple g consisting of a single linear layer works best, whereas in other cases a deeper g works better. Please note that we also fixed a bug in the implementation of the bi-linear baseline from our original submission. The result (which is now called the g_Lin method) has been updated in the paper. In experiment 3, we also used a different random seed and the new results are slightly different from the previous version.\"}", "{\"title\": \"We would like to thank the reviewer for noting some missing points in our experiments. We updated the paper with some new experiments according to the suggestions and made some clarifications.\", \"comment\": \"“No explanation is given why a trivial solution cannot be used instead of the learnt functions.” — First, we want to point out that training a classifier (e.g., ResNet) using standard supervised learning is only possible if the training and testing classes are the same. For our OmniGlot and simulation studies (Experiments 2 and 1, respectively), they were different (one-shot learning). This is an important case, e.g., for speaker diarization. Second, based on the reviewer’s suggestion, we did conduct a follow-up analysis on COCO (Experiment 4, in which training and testing classes are indeed the same) -- please see the updated paper. Interestingly, the trained ResNet classifier (followed by a threshold of 0.5 and then a bit-comparison to answer label queries) did not perform very well compared to the proposed f & h method -- see Table 1(b). One possible reason is that ResNet is not optimized to answer queries about image pairs. Instead, it tries to encode each image into an n-bit string (for n classes). While this representation can account for all 2^n possible label sets, it may not be the most effective or efficient representation for the task, especially since some objects are very unlikely to co-occur with others. 
The proposed f & h embedding method can harness the co-occurrence structure to answer queries more accurately, whereas a ResNet trained to recognize all the individual classes does not harness it. Another reason may be that such a classifier trained on COCO has to overcome strong class imbalance (which is not trivial to fix on COCO), which the compositional embeddings do not (since they were trained inherently with 50%/50% balance).\\n\\n\\u201cAnalysis of how performance of the technique scales with the size of the set\\u201d \\u2014 We added a study to the appendix on the accuracy of f & h as a function of the label set size.\\n\\n\\u201cf is always different between that used with g and that used with h, is this the case?\\u201d \\u2014 f is the same architecture but has different parameters in g than h.\\n\\n\\u201cSimRef also doesn't do data augmentation but there's no explanation why\\u2026\\u201d \\u2014 Actually, SimRef uses the same augmentation as the proposed f & g method. Recall that all the methods receive reference examples of the *singleton* classes, which are created using random affine transformations of the original OminGlot data. The reviewer may be referring to the statement, \\u201cwithout shifting/scaling/rotation\\u201d in our paper. Please note that these transformations were part of the *rendering* function r. Since r is assumed to be hidden (from all the methods), we did not give oracle access of how r works to the SimRef method.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper describes a way to train functions that are able to represent the union of classes as well as to query if the classes in an image subsume the classes in another image. This is done throughly jointly training embedding functions, a set union function and a query function. The paper reads well.\\n\\nWhile the approach is reasonable, the experiments seem to be quite incomplete and no explanation is given why a trivial solution cannot be used instead of the learnt functions.\\n\\nThe paper argues for learning a set union function however much of the evaluation focuses on quite small sets of 2 or 3 items. On the evaluation that utilises larger sets, e.g. COCO, there isn't any analysis of how performance of the technique scales with the size of the set since that would be one of the defining characteristics of a set union function. The COCO experiment is also lacking in detail, for example, how many items are there in the positive and negative sets and how the test set is balanced. Finally, it seems that f, g and h could be trivial non-learnt functions. For example, f could be a function that maps an image to a binary representation of its classes (this could be a typical ResNet image classifier), g could be a function that does a binary OR of its two arguments and h could be a function that uses a binary AND and equality test on its two arguments. In this case, g and h don't need to be learnt at all. This may not be possible in the COCO experiment where the individual labels are not known but it seems quite unrealistic to have a dataset where only pairwise subset relationships are known.\\n\\nIt also seems that the f is always different between that used with g and that used with h, is this the case? 
SimRef also doesn't do data augmentation but there's no explanation why it is done for the proposed method and not for this baseline. The MF baseline in experiment 1 seems to be a straw man especially since the baselines in experiment 2 are much stronger.\\n\\n================================================================================\", \"update_after_rebuttal\": \"Thanks for answering my questions and performing the additional experiments with a ResNet baseline and performing an additional analysis based on the number of subclasses in figure 5. I think these provide a substantially better analysis of the algorithm so I've increased my score correspondingly. For the final paper, I think it would be good to add TradEmb/ResNet to figure 5 as well to understand how those methods scale worse/better with the number of subclasses.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a joint/compositional embedding procedure where a single instance can be mapped/embedded to multiple classes while preserving the class-specific information in the embedded representations. The authors look at class union and class query criteria for the composite embeddings. The proposed approach is evaluated appropriately. There are several issues with the work.\\n\\nDoes the proposal mean each embedding eventually corresponds to multiple classes/subclasses, i.e., one can learn something non-trivial about each class from these embeddings that is different from a class-specific embedding? How do you avoid the trivial solution problem here, i.e., the embeddings are going to be the average of the class-specific embeddings --- as we see in the evaluations this is in fact happening (figure 1b)? Also, is this behaviour desired, i.e., tending towards the mean? \\n\\nAnd continuing along these lines, a clear choice of baseline for the proposal is to choose mean embeddings, i.e., the mean of independent embeddings? Or is this not appropriate? Why is ML the best baseline? We can use the probability map (the input to the final softmax) instead as the embedding as well, correct?\\n\\n\\\"... x_a containing objects in another image \\\" -- this statement is not making sense, is it objects in x_a also present in another image x_b?\\n\\nIt is rather difficult to interpret the usefulness of g(.) when it is a nonlinear model like a neural network. Simpler models (like Symm(a,b,.) i.e., just the first layer of what is being used now) should be evaluated instead to get a better understanding of what is going on!\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n=======\\nThis paper proposes compositional embeddings i.e. embeddings that can be used to infer multiple classes from the data. In particular, the paper deals with two types of composite functions for embeddings, one that computes the union of the different classes represented by each embedding vector, and the other where the class of one of the embeddings is subsumed by the class of the other embedding. The actual composition functions are parameterized by neural networks whose parameters are learned from data. 
Results on synthetic as well as several real-world datasets highlight the superiority of the learned composite embeddings.\", \"comments\": \"==========\\n1) This paper presents a welcome contribution to the saturated literature on embeddings. The whole idea of compositionality and its application to speaker diarization and multi-object detection is novel. \\n\\n2) The execution of the idea is also excellent and thorough. Further, the paper is very well written and puts itself nicely in the context of previous work. I think this should inspire future work on other kinds of composite functions other than the two considered here. \\n\\n3) The results on both the synthetic and real-world OmniGlot and COCO datasets are impressive and mostly well executed and show significant improvement over the \\\"most frequent\\\" baseline. \\n\\n\\n4) My only concern regarding the paper is w.r.t. some arbitrary decisions made in the experiments e.g. how was the exact neural architecture for f in section 3.2 chosen? It seems contrived. Is it possible to do some ablation studies? Also, I think it would be nice to provide some more details regarding the neural network training in Section 3.1.\"}" ] }
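To make the f/g/h decomposition debated above concrete, the sketch below jointly parameterizes an embedding function f, a set-union function g, and a subsumption query h. The layer sizes and the explicit symmetrization of g are choices made for this sketch, not the paper's exact g_Lin/g_DNN variants.

```python
import torch
import torch.nn as nn

class CompositionalEmbedder(nn.Module):
    """f embeds an example; g maps two embeddings to the embedding of the
    union of their label sets; h scores whether the classes encoded in one
    embedding subsume those encoded in another."""
    def __init__(self, in_dim, emb_dim=64, hidden=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, emb_dim))
        self.g = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, emb_dim))
        self.h = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def union(self, ea, eb):
        # Average over both argument orders so the union is symmetric.
        return 0.5 * (self.g(torch.cat([ea, eb], dim=-1)) +
                      self.g(torch.cat([eb, ea], dim=-1)))

    def subsumes(self, ea, eb):
        # Probability that ea's label set contains eb's label set.
        return torch.sigmoid(self.h(torch.cat([ea, eb], dim=-1)))
```

Training would tie the pieces together: push union(f(x_a), f(x_b)) toward f of an example that carries both label sets, and fit subsumes with binary cross-entropy on subset and non-subset pairs, which is also where the reviewers' "mean of embeddings" and "binary OR" baselines slot in for comparison.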
HklxbgBKvr
Model-based reinforcement learning for biological sequence design
[ "Christof Angermueller", "David Dohan", "David Belanger", "Ramya Deshpande", "Kevin Murphy", "Lucy Colwell" ]
The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low-round setting due to the need for labor-intensive wet lab evaluations. In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design. RL provides a flexible framework for optimizing generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered. We propose a model-based variant of PPO, DyNA-PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds. To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA-PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned.
[ "reinforcement learning", "blackbox optimization", "molecule design" ]
Accept (Poster)
https://openreview.net/pdf?id=HklxbgBKvr
https://openreview.net/forum?id=HklxbgBKvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "_4uNzD_Rnj", "S1xqbYYhjB", "SJxKdDt3oS", "ryeOxDK3or", "BJxXnrF3iH", "S1gxUQJD9B", "ryxDkJqCYH", "rylwJMHTKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741151, 1573849345779, 1573848944579, 1573848815977, 1573848491111, 1572430664038, 1571884766564, 1571799519441 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2123/Authors" ], [ "ICLR.cc/2020/Conference/Paper2123/Authors" ], [ "ICLR.cc/2020/Conference/Paper2123/Authors" ], [ "ICLR.cc/2020/Conference/Paper2123/Authors" ], [ "ICLR.cc/2020/Conference/Paper2123/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2123/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2123/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a model based proximal policy optimization reinforcement learning algorithm for designing biological sequences. The policy of for a new round is trained on data generated by a simulator. The paper presents empirical results on designing sequences for transcription factor binding sites, antimicrobial proteins, and Ising model protein structures.\\n\\nTwo of the reviewers are happy to accept the paper, and the third reviewer was not confident. The paper has improved significantly during the discussion period, and the authors have updated the approach as well as improved the presented results in response to comments raised by the reviewers. This is a good example of how an open review process with a long discussion period can improve the quality of accepted papers.\\n\\nA new method, several nice applications, based on a combination of two ideas (simulating a model to train a policy RL method, and discrete space search as RL). This is a good addition to the ICLR literature.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Included additional baseline and ablation studies\", \"comment\": \"> Penalizing the reward if the same sequence is seen multiple times seems decent and works well compared to the entropy regularization but it is still questionable if it is the best solution for biological sequences. Showing results when the reward is penalized using the hamming loss or the biological similarity with previous sequences could go a long way to convince of your choice.\\n\\nThanks for your feedback. To motivate the choice of penalizing duplicate sequences, we included an additional analysis that compares penalizing sequences based on their frequency to penalizing sequences based on the distance to the nearest neighbor and entropy regularization (see figure 9). The results show that scaling the reward by the distance also increases the hamming distance and uniqueness of sequences. However, it is less effective in finding all optima and does not provide a fine-grained control of diversity. \\n\\nWe further generalized our described exploration bonus to not only penalize exact matches, but take all past sequences within a specified distance radius into account. We show in figure 11 that including sequences within a radius greater than zero improves exploration over only penalizing exact duplicates (radius 0). This is because exact duplicates are unlikely in case of high-dimensional problems such as the AMP problem. 
\\n\\n=============================================\\n> However, I would like to see other RL algorithms that were shown in the appendix for all those comparisons.\\n\\nFigure 7 now compares PPO with REINFORCE, DQN, and categorical DQN on all three optimization problems and shows that PPO performs best. PPO has better exploration properties than REINFORCE, which tends to converge too soon to a local optimum. The poor performance of DQN and CatDQN can be explained by the sparse reward (the reward is only non-zero at the terminal state), such that the Bellman error and training loss for updating the Q network are zero in most states. We also found the performance of DQN and CatDQN to be sensitive to the choice of the epsilon greedy rate and Boltzmann temperature for trading off exploration and exploitation and increasing diversity.\\n\\n=============================================\\n> Including MCMC methods in the experiments will also allow us to see how RL methods compare to the sota in bioinformatics.\\n\\nThanks for the suggestion. We expect that MCMC methods are sota on optimization problems for which we know the functional form of the objective function, but not where we are performing black-box optimization. Nevertheless, we have added MCMC and simulated annealing to our analysis. We show in figure 8 that it performs considerably worse than regularized evolution on all problems. The details of our MCMC and simulated annealing approaches are provided in appendix A.2. \\n\\n=============================================\\n> For the Ising dataset, you mention that it is a contribution but do not provide enough details regarding it to allow further research with it.\\n\\nWe have updated Sec B.1 to explain the details of our new 20-state Ising model and to provide additional implementation details. Please let us know if you have more questions. It would be interesting in future work to extend these Ising models to contain, for example, higher order terms. \\n\\n=============================================\\n> In a real-life biological setting, the data obtained at each batch will most likely differ both in terms of sequences and in terms of labels. How do all your methods (and the others) perform with a changing distribution of data from one batch to another? Ex (e.g., 0 < y batch 1 < 100, 100 <= y batch 2 < 500, etc.)?\\n\\nWe agree with you that this is a challenging ML problem because the distribution over rewards is changing batch to batch. As we optimize, the rewards generally get higher. This is exactly why we adaptively perform model selection (both model type and hyper-parameters) at each round in order to make sure that we adjust to properties of each batch. Otherwise, it is difficult to find a regressor that is suitable across rounds of experiments.\\n\\n=============================================\\n> You tried two values for R^2 in one experiment (one positive and one negative). What happens for any other positive values (e.g., 0.1, 0.2, 0.3, etc.)?\\n\\nWe have added figure 14, which shows the sensitivity of DyNA PPO to the choice of \\tau on all three design problems. Choosing \\tau between 0.4 and 0.5 is best on all problems. We explain the effect of choosing \\tau too small or large in the figure caption. See also our response to reviewer 4.\"}", "{\"title\": \"Included additional baselines and discussed related work\", \"comment\": \"Thank you for suggesting the important related work (Ingraham et al., Sabban et al., and VAE-based generative models). 
We have added a discussion of them in the related work section. Below are some additional details explaining the relationship between these works and our paper.\\n\\nA VAE as used in PepCVAE or for generating SMILES strings is not a sequence design method, but a specific generative model that can be used within a variety of design approaches. The DbAs method that appears in all three sections of our experiments employs a VAE. We have updated the text to clarify this. We have added an experiment analyzing the performance of DbAS with a VAE vs. LSTM generative model in figure 8. The LSTM performs worse than DbAs VAE on high-dimensional problems and only slightly better on TF Bind.\\n\\nIngraham et al. addresses the problem of inverse protein folding, i.e. generating a protein sequence that folds into a given protein structure. Instead, our paper addresses multi-round optimization of arbitrary blackbox functions, which requires methods that are sample efficient and generate diverse sequences--two challenges that are not addressed in Ingraham et al. Ingraham et al. considers graph-conditional sequence generation and one important contribution of their paper is the architecture for encoding protein graph structures. We consider unconditional optimization and do not address the problem of encoding protein graph structures. In many biological sequence design problems, the structure of interest is unknown, so a generative model that conditions on structure would not be applicable.\\n\\nSabban et al. describes a method for generating protein graph structures, whereas our paper is about generating sequences. Generative models for protein structures are a distinct research challenge, which we do not address. It is not clear how such a model could be directly used in multi-round biological sequence design.\"}", "{\"title\": \"Addressed all concerns\", \"comment\": \"Thank you for your careful analysis and feedback. We are adding more experiments and benchmarks in response to your comments.\\n\\n=============================================\\n> The performance of the model seems similar to PPO in the large state-spaces (section 4.3), which somehow is disappointing.\\n\\nWe agree, and this motivated us to do additional research on this problem and to make key updates to the paper. We found that the model error on this problem increased rapidly during model-based training in the first few rounds (figure 12) and that the policy was hence trained with incorrect rewards, which decreases the overall optimization performance (figure 13). We also found that the model uncertainty (quantified by the standard deviation of the ensemble predictions) is strongly correlated with the model error (figure 12), and can therefore be used as a proxy for the reliability of the model. We do not know the model error at policy optimization time, but we do know its uncertainty. We therefore extended DyNA PPO to not train the policy with the model for a fixed number of rounds, but to stop model-based training as soon as the model standard deviation increases by a certain factor, i.e., the model starts to become unreliable. This prevents training the policy with incorrect rewards and improves optimization performance.\\n\\nWe could further improve the performance by generalizing our reward function to penalize not only exact duplicates, but also proposed sequences according to the number of previously proposed sequences within a certain distance radius around them. 
We describe the extended density-based exploration bonus in section 2.4 and compare it with alternative approaches in figures 9-11.\\n\\nAs a result, DyNA PPO now also performs best on the AMP problem (figure 6, left).\\n\\n=============================================\\n> The performance of the model seems very sensitive to the choice of \\tau (Figure 6 right), which is set to 0.5, but it is not mentioned how this parameter is chosen (or at least I couldn’t find it) and how much the performance of the model in the other experiments is affected by the choice of this parameter. \\n\\nWe agree that robustness to the choice of \\tau is important. We have updated section 2.4 to clarify that we treat \\tau as a tunable hyper-parameter. We have also added figure 14, which shows that performance is relatively insensitive to the choice of \\tau, in particular if most models that are considered during model selection are accurate. In the case of the protein contact map Ising problem, for example, the cross-validation score of most models is above 0.7, i.e. choosing any threshold below 0.7 does not decrease model performance. The performance is more sensitive to the choice of \\tau if some models that are considered during model selection are inaccurate, for example in the case of the AMP problem. In this case, inaccurate models will be selected when choosing a small \\tau, and the resulting ensemble model will hence be less accurate. DyNA PPO reduces to PPO when choosing \\tau close to 1.0 since models almost never have a cross-validation score of 1.0 and are hence not used for model-based optimization.\\n\\n=============================================\\n> The paper should discuss relevant works such as the one above. \\n\\nThanks for the pointers. We added additional citations and extended our discussion of existing model-based approaches in the related work section.\"}", "{\"title\": \"Response to the reviewers\", \"comment\": \"We would like to thank all reviewers for evaluating our manuscript. We have tried to address all concerns in a proper way and believe that our paper has improved considerably. In summary, we made the following changes:\\n\\n1. We generalized our proposed diversity-promoting reward function to take all neighbors within a specified distance radius into account instead of only exact duplicates, which we show improves exploration on higher-dimensional problems such as the AMP problem. See figures 9-11 and section 2.4.\\n\\n2. We extended DyNA PPO to stop model-based optimization as soon as the model uncertainty increases by a certain factor instead of training the policy for a fixed number of rounds. This is motivated by our observation that the model uncertainty is strongly correlated with the model error. See figures 12-13 and section 2.3.\\n\\n3. DyNA PPO now also performs best on the AMP problem due to points 1 and 2 (see figure 6).\\n\\n4. We changed the vocabulary size of the proposed protein contact Ising model from two to twenty, the number of amino acids, to make it a more realistic protein design problem. We also discuss future research and why this problem is of value for the protein design community. See sections 4.1 and B.1.\\n\\n5. We included MCMC, simulated annealing, and DbAs with an LSTM instead of a VAE as the generative model. See section A.2 and figure 8.\\n\\nWe responded in detail to all comments. We would be happy to make further corrections if necessary. 
Overall, we believe that our paper is interesting for both applied computational biologists and machine learning researchers. We hope that our proposed optimization problems and methods will inspire future research on blackbox optimization, generative modeling, and uncertainty quantification, with the ultimate goal of improving drug design or making manufacturing processes more sustainable.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this work the authors propose a framework for combinatorial optimisation problems under the condition that measurements are expensive. The basic idea is to make an approximation of the reward function and then train the policy using the simulated environment based on the approximated reward function. The approach is applied to a set of biological tasks, where the model performs well compared to the baselines.\\n\\nThe idea of learning models of the environment (or reward) and simulating the model to train the policy is not novel (e.g., https://arxiv.org/pdf/1903.00374.pdf). Similarly, in terms of formulating the discrete search problem as a reinforcement-learning problem, again there are similar works in the past, which are cited in the paper, but the combination of these two is novel to my knowledge; having said this, the paper should discuss relevant works such as the one above. \\n\\nThe experiments seem convincing to me overall; however, I have the following concerns: \\n\\n- The performance of the model seems similar to PPO in the large state-spaces (section 4.3), which somehow is disappointing.\\n\\n- The performance of the model seems very sensitive to the choice of \\tau (Figure 6 right), which is set to 0.5, but it is not mentioned how this parameter is chosen (or at least I couldn’t find it) and how much the performance of the model in the other experiments is affected by the choice of this parameter.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Designing new discrete sequences satisfying desirable properties is an important problem in molecular biology. This is a difficult combinatorial optimization problem because of the difficulty in optimizing over a combinatorially large space. The authors propose an RL-based framework for this black box optimization problem.\\n\\nThe paper is well written, but I have questions about the efficacy of the method, particularly because I think some of these results are against weak baselines. For example, the authors don't compare against many better performing protein design methods (See: Ingraham et al., Generative Models for Graph-Based Protein Design, Sabban et al., RamaNet: Computational De Novo Protein Design using a Long Short-Term Memory Generative Adversarial Neural Network). VAE based methods have worked well for designing sequences like SMILES strings, but the authors dismiss them claiming that they are better modelled as molecular graphs. 
While it could be true that molecules are better modeled as molecular graphs, it is not clear why methods that have worked well in sequence-based modeling using SMILES strings will not work for protein design. For AMP design, again they compare with a weak baseline and don't compare with VAE based methods (like for example: Das et al. PepCVAE: Semi-Supervised Targeted Design of Antimicrobial Peptide Sequences)\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Contribution\\nThis paper applies a model-based RL algorithm, DyNA-PPO, for designing biological sequences. By being model-based, this algorithm is sample efficient compared to model-free RL algorithms. This advantage is attractive and important in the context of biological sequence design since the design is constrained to be done in the large batch / low round settings. To further improve model efficiency, the authors reduce learning bias by quantifying the reliability and automatically selecting models of appropriate complexity via cross validation. To encourage diversity in the target distribution they also penalize the reward using a visitation-based strategy.\\n\\n\\nClarity\\nOverall, the paper is well written, well motivated and well structured. The technical content is also very clear and good.\\n\\n\\nNovelty\\nThe novelty in this work seems to be more on the applicative side (RL to optimizing DNA and protein sequences) than the method itself. I agree with the authors that most existing optimization methods are ill equipped for the large batch / low round settings; as sample efficiency becomes critically important as the number of rounds gets lower, their method is a good solution in such settings. \\n\\nThe technical novelty seems incremental as cross-validating and using a set of models under particular performance constraints does not constitute a novel contribution. Penalizing the reward if the same sequence is seen multiple times seems decent and works well compared to the entropy regularization but it is still questionable if it is the best solution for biological sequences. Showing results when the reward is penalized using the Hamming loss or the biological similarity with previous sequences could go a long way to convince us of your choice.\", \"experiments\": \"The experiments are overall well presented and seem robust given the number of replicates that was made each time.\", \"analyzing_the_model_performances_across_different_metrics\": \"diversity, fraction of optimals, cumulative maximum helped to understand the method and its advantages.\\n\\nHowever, I would like to see other RL algorithms that were shown in the appendix for all those comparisons.\\nIncluding MCMC methods in the experiments will also allow us to see how RL methods compare to the sota in bioinformatics.\\nFor the Ising dataset, you mention that it is a contribution but do not provide enough details regarding it to allow further research with it.\\n\\nIn a real-life biological setting, the data obtained at each batch will most likely differ both in terms of sequences and in terms of labels. How do all your methods (and the others) perform with a changing distribution of data from one batch to another? Ex (e.g., 0 < y batch 1 < 100, 100 <= y batch 2 < 500, etc.)?\\n\\nYou tried two values for R^2 in one experiment (one positive and one negative). 
What happens for any other positive values (e.g., 0.1, 0.2, 0.3, etc.)?\n\n\nPoints of improvement\nGiven the applicative nature of the paper and the proposed method, there are a few small experiments that could have been done to strengthen the manuscript (see questions and comments above).\", \"preliminary_rating\": [\"weak accept *\"]}" ] }
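To make the visitation-based penalty and the reviewer's suggested hamming-distance variant concrete, here is a minimal Python sketch. It assumes a generic black-box `fitness` oracle, and the `penalty_weight` value and all names are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def penalized_reward(seq, fitness, visits, history, penalty_weight=0.1, mode="count"):
    """Penalize the oracle reward to encourage diverse proposals.

    mode="count"   : subtract a penalty proportional to how often `seq` was proposed
                     (the visitation-based strategy described in the review).
    mode="hamming" : subtract a penalty based on similarity to previously seen
                     sequences, the variant the review suggests for biological data.
    """
    base = fitness(seq)
    if mode == "count":
        return base - penalty_weight * visits[seq]
    if history:  # similarity = fraction of matching positions w.r.t. the closest previous sequence
        closest = max((len(seq) - hamming(seq, h)) / len(seq) for h in history)
        return base - penalty_weight * closest
    return base

# Usage: keep visits = Counter() and history = [] up to date after each proposed batch.
```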
rklx-gSYPS
Learning to Optimize via Dual space Preconditioning
[ "Sélim Chraibi", "Adil Salim", "Samuel Horváth", "Filip Hanzely", "Peter Richtárik" ]
Preconditioning a minimization algorithm improves its convergence and can lead to a minimizer in one iteration in some extreme cases. There is currently no analytical way of finding a suitable preconditioner. We present a general methodology for learning the preconditioner and show that it can lead to dramatic speed-ups over standard optimization techniques.
[ "Optimization", "meta-learning" ]
Reject
https://openreview.net/pdf?id=rklx-gSYPS
https://openreview.net/forum?id=rklx-gSYPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "bD1zs1XOHo", "BJlFUg-iiH", "rkgBTk-sjH", "rygBT9ljir", "Bkg2rQQZoH", "SygFLpeAYB", "HJetjtg0YB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741121, 1573748817276, 1573748669514, 1573747388662, 1573102403755, 1571847504896, 1571846560965 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2122/Authors" ], [ "ICLR.cc/2020/Conference/Paper2122/Authors" ], [ "ICLR.cc/2020/Conference/Paper2122/Authors" ], [ "ICLR.cc/2020/Conference/Paper2122/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2122/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2122/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Thanks for the detailed replies to the reviewers, which significantly helped us understand your paper better.\\nHowever, after all, we decided not to accept your paper due to weak justification and limited experimental validation. Writing should also be improved significantly. We hope that the feedback from the reviewers help you improve your paper for potential future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for your careful analysis and feedback. We are adding more experiments and benchmarks in response to your comments.\", \"q\": \"The proposed method requires the learning of neural networks, which will be computationally demanding. Please report the overall computational cost of the optimization algorithm in numerical experiments.\", \"a\": \"We have reported the number of epochs needed to train our neural network as well as the structure of the neural network. For the power function, for instance, we trained a neural network of size $256\\\\times 128$ on a data-set of $1000$ samples during $100$ epochs.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your careful analysis and feedback. We are adding more experiments and benchmarks in response to your comments.\", \"q\": \"Section 4: \\u201cThe step-size is set to 1\\\". It seems that the optimizer has been overfit and engineered to work on this specific problem.\", \"a\": \"This choice is justified by theory: in the theoretical observation mentioned above, the perfect preconditioner $\\\\nabla f^*$ is used with step-size 1.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your careful analysis and feedback. We are adding more experiments and benchmarks in response to your comments.\", \"q\": \"The function of x and the gradient is complex, it is difficult to predict the relationship by using a simple network.\", \"a\": \"Neural Networks might be our best shot at approaching \\u2207f* since they\\u2019re the most universal function approximators.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes an optimization method with the preconditioning in the framework of supervised learning. The ideal preconditioning is given by the Fenchel conjugate of the optimization function. This paper uses a supervised scenario to find the ideal preconditioning. 
The authors then point out the importance of the sampling distribution and propose a sampling scheme using the uniform distribution on the space of gradients. The samples are used to train neural networks that imitate the mapping of the Fenchel conjugate. The trained network is incorporated into the Dual space Preconditioned Gradient Descent (DPGD). Some numerical experiments show the effectiveness of the proposed method compared to the standard gradient descent method.\\n\\nThe authors propose an interesting approach to preconditioning in optimization problems. However, the paper is not well written. In particular, the optimization algorithm is not clearly described. Numerical experiments with some toy problems are not very convincing in showing the benefit of the proposed method. Though this paper may have some interesting ideas, more intensive analysis would be required.\", \"other_comments\": [\"The optimization algorithm is not explicitly described. Is the neural network trained in a batch learning manner? Is it possible to learn the preconditioner in an online manner?\", \"It is unclear whether the ideal distribution \\\\mu over the domain D(f) presented in Proposition 3 is computationally tractable.\", \"In Proposition 3: I think that the uniform property of the sampling does not directly imply optimality in the sense of learning accuracy. The authors need to investigate in more detail the relationship between the distribution mu and the prediction accuracy of the Fenchel conjugate.\", \"The proposed method requires the learning of neural networks, which will be computationally demanding. Please report the overall computational cost of the optimization algorithm in numerical experiments.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper attempts to learn a preconditioner for optimization, specifically for the Dual space preconditioned descent (DPGD).\", \"The techniques used to learn the preconditioner are heuristic, not scalable, and presented without justification or ablation studies.\", \"It does not compare against \\\"standard\\\" optimization techniques that construct data-driven preconditioners such as Adam or Adagrad, or even to Newton or natural gradient methods that use the Hessian or the Fisher information matrix as preconditioners. It shows ad-hoc synthetic experiments in dimensions 1 and 50. This is clearly not enough.\"], \"detailed_review_below\": [\"Section 2: Please explain why Legendre functions are useful in ML. For Assumptions 1 and 2, it needs to be explained why these hold for a given f*. What constraints do you need on f? What functions satisfy these? Please explain this explicitly.\", \"Section 3: What is the number of points x_i needed in high dimensions to learn? Is it even possible to scale up this method to high dimensions?\", \"Constructing \\\\mu requires computing the determinant of the Jacobian. What is the computational complexity? 
Moreover, it seems that we need access to \\nabla f(x) for all x in D(f)?\", \"Please state all the assumptions in the beginning rather than introducing one at a time in the propositions.\", \"Remark 1: It is unclear whether the cost of inverting the Hessian matrix is higher than that of the procedure proposed in this paper.\", \"Section 3.5: Please explain the advantage of this learned optimizer compared to other methods. Note that there is literature on non-smooth optimization, and methods like sub-gradient descent can be used in this case.\", \"What is the justification for the selection of the loss function and log-rescaling?\", \"The result of Lemma 1 is standard. Please acknowledge this.\", \"Section 4: \\\"The step-size is set to 1\\\". It seems that the optimizer has been overfit and engineered to work on this specific problem. Either these decisions need to be justified, there needs to be an ablation study, or there needs to be a larger set of experiments.\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Update after rebuttal period]\\nI have read the response; my concerns from the original review were not addressed satisfactorily. Therefore, I keep my initial scores.\\n\\n\\n[Original reviews]\\nFirstly, the motivation of the proposed method is not convincing to me. The authors want to propose a general methodology for learning the preconditioner in a supervised learning setting. However, in practice x follows a complex distribution, and it is difficult to model the map between the gradient and x. This method proposes log-scaling, but values then need to be stored with a precision of approximately 15 decimal places, and the regressed model will be a piecewise constant function, which is very computationally time-consuming.\\n\\nSecondly, the experimental results are not sufficient for evaluation. This paper shows two experimental results, covering the power function and the logistic function. However, the whole process of the dual-space preconditioning method, including the model used to compute the preconditioner, is not clearly described. And without quantitative results, the claimed \\u201cdramatic speed-ups\\u201d of these methods are not convincing, because the training process itself is offline and time-consuming. On the other hand, because convex objective functions take different forms, the network must be trained for each specific convex objective function. In my opinion, this is not a general method for achieving dramatic speed-ups.\\n\\nFinally, the function relating x and the gradient is complex; it is difficult to predict the relationship using a simple network.\"}" ] }
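Since several of the reviews above ask how the learned preconditioner is actually used, here is a minimal NumPy sketch of a dual-space preconditioned descent step on a toy power function. It substitutes the closed-form conjugate gradient map for the learned network, which makes the one-iteration convergence claimed for the ideal preconditioner visible; the function choice and step sizes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Toy objective f(x) = x**4 / 4, so grad_f(x) = x**3.
# Its convex conjugate satisfies grad_f_star(y) = cbrt(y), the "ideal" preconditioner.
grad_f = lambda x: x ** 3
grad_f_star = lambda y: np.cbrt(y)  # in the paper, this map is approximated by a trained network

def preconditioned_descent(x0, precond, step=1.0, iters=10):
    x = x0
    for _ in range(iters):
        x = x - step * precond(grad_f(x))  # precondition the gradient in the dual space
    return x

print(preconditioned_descent(5.0, grad_f_star, step=1.0, iters=1))    # 0.0: the minimizer in one step
print(preconditioned_descent(5.0, lambda g: g, step=0.01, iters=10))  # plain gradient descent, far slower
```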
B1xybgSKwB
Self-Attentional Credit Assignment for Transfer in Reinforcement Learning
[ "Johan Ferret", "Raphaël Marinier", "Matthieu Geist", "Olivier Pietquin" ]
The ability to transfer knowledge to novel environments and tasks is a sensible desideratum for general learning agents. Despite its apparent promise, transfer in RL is still an open and little-explored research area. In this paper, we take a brand-new perspective on transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient. Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and it exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.
[ "reinforcement learning", "transfer learning", "credit assignment" ]
Reject
https://openreview.net/pdf?id=B1xybgSKwB
https://openreview.net/forum?id=B1xybgSKwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "S39Mmu6hZ", "Hkg4OyMnsr", "rkePrJMhiS", "ryg1CRZ3oH", "BJlk_RW3sr", "SJgA7RW3jB", "S1gkk2lt9H", "Skx8qjXJcr", "H1eNjMcpKH", "BJeDbT_1_r", "r1g_Ve11uH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798741092, 1573818220514, 1573818175312, 1573818054674, 1573817958879, 1573817894499, 1572568023410, 1571924877803, 1571820187549, 1569848574879, 1569808432137 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2121/Authors" ], [ "ICLR.cc/2020/Conference/Paper2121/Authors" ], [ "ICLR.cc/2020/Conference/Paper2121/Authors" ], [ "ICLR.cc/2020/Conference/Paper2121/Authors" ], [ "ICLR.cc/2020/Conference/Paper2121/Authors" ], [ "ICLR.cc/2020/Conference/Paper2121/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2121/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2121/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2121/Authors" ], [ "~Su_Young_Lee1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper introduces a novel approach to transfer learning in RL based on credit assignment. The reviewers had quite diverse opinions on this paper. The strength of the paper is that it introduces an interesting new direction for transfer learning in RL. However, there are some questions regarding design choices and whether the experiments sufficiently validate the idea (i.e., the sensitivity to hyperparameters is a question that is not sufficiently addressed). Overall, this research has great potential. However, a more extensive empirical study is necessary before it can be accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Re: Official Blind Review #3 (2/2)\", \"comment\": \"5. (1.) Fig.5 reports the discounted return (with gamma = 0.9) while Fig.6 reports the undiscounted return, which explains the different asymptotical values.\\n\\n(2.) Given the properties of reward shaping, we think that in the worst case scenario SECRET should only slow down the learning process of the agent. Also, please notice that we use the attention provided by the reward predictor only if the non-negative reward is correctly predicted (this was not clear initially, we clarified it). This way, we ignore attention that lead to badly predicted reward, which should mitigate the raised problem. \\n\\nAs an experimental illustration, we run SECRET with increasing size of the window around the agent in Triggers (as suggested by R2). Increasing the size leads to an inaccurate matching between attention and triggers, as reported in Appendix B.1. We trained agents using the shaped reward obtained from these reward predictors. Results vary depending on the size of the window, but SECRET still helps compared to the vanilla RL (no reward shaping using attention). Related results are provided in the new Appendix B.1. \\n\\n(3.) We propose a method using credit assignment in an offline manner to achieve transfer in RL. SECRET learns representations for credit assignment that are kept separate from those learned in the RL task, which we think is key for transferability. Existing credit assignment methods work online and in combination with an RL agent, which makes them hardly comparable to SECRET. Studying their transferability is outside the scope of the paper (please also see our reply to point 1 of R2, which is related). 
However, we do compare the performance of our method to a baseline, namely a transfer baseline based on the transfer of weights.\\n \\nMinor comments\\nWT refers to weight transfer (in which we transfer the weights of an RL agent trained over episodes drawn from the source domain); this is now clarified.\\nWe took the remaining comments into account and modified the draft accordingly.\\n\\n[1] Arjona-Medina J., Gillhofer M., Widrich M., Unterthiner T., Brandstetter J., Hochreiter S. - RUDDER: Return Decomposition for Delayed Rewards. NeurIPS 2019.\"}", "{\"title\": \"Re: Official Blind Review #3 (1/2)\", \"comment\": \"1. We agree with the line of reasoning of the reviewer and thank them for their careful checking. There is indeed a mistake in the equation defining Z. The intended effect of the causal mask is either to conserve values or to assign a negative number with a large magnitude so that the future values are set to zero after the softmax operation. In our experiments, this is exactly what is being done: the causal mask multiplies elementwise the key-query product matrix, and the result of that operation is added to the negation of the mask multiplied by a negative constant with a large magnitude (an illustrative sketch of this masking is given below, after the reviews). Thus, the problem only lay in the equation mentioned, which did not correspond to the reality of the experiments. We updated the equation in the draft accordingly.\\n\\n2. The values of the weights w(c) are defined in Appendix A.1. The rationale behind using class weights in the sequential cross-entropy is that the reward prediction task is highly imbalanced in the tasks considered. Indeed, agents experience zero rewards most of the time. Hence, we opted for a change in the loss that puts a greater cost on the misclassification of nonzero rewards. \\nIn the new Appendix B.2 we included an additional experiment in which we measure the impact of the class weight values on SECRET in a Triggers scenario. Briefly, we observe that not compensating for class imbalance degrades results.\\n\\n\\n3. A good way of reducing the detrimental effect of delayed rewards is to assign the reward directly to the responsible actions (as demonstrated in [1]), which is what we aim to do with SECRET. Constructing the potential function from the attention mechanism seems natural to us since we use attention to identify the contributions to future reward. There are certainly other options, but this is the one we chose to explore, and it is supported by our experimental analysis. Additionally, in theory, since we use reward shaping, it does not modify the set of optimal policies, and agents will still explore as they learn.\\nAnother approach would be to use a stochastic estimate to modify the reward function over a single trajectory, so that we do not have to generate trajectories in the target domain and so that we do not use a potential function. We leave the investigation of that option for future work. \\n\\n4. By this sentence (\\\"given an underlying environment state and action, their contribution to future rewards is not fundamentally altered\\\") we mean that if the \\u201cstructure\\u201d of the MDP is preserved, then the credit to be assigned is also preserved. For instance, in the Triggers example, whatever the environment layout or size, the action of activating a trigger contributes the same way to future reward. 
Of course, depending on the layout, activating a trigger from a given state could mean going up or left, but we believe that the network learns this higher-level semantics of \\\"activating a trigger\\\" from episodes in the source domain. Indeed, the representations of actions and observations get mixed in the proposed architecture, making it realistic to learn a combined abstraction.\\n\\nAs for the second part of the question, we agree that the formulation is confusing. What we meant was to consider changes to the reward function that preserve the set of optimal policies (e.g., in Triggers, adding a negative reward to the trigger, but not one so low that the optimal policy no longer consists in exiting the room), and this is potentially more general than transformations that preserve the ranking of the individual values the reward function can take. We changed our formulation.\"}", "{\"title\": \"Re: Official Blind Review #2 (1/1)\", \"comment\": \"1. We believe that the transferability of SECRET is due to two major aspects: 1) we keep representations for credit assignment separate from those for the RL task, and 2) we use a self-attentional architecture, which was shown to transfer in settings other than RL.\\nBetter credit assignment is desirable and should arguably lead to better transfer results in the case of SECRET. Nevertheless, this is not necessarily true for other available credit assignment methods because they are designed for the online setting and intricately coupled with an RL agent. \\nThe focus of the paper being on transfer, we proposed a transfer method relying on credit assignment. In our opinion, comparing its credit assignment capabilities to other existing methods is outside the scope of the paper. \\n\\n2. We included the results of varying the window size in the new Appendix B.1. Briefly, with bigger windows, there is less partial observability, and the attention no longer matches the trigger. Please see the new appendix for more details.\\n\\n3. Relational Deep RL ([1]) uses spatial self-attention to infer and leverage relations between \\\"objects\\\" (pixel representations). Crucially, it does not make use of the sequential aspect of the RL task. Instead, SECRET relies on temporal credit assignment, which could be presented as a form of temporal relations (as dictated by the reward function). Those are very different approaches to handling relations (if SECRET can be deemed relational). We think it would indeed be an interesting research direction to combine both spatial and temporal aspects for credit assignment or relational reasoning.\\n\\n4. There are two different aspects here: 1) the reward model could be trained on very few trajectories in the source domain, or 2) it could be applied on very few trajectories to build the potential function in the target domain.\\nFor 1), in practice, we only redistribute the nonzero rewards that were successfully predicted by the reward model, so insufficient prediction capabilities are not a problem. We added a sentence in the main text to mention the fact that we consider correctly predicted nonzero rewards. If the model does not manage to predict nonzero rewards, then SECRET falls back to the Vanilla RL case. In the worst-case scenario, SECRET could predict a small proportion of the nonzero rewards and assign wrong credit, which could lead to a slowed-down learning procedure.\\nFor 2), the potential function used in SECRET relies on trajectories with nonzero rewards. 
In the worst-case scenario, the potential function might not accurately reflect the structure of the MDP and could lead to a slowed-down learning procedure. \\nWe now include two additional experiments in Appendix B.3 that explore both scenarios. We show that with a small number of trajectories, either in the source or the target domain, the performance of the agent does not drop too much. \\n\\n5. The samples generated in the target domain are not included in the number of episodes reported in the paper. While debatable, our motivation for doing so is that we use the same fixed policy we used in the source domain to generate those trajectories. Note that there is no learning procedure involved during the collection of the target samples. \\n\\n6. Maybe a follow-up to consider for the coffee test is to adapt from using a coffee-brewing machine to making it from scratch :)\\n\\n[1] Zambaldi V., Raposo D., Santoro A., Bapst V., Li Y., Babuschkin I., Tuyls K., Reichert D., Lillicrap T., Lockhart E., Shanahan M., Langston V., Pascanu R., Botvinick M., Vinyals O., Battaglia P. - Deep Reinforcement Learning with Relational Inductive Biases. ICLR 2019.\"}", "{\"title\": \"Re: Official Blind Review #4 (2/2)\", \"comment\": \"5. (i) A trajectory is a sequence of observation-action pairs that covers a whole episode of interaction. In the context of our paper, a sub-trajectory is a portion of a trajectory that starts at the beginning of the episode, and it is now defined as such in the main text. The reward prediction task considers all sub-trajectories. Another way to look at it is that for each trajectory in the dataset, we try to reconstruct whole sequences of rewards from the observation-action pairs.\\n\\n(ii) The lengths of trajectories are not fixed. However, in our experiments, they are upper bounded due to time limits in Triggers and DMLab (in Triggers, the time limit depends on the size of the grid and is chosen so that it is sufficient to observe rewards given the played policies, while in DMLab it is fixed to 900).\\nWe added information about the time limits used in Appendix A.\\n\\n6. Attention weights sum to 1 in our experiments. The distribution displayed in Fig.3 mistakenly reports normalized logits instead of post-softmax weights, which explains why the values do not sum to 1. We updated the figures in the paper so that the post-softmax values are reported instead of normalized logits.\\nNote that the updated average attention weights sum to 1 and also that the attention remains peaked around interest points in the updated distributions. The improvement over the previous figure lies in the application of the softmax and the use of class weights in the loss.\\n\\n7. Since Transformers create representations via pooling, they need additional information in order to take into account the relative positions of sequence elements. Positional encoding is a way to incorporate that, and it consists of adding sine and cosine signals of varying frequencies to the input embeddings of the Transformer. Positional encoding is a building block of Transformer models that is well described in [2].\\nWe added a description of the effect of positional encoding in the main text and describe how it is done in practice in Appendix A.\\n\\nMinor comments\\n\\n1- We now use M_c as the notation for the causal mask.\\n2- We now state that d_i refers to the dimensionality of inputs in the main text.\\n3- The demand in the number of trajectories for Triggers is high due to the inefficiency of the random policy. 
For instance, in an 8x8 Triggers environment with 3 triggers and 1 prize, only 1.6% of the trajectories feature a positive reward. Using suboptimal trajectories (from an imperfect learner, or backup trajectories from past experimentation) could alleviate this demand.\\nStill, we agree that further experimentation would be needed to establish whether the proposed method is suitable for robotics. \\n\\n[1] Arjona-Medina J., Gillhofer M., Widrich M., Unterthiner T., Brandstetter J., Hochreiter S. - RUDDER: Return Decomposition for Delayed Rewards. NeurIPS 2019.\\n[2] Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A., Kaiser L., and Polosukhin I. - Attention is all you Need. NeurIPS 2017.\"}", "{\"title\": \"Re: Official Blind Review #4 (1/2)\", \"comment\": \"1. (i) We do not currently have a procedure to determine the amount of information to be hidden from states. We acknowledge that we currently need to design the transformation in a task-specific manner, which can be natural (e.g., for a robotic task, it could be removing the information about velocity and acceleration) or not. In Triggers, the transformation we consider (cropping the frame around the agent) is natural and could work more generally for navigation tasks.\\nWe added an experiment where we study the effect of varying the window size used in the transformation applied to Triggers states in the new Appendix B.1. Results show that it is an important parameter for SECRET to assign sensible credit, but also that SECRET speeds up the learning procedure even with little or no partial observability. \\nFinally, transformations could indeed lead to state-aliasing (in the sense that they could have the same output for different inputs), but they will not affect the agent directly since the agent uses the full state as in standard RL methods, if available. Hence, we believe state-aliasing to be a threat only if states that are crucial for future rewards have outputs identical to those of other unrelated states. \\n\\n(ii) We think the point of the reviewer is fair and that partial observability alone might not be sufficient. This is supported by the poor credit assignment quality when using a 7x7 window transformation in Triggers (see the new Appendix B.1).\\nWith SECRET, valid credit assignment relies on isolating the information necessary to predict that there is reward to be experienced from the corresponding inputs. In our experiments, we find that when using the transformation, architecture, and loss we propose for SECRET, there is a high correlation between the reward prediction quality (as measured by weighted accuracy) and the quality of the credit assigned (measured by the precision and recall calculated as in Sec.3.1). We only have empirical evidence of this for now, and no general theoretical result.\\nWe modified the sentence the reviewer mentioned to reflect these points.\\n\\n(iii) The preprocessing we do is rather natural for the considered problem, and maybe more generally for a navigation task, but we acknowledge that it is not general. We can think of some options that might alleviate the need for manual preprocessing:\\nWe could use a reconstructed version of the input. 
For instance, we could apply noisy autoencoding on the state representation of the RL agent and use the result as input to the reward prediction model.\\nIn [1], they use auxiliary losses to have their model attend to past sequence elements despite having access to the current state.\\nWe could hide a subset of sequence elements so that the model has an incentive to diversify its focus. The subset could be reduced to the current sequence element or be determined randomly. \\nWe leave the exploration of automatic preprocessing strategies for future work; other approaches could be envisioned. \\n\\n2. In our DMLab experiment, we indeed use the position and the current key possessed by the agent to infer an imperfect state that we use to create the potential function.\", \"the_state_is_created_as_such\": \"it is the concatenation of the discretized position and the identifier of the key possessed. The discretized position is the result of the Euclidean division by a cell-size integer c_s and is necessary since the DMLab position is continuous. c_s is fixed and has the value of 50 in our experiments.\\nWe modified the paper so that the fact that we manually construct the state is clear from the main text, and so that the method used to create the state is clear from the Appendix.\\n\\n3. In Triggers (and more generally in MDPs), the observation type of vanilla RL in both in-domain and out-of-domain experiments is the full state. The observation built from the state is only used as input to the sequence model tasked with reward prediction. We did not consider using partial observations as input to RL agents since we think it would make the task artificially harder and would potentially require a recurrent architecture for the agents.\\n\\nIn DMLab, the observation type in all tasks and for both agents and the sequence model is the partial observation.\\n\\n4. We added attention distributions in out-of-domain scenarios in the new Appendix B.4.\\nNote that they look nearly identical to the updated attention distribution in the in-domain case, which is backed by near-perfect precision and recall metrics.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes to consider the problem of transfer in the context of sequential decision-making -- in particular reinforcement learning -- from the viewpoint of learning a transferable credit assignment capability. They hypothesize that by learning how to assign credit, structural invariants can be learned which the agent can exploit to assign credit effectively and thus learn more efficiently in new environments (be it in-domain or out-of-domain). They pose the credit assignment problem as learning to predict (sparse) rewards at the end of sub-trajectories, finding the extent to which past state-action pairs appear to be responsible for these rewards (by means of the reward-prediction training), and creating a dense reward function via reward shaping (such that the set of optimal policies does not change). This is appealing as no modifications are needed to the RL algorithm/architecture. 
To examine their hypothesis, they created a method, called SECRET, based on self-attention and supervised learning to train credit assignment capability offline: sample many trajectories (often a mixture of expert and non-expert ones) from the source distribution, train a self-attentive seq2seq model to predict the rewards in these trajectories offline. Once this model is trained, they apply this model to a relatively small set of sampled trajectories from the target distribution and obtain the attention weights. Then, they use these attention weights as a proxy for credit assignment and, thus, use them to form a reward redistribution function. In their experiments, they show that the average attention weights actually signal the state-actions at which the future reward is triggered. They also show in their experiments that SECRET improves learning performance on in-domain transfer learning (larger mazes), as well as an out-of-domain case (with modified dynamics).\\n\\nOverall, this paper proposes an interesting general avenue for research in transfer learning in RL. Regarding the proof-of-concept method and experiments, I need some clarifications. Given these clarifications in the authors' response, I would be willing to increase my score.\\n\\n1. Regarding this statement on breaking Markov property: \\\"hide a certain amount of information from states and break the Markov assumption\\\". \\n(i) It is unclear to me what this \\\"certain amount\\\" would need to be in general. I believe this would require domain-specific knowledge to know what can be removed to break Markov-ness while not introducing significant state-aliasing (which could hinder the agent's learning). \\n(ii) Does any extent of partial-observability warrant that the success in reward-prediction would mean that we have a valid credit assignment model? I feel like this is not generally true, in which case I question the statement on p.3: \\\"Note that in POMDPs, this is unnecessary since the observations agents get from the environment are incomplete by construction.\\\" \\n(iii) Regarding generality, the fact that states need to be (manually) preprocessed seems to me like a downside of this approach. Can you see any way around this? \\n\\n2. In p.4, this is mentioned: \\\"In POMDPs, we recover an approximate state from the observation...\\\".\\nI do not see how this is done in the DMLab experiments. If this is done manually, and not trained, then I think it should be clearly stated in the main text. I think the 2nd paragraph of Sec. A.4 is stating that extra information about the state was utilized, and not approximated via a trained model to recover the states (i.e., no s^=h^-1(o) was used)? \\n\\n3. What is the observation type of Vanilla RL in the out-of-domain experiments? Is it also observing its local-view (similar partial observability as SECRET) or does it have access to the full state? I would argue that it is important that the performance of Vanilla RL with partial observation is reported. Including both cases could also be beneficial. \\n\\n4. Fig.3 shows attention weights on held-out environments from an identical distribution as the source (i.e., in-domain). \\nI would like to see how well the attention signal works when the target distribution differs from the source. Is there a reason why this is not demonstrated? \\n\\n5. 
Not sure about the specific definitions of sub-trajectory and trajectory in the paper: \\n(i) What constitutes a sub-trajectory (as opposed to a trajectory) in the context of this paper?\\n(ii) Are the lengths of the sub-trajectories or trajectories fixed?\\n\\n6. Why do the attention weights not sum to 1 in Fig.3?\\n\\n7. Could you clarify the role of positional encoding and how it is done?\", \"minor_comments\": \"1. M is used to denote both the MDP and the causal map. \\n2. Explicitly defining d_i in p.3 should improve clarity. \\n3. Using 40k and 10k trajectories of interactions to train the credit-assignment model (on the Triggers and DMLab domains, respectively) seems quite demanding, which seems somewhat unrealistic to deem useful for application to robotics perhaps?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel transfer learning mechanism through credit assignment, in which an offline supervised reward prediction model is learned from previously generated trajectories and is used to reshape the reward of the target task. The paper introduces an interesting new direction in transfer learning for reinforcement learning that is robust to differences in the environment dynamics.\\n\\nI have the following questions/concerns.\\n\\n1. The authors insist that their focus is on transfer and not on competing on credit assignment. If accurate credit assignment leads to better transfer, shouldn't achieving the best credit assignment model (thus competing in credit assignment) lead to better transfer results?\\n\\n2. What effect does the window size for transforming states to observations have on the performance of SECRET?\\n\\n3. On a high level, how does SECRET compare to transfer through relational deep reinforcement learning: https://arxiv.org/abs/1806.01830? Relational models use self-attention mechanisms to extract and exploit relations between entities in the scenes for better generalization and transfer. Although SECRET intentionally avoids using relations, I think a discussion around relational models for RL is warranted. I'm curious what happens if SECRET is allowed to exploit relations in the environment.\\n\\n4. What happens if the reward model uses very few trajectories and is not able to predict good rewards? Does transfer through credit assignment become detrimental? In other words, in a real-world scenario, how do I know when to start using SECRET, or when am I better off learning from environment rewards alone? Especially given that SECRET requires 40000 trajectories in the source domain.\\n\\n5. Are the samples generated in the target domain for collecting attention weights included in the number of episodes when evaluating SECRET? For example, in Figure 4, I believe the number of episodes required to collect those target samples should be added to the number of episodes when using SECRET, since the agent must interact with the environment in the target domain.\\n\\n6. On a lighter note, I don't believe using a coffee-brewing machine has a 'universally invariant structure' of coffee-making. 
That's a luxurious way of making coffee :) In the developing world, we still need to boil water, pour coffee powder in it, etc., all without a coffee-brewing machine.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work focuses on credit assignment using a self-attention module for transfer RL problems. Specifically, the reward signals are assigned backward to previous states according to the attention weights. This can be helpful especially when the reward signal is sparse. Experiments on the newly proposed Triggers environment and the DMLab keys & doors environment show that the proposed algorithm, SECRET, can speed up training in the transferred environment.\", \"pros\": [\"The writing is mostly great.\"], \"cons\": \"- Some design choices are not well motivated or are even problematic.\\n- Experiments are not sufficient.\\n\\n(1) On page 3, the definition of Z is problematic. The mask matrix M is applied *before* the softmax transformation, which means future values can have non-zero attention. This is because softmax will never produce zero probability. It would still be problematic even if M were applied after the softmax transformation, because in this case the attention for past elements could become very small and would almost surely not sum to 1 (except for the last element). Therefore, regardless of the position of M, the attention will be questionable.\\n\\n(2) Page 4: the weight w(c) is not defined for the weighted cross-entropy. It is claimed that such weighting is essential, but no evidence is provided to support this.\\n\\n(3) The proposed potential function is not very well motivated. It is not clear why it should be defined like this instead of other alternatives. Moreover, for never-visited states, the potential is set to 0, which seems to prevent exploration. This would potentially harm the performance in a new environment, especially when the training trajectories are far from optimal.\\n\\n(4) Sec.2.3 says \\\"given an underlying environment state and a specific action, their contribution to future rewards is not fundamentally altered.\\\" Can you elaborate? Also, what is \\\"the rank of the rewards\\\"?\\n\\n(5) Experiment:\\n(5.1) Why do Fig.5 and 6 not have similar asymptotic returns? Given that they both correspond to 1 trigger and 1 (2) prize(s), the asymptotic returns should be close.\\n(5.2) As mentioned above, it would be interesting to see whether SECRET will prevent exploration if the behavior agent is (heavily) biased. The random agent in the Triggers environment provides sufficient support for the whole state space, while the PPO agent in DMLab is well focused on the \\\"good\\\" regions. If a \\\"bad\\\" agent (say, one that exploits some low-reward regions) is used, SECRET may slow down instead of speed up training in the transfer environment. This is an important scenario for seeing whether SECRET can potentially create negative transfer.\\n(5.3) No other method from the literature is used for comparison. Several alternatives are discussed in the Related Work \\\"credit assignment\\\" section, but none is compared in the experiment.\\n\\nMinor comments\\n- In other fields than RL -> in fields other than RL.\\n- The caption of Fig.3-left: there is no \\\"key\\\" in the Triggers environment. 
It uses switches.\\n- WT is not explained in Fig.6. \\n- h(s) is defined as the observation given a state s, but it is not used in the later discussion.\"}", "{\"comment\": \"Thanks for the comment.\\n\\nAttention weights indeed sum to 1. The distribution displayed in Figure 3 mistakenly reports normalized logits instead of post-softmax weights, which explains why the values do not sum to 1.\\n\\nWe will rectify this in the final draft.\", \"title\": \"Re: a question related to the attention weights in Figure 3\"}", "{\"comment\": \"I really enjoyed reading this paper, especially the novel idea of applying credit assignment in the transfer learning domain.\\n\\nI have a question regarding the attention weights reported in Figure 3. \\nAs I understand it, the attention weights are generated from a vector of softmax attention and therefore should sum up to 1.\\nIt seems like the sums of the average attention weights in Figure 3 (both left and right) are far above 1.\\n\\nI would appreciate it if you could let me know whether I missed or misunderstood some experimental settings.\\nThank you!\", \"title\": \"A question related to the attention weights in Figure 3\"}" ] }
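Two mechanisms debated at length in this record are easy to pin down in code. First, the masking fix the authors describe in their reply to Review #3: the key-query scores are multiplied elementwise by a lower-triangular mask, and future positions are shifted by a large negative constant before the softmax, so past attention weights still sum to 1. This NumPy sketch follows that description; the shapes and the -1e9 constant are illustrative, not the paper's exact values.

```python
import numpy as np

def causal_attention_weights(scores, neg_const=-1e9):
    """Mask a (T, T) score matrix so each step attends only to the past."""
    T = scores.shape[0]
    mask = np.tril(np.ones((T, T)))                    # 1 on and below the diagonal
    masked = scores * mask + (1.0 - mask) * neg_const  # elementwise mask, big negative shift on the future
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

w = causal_attention_weights(np.random.randn(4, 4))
assert np.allclose(np.triu(w, k=1), 0.0)   # no attention on future elements
assert np.allclose(w.sum(axis=-1), 1.0)    # each row is a proper distribution
```

Second, the potential-based reward shaping discussed by both the rebuttal and Review #3. The sketch below uses the standard form r' = r + gamma * phi(s') - phi(s) (Ng et al., 1999), which leaves the set of optimal policies unchanged. In SECRET the potential is built from attention over trajectories with nonzero rewards; here phi is just any user-supplied callable, with potential 0 on never-visited states as the review notes, and the toy values are assumptions.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, done=False):
    """Potential-based shaping: add gamma * phi(s') - phi(s) to the environment reward."""
    return r + (0.0 if done else gamma * phi(s_next)) - phi(s)

potential = {(0, 0): 0.0, (0, 1): 0.3, (1, 1): 1.0}   # toy potentials over grid positions
phi = lambda s: potential.get(s, 0.0)                 # never-visited states get potential 0
print(shaped_reward(0.0, (0, 0), (0, 1), phi))        # small positive bonus for moving toward the goal
```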
HJlk-eHFwH
AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion
[ "Maitreya Patel", "Mirali Purohit", "Mihir Parmar", "Nirmesh J. Shah", "Hemant A. Patil" ]
Voice Conversion (VC) is the task of converting the perceived speaker identity from a source speaker to a particular target speaker. Earlier approaches in the literature primarily find a mapping between given source-target speaker pairs. Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning, remains a less explored area in VC. Most many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices. In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in training (i.e., the case of zero-shot learning). In particular, we propose the Adaptive Generative Adversarial Network (AdaGAN), whose new architecture and training procedure help in learning a normalized speaker-independent latent representation, which is used to generate speech with different speaking styles in the context of VC. We compare our results with the state-of-the-art StarGAN-VC architecture. In particular, AdaGAN achieves 31.73% and 10.37% relative improvement over StarGAN in MOS tests for speech quality and speaker similarity, respectively. The key strength of the proposed architecture is that it yields these results with much lower computational complexity. AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating point OPerations per Second (FLOPS), and 85.46% less complex in terms of trainable parameters.
[ "Voice Conversion", "Deep Learning", "Non parallel", "GAN", "AdaGAN", "AdaIN" ]
Reject
https://openreview.net/pdf?id=HJlk-eHFwH
https://openreview.net/forum?id=HJlk-eHFwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "orGK3FEZfF", "BylOnuWiir", "rkehj4Zosr", "HklBlV-ssB", "r1xjn_YV5r", "BkgNckrRtH", "SJxDQYG0FS", "SkgCMSDftB", "r1lkU8ByuH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798741063, 1573750960352, 1573749923906, 1573749740986, 1572276402743, 1571864460218, 1571854623244, 1571087637798, 1569834567405 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2120/Authors" ], [ "ICLR.cc/2020/Conference/Paper2120/Authors" ], [ "ICLR.cc/2020/Conference/Paper2120/Authors" ], [ "ICLR.cc/2020/Conference/Paper2120/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2120/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2120/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2120/Authors" ], [ "~Ju-Chieh_Chou1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper has major presentation issues. The rebuttal clarified some technical ones, but it is clear that the authors need to improve the reading substantially, ,so the paper is not acceptable in its current form.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We sincerely thank respected Reviewer 2 for these valuable comments. These comments were highly useful to improve the updated manuscript. We sincerely believe that the reviewer\\u2019s concerns addressed additional clarifications in the updated version of the paper. We hope the reviewer will agree. We provide further details below in this regard.\\n\\nQ 1. On writing There are glaring grammar errors in numerous places. e.g. \\n-- \\\"Although, there are few GAN-based systems that produced state-of-the-art results for non-parallel VC. Among these algorithms, even fewer can be applied for many-to-many VC tasks. At last, there is the only system available for zero-shot VC proposed by Qian et al. (2019).\\\" This is hard to parse.\", \"ans\": \"Yes, we claim experiments to be zero-shot. We do understand that zero-shot means that we do not require any target speakers' data during training procedure. Here, AdaGAN requires 3-5 seconds from target speakers\\u2019 speech during the testing phase only and just as a reference to the target. The carefully designed training procedures allow AdaGAN to generate the latent representation of unknown speakers' speech via some reference utterance of the same speaker. The AutoVC paper which proposed the first attempt to zero-shot VC via encoders and decoders, which requires 20 seconds of target speakers' speech as a reference [1].\\n\\n\\n\\n[1] Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, and Mark Hasegawa-Johnson. Autovc: Zero-shot voice style transfer with only autoencoder loss. In International Conference on Machine Learning (ICML), pp. 5210\\u20135219, 2019.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your valuable suggestions. We do believe that the reviewer\\u2019s concerns are taken care through additional clarifications in the updated version of the paper. We hope the reviewer will agree. We provide further details below.\\n\\nQ 1. There are many typos and wrong notations in the text.\", \"ans\": \"Yes, we do understand. 
However, we believe that by adding mathematical formulas, the paper will be helpful to future readers in implementing and managing data processing for various applications of voice conversion.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We would like to thank respected Reviewer 1 for their reviews and constructive suggestions. We are glad that the reviewer liked our work. Below, we provide clarification for the reviewer\\u2019s queries.\\n\\nQ 1. Section 4.4 Are all of the utterances the same length? Based on the architecture description, it appears as though the model generates one output frame for each input frame. This would suggest that for training, input and output need to be synchronized. If so, make this explicit and include length parameters in Section 6.1\", \"ans\": \"Yes, we strongly agree with the respected reviewer. Thanks. We have made changes to the range of the y-axis from 1 to 5 in the revised manuscript.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work describes an efficient voice conversion system that can operate on non-parallel samples and convert from and to multiple voices. The central element of the methodology is the AdaIN modification. This is an efficient speaker-adaptive technique where features are re-normalized to a particular speaker's domain. The rest of the machinery is well motivated and well executed, but less novel. This addition enables the voice conversion between speakers.\\n\\nSection 4.4 Are all of the utterances the same length? Based on the architecture description, it appears as though the model generates one output frame for each input frame. This would suggest that for training, input and output need to be synchronized. If so, make this explicit and include length parameters in Section 6.1 \\n\\nSection 6.2 states \\\"For statistically significant analysis, results are shown in different conversion possibilities.\\\" However, no test of statistical significance is presented. This pointer may be helpful (https://pdfs.semanticscholar.org/b2b1/d01336323f3794f54de26567335aa0bcac46.pdf)\", \"presentation_comments\": \"Section 3.1: I would recommend using different subscripts for Z_i and U_i, since when indexing Z this implies the i-th speaker, and when indexing U it's the i-th utterance. The formulas in Section 3.1 imply a single index i for both of these, which is clearly not intended.\", \"section_4\": \"Consider using the present tense instead of the perfect tense when describing the results. \\\"...we discuss our proposed AdaGAN architecture... We have shown... We have presented...\\\" can be \\\"...we discuss our proposed AdaGAN architecture... We show... We present...\\\"\\n\\nSection 5.2; Tables 1 and 2: Consider some partitioning of the FLOPS and Parameters figures: separation by commas or spaces, or even abbreviation, e.g. 2952233 -> 2,952,233 or 2 952 233 or 2.9M. This will make the table much easier to read.\\n\\nSection 6.2; Figures 2-5: MOS scores have a minimum value of 1. This should be the axis of the chart, rather than 0. 
\\n\\nIt's pretty bold to star by contextualizing the work with the sentence \\\"Language is the core of civilization and speech is the most powerful and natural form of communication.\\\" :-)\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper tackles many-to-many voice conversion task using GAN for style transfer between different speakers. The core idea is adaptive instance normalization (Huang & Belongie, 2017).\", \"detailed_comments\": [\"There are many typos and wrong notations in the text. Here is an incomplete list:\", \"\\\"it were spoken by target speaker\\\", should be \\\"was\\\".\", \"In Section 3.1, \\\"Here, U_1 and U_2 are spoken by Z_i and Z_2\\\" should Z_1. Overall, the descriptions in this subsection is confusing. For example, it seems utterance U_i is from speaker Z_i in the dataset, but there are n speakers and m utterances.\", \"A closely related task is voice cloning, which is arguably more challenging than voice conversion, because the synthesis need generalize to arbitrary text. One may properly discuss the recent advances in this community (e.g., Arik et al., 2018; Nachmani et al., 2018; Jia et al., 2018).\"], \"pros\": \"The empirical improvement seems meaningful.\", \"cons\": \"This paper is poorly written and difficult to follow. For example, I could not accurately identify the major contribution & novelty after reading the abstract and introduction. As an application paper, the authors may clearly explain the ideas with a few sentences in the most natural way without \\\"heavy notations\\\", e.g., Eq. (5)(6)(7).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a voice conversion approach using GANs based on adaptive instance normalization (AdaIN). The authors give the mathematical formulation of the problem and provide the implementation of the so-called AdaGAN. Experiments are carried out on VCTK and the proposed AdaGAN is compared with StarGAN. The idea is ok and the concept of using AdaIN for efficient voice conversion is also good. But the paper has a lot of issues both technically and grammatically, which makes the paper hard to follow.\\n\\n1. On writing\\nThere are glaring grammar errors in numerous places. e.g.\\n -- \\\"Although, there are few GAN-based systems that produced state-of-the-art results for non-parallel VC. Among these algorithms, even fewer can be applied for many-to-many VC task. At last, there is the only system available for zero-shot VC proposed by Qian et al. 
(2019).\\\" This is hard to parse.\\n -- \\\"helps generator to make ...\\\" -> \\\"helps the generator make ...\\\"\\n -- \\\"let assume\\\" -> \\\"Let's assume\\\" \\n -- \\\"We know that the idea of transitivity as a way to regularize structured data has a long history.\\\" what does it mean?\\n -- \\\"the generator of AdaGAN is consists of Encoder and Decoder.\\\" -> \\\"consist of\\\"\\n -- \\\"After training of AdaGAN for large number of iteration of $\\\\tau$ , where theoretically $\\\\tau \\\\rightarrow \\\\infty$.\\\" where is the second half of the sentence?\\n\\n2. On math notation\\n The math notation is messy and there are lots of inaccuracies. e.g.\\n -- $X_{i} \\\\in p_{X}(\\\\cdot|Z_{i},U_{i})$ should be $X_{i} \\\\sim p_{X}(\\\\cdot|Z_{i},U_{i})$\\n -- \\\"generate the distribution denoted by $\\\\hat{X}_{Z_{1}\\\\rightarrow Z_{2}}$\\\" -> why $\\\\hat{X}_{Z_{1}\\\\rightarrow Z_{2}}$ becomes a distribution? \\n -- \\\"$p_{N}(\\\\cdot|Z_{1},U_{1})$, $p_{N}(\\\\cdot|Z_{2},U_{1})$\\\" in Eq.14, $N$ should be replaced by the random variable.\\n -- $S'_{X}$ and $S'_{Y}$ should be $S_{X'}$ and $S_{Y'}$ in line 15 in the algorithm\\n\\n3. On technical details:\\n -- In Fig.1 (b), why is there only one input to the discriminator? How do you inject the adversarial samples and how do you generate adversarial samples? \\n-- In section 4.4, \\\"in encoder and decoder all layers are Linear layers\\\". Are you referring to fully-connected layers? Linear layers are usually referred to those with linear activation functions. \\n-- The experiments are claimed to be zero-shot, but 3-5s of speech is required. can you explain? \\n\\nAlthough the samples sound OK, given its current form, the paper needs significant re-work. \\n\\nP.S. rebuttal read. I will stay with my score.\"}", "{\"comment\": \"Hello,\\nAgain, thank you for reading our research work and drawing our attention to your nice piece of work. \\nWe will definitely look into this.\", \"title\": \"Thank you for sharing your work!\"}", "{\"comment\": \"Hi,\\nThank you for interesting work.\", \"i_am_the_author_of_this_paper\": \"https://arxiv.org/abs/1904.05742\\nI found that we adopted similar idea (adaIN) to the task of VC. \\nI believe that including my work in your paper can make your work more thorough.\", \"title\": \"About related work\"}" ] }
SJxAlgrYDr
City Metro Network Expansion with Reinforcement Learning
[ "Yu Wei", "Minjia Mao", "Xi Zhao", "Jianhua Zou" ]
This paper presents a method to solve the city metro network expansion problem using reinforcement learning (RL). In this method, we formulate the metro expansion as a process of sequential station selection, and design feasibility rules based on the selected station sequence to ensure reasonable connection patterns of the metro line. Following this formulation, we train an actor-critic model to design the next metro line. The actor is a seq2seq network with an attention mechanism to generate the parameterized policy, which is the probability distribution over feasible stations. The critic is used to estimate the expected reward, which is determined by the output station sequences generated by the actor during training, in order to reduce the training variance. The learning procedure only requires the reward calculation, thus our general method can be extended to multi-factor cases easily. Considering origin-destination (OD) trips and social equity, we expand the current metro network in Xi'an, China, based on the real mobility information of 24,770,715 mobile phone users in the whole city. The results demonstrate the effectiveness of our method.
[ "reinforcement", "actor", "reinforcement learning", "metro expansion", "process", "sequential station selection", "design feasibility rules" ]
Reject
https://openreview.net/pdf?id=SJxAlgrYDr
https://openreview.net/forum?id=SJxAlgrYDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "3x4CbROGp", "rJl7sAFy5H", "Hyxa9ARAKr", "rJgavbZ2tr" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798741034, 1571950234663, 1571905172543, 1571717476875 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2119/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2119/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2119/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper explores the use of RL (actor-critic) for planning the expansion of a metro subway network in a City. The reviewers felt that novelty was limited and there was not enough motivation on what is special about this application, and what lessons can be learned from this exercise.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper the authors train a seq2seq model through reinforcement learning to iteratively expand a city metro network. The authors show that different objectives can be satisfied with this approach, such as the accessibility to different areas (something the authors call social equity indicator) or maximising origin-destination trips.\\n\\nThe paper is interesting but could use a more extensive comparison to alternative approaches or ablated version of the same approach. For example, what if the approach would only take into account the last metro station instead of the complete previous sequence? Would it work less well? Additionally, the baseline method the approach is compared against is not explained in enough detail. In addition to RL methods, method such as genetic algorithm have shown great promise in producing layouts, such as for wind turbines (e.g. Grady et al. \\u201cPlacement of wind turbines using genetic algorithms\\u201d). I wonder if such an approach would work equally well for designing metro lines and if RL is really the best technique here (which it might be but I\\u2019m not convinced yet). Because of the mentioned shortcomings, I believe the paper should be improved before publication. \\n\\nAdditionally, the paper would benefit from a careful spell and grammar check. I found multiple typos, especially in the introduction.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Review of \\u201cCity Metro Network Expansion with Reinforcement Learning\\u201d\\n\\nIn this work, they investigate the use of RL (actor-critic) for planning the expansion of a metro subway network in a City. They formulate the problem of expanding the metro system as sequence of actions, hence a seq2seq+attn type architecture is used for the actor. The reward calculation is based on a combination of origin-to-destination and social equity metrics. 
They apply their method to plan the expansion of the current metro lines in a large city in China where they can leverage the mobility information gathered from ~ 25M mobile phones in the city.\\n\\nI think this work has great potential, as they identify a data-driven approach that can have a high impact (i.e. design subway lines affecting 25M+ people in a real city). That being said, there are some issues that I feel need to be addressed before the work can be published at a high-quality conference, so I want to help the authors improve the work by highlighting important points that will make the work better:\\n\\nThe related work section on metro network design is only a paragraph. I think both the related work and experiments sections are missing many substantial contributions, as there is a vast literature of work from the Operations Research area about solving such problems through constraint optimization. Even a simple Google Scholar search brings many examples [1]. This is before discussing existing ML and Genetic Algorithm approaches [2], or ones with Monte Carlo Tree Search [3].\\n\\nWithout discussing existing work and offering detailed comparisons and experiments, this paper essentially just shows that RL can be applied to such problems, but the reader wouldn't know whether it is the best tool, or simply if RL is the hammer that is used to treat every problem like a nail. The only baseline the paper compared against is another paper published in 2019, which IMO is not satisfactory.\\n\\nOn a related note, there are also a few projects doing similar network optimizations with slime mold (Physarum) and different variations using it for shortest path finding in mazes and all sorts of interesting problems [4].\\n\\nI'm reminded of a nice work called \\u201cNeural Combinatorial Optimization with Reinforcement Learning\\u201d [5] that proposed the use of neural nets to solve the TSP problem, but ultimately needed to put in the work to compare with traditional approaches. I'm including the reference here so the authors can learn from that paper's experiences to help improve the work.\", \"regarding_the_dataset\": \"One of the most impressive points is that the work utilized a giant dataset of ~ 25M mobile phones. For an important dataset like this that is central to an impactful application of ML, it would be nice to have a discussion (even in the Appendix) to describe how the data is collected, and what regulations / user privacy issues the research team might have to overcome, as these types of issues are becoming very important to the wider research community. I would also like to see a discussion about whether the large amount of data points can be reduced to a simpler 2D density map while achieving similar performance. Would there be any plans to release an anonymized version of the dataset for demonstration purposes?\\n\\n[1] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=transportation+network+operations+research+constraint+optimization&btnG=\\n\\n[2] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=transportation+network+design+machine+learning&btnG=\\n\\n[3] i.e. 
Link Prediction with Monte Carlo Tree Search (https://paperswithcode.com/paper/m-walk-learning-to-walk-over-graphs-using)\\n\\n[4] https://www.researchgate.net/publication/324791496_Physarum-Inspired_Solutions_to_Network_Optimization_Problems\\n\\n[5] https://openreview.net/forum?id=rJY3vK9eg\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a method for solving the problem of network expansion; in particular, it considers the city metro network and its expansion with a new metro line. Since this problem was previously represented as a non-linear integer problem, having an exponential number of constraints or requiring expert knowledge, the authors had an initial motivation to appeal to learnable algorithms from the reinforcement learning paradigm. The authors propose to consider the problem as a Markov Decision Process, so that the environment is represented as the city grid with the metro network constructed before some timestamp, conditioned by additional transportation constraints, while each new state in the process of expanding one separate metro line is the state of the metro network as a graph with a new station added to the line, the addition being the agent\\u2019s action. The custom reward function is based on metrics like the OD trips used in traditional approaches and, used here for the first time for this problem, a problem-specific social equity score. The permissibility constraints on actions, i.e. adding a new station, are incorporated into the policy function, while the attention mechanism is used here, as in other seq2seq models, to query the most suitable station among candidates. The authors use the actor-critic approach, widely exploited in reinforcement learning problems, and with all the above-mentioned modifications reached the baseline results on real-world data obtained from the Xi\\u2019an city metro network, Shaanxi province, China.\\n\\nMain arguments \\n\\nApart from the interesting approach, this paper should be rejected for several reasons: (1) the specific application field considered in this paper has to be generalized to more general cases in order to be valuable for the Machine Learning community, (2) from the RL algorithms perspective, the novelty of the proposed method is questionable due to the lack of a literature review and of an advanced approach to deep reinforcement learning, (3) the requirement of explainable AI is essential for deployment and, in spite of the quite sufficient explanations of the algorithm\\u2019s workings, this paper does not well justify its superiority over traditional methods either by theory or practice, due to experiments suffering from a lack of variability and missing the main point of improvement, which leads to a generally insufficient contribution, and (4) the paper is unpolished and lacks specific formulations from the existing literature on related subjects.\\n \\nIn detail\\n\\nThe paper justifies neither the novelty of its method nor the guarantees of increased performance or efficiency compared to existing methods. Although the proposed method addresses a specific problem and there could be a lack of existing sufficiently good methods on this specific topic, the reinforcement learning paradigm is utilized by the authors in a very superficial and general manner. 
\\n\\nFirstly, the paper excludes any comparison to similar problems and already existing RL methods used in similar fields, which can be expressed as planning with constraints (for example, path planning in mobile robot navigation); such a comparison would be more valuable to the ML community than a specific application of the graph expansion problem limited to line-by-line addition. \\n\\nSecondly, except for a general notice of combinatorial problems solved by RL algorithms, the literature review lacks any mention of graph representation methods or seq2seq algorithms (which use attention efficiently), which should serve as an initial guess for a well-performing model in this particular problem. Regarding the RL approach, there are plenty of already solved planning problems and papers comparing RL algorithms and their performance on grid-based, discrete environment problems worth mentioning here and, ideally, comparing to the proposed one within the same specific problem formulation. For this reason, there is no justification of the concrete architecture construction and of the choice of an actor-critic model for learning this policy in this specific example, and so there is no intuition as to why this model will work better than existing methods. Besides, this leads to the doubtful novelty of the proposed algorithm: although used for the first time in this specific field, it is composed entirely of the same components, and sometimes in the same combination, as in other works related to RL methods in planning.\\n\\nMoreover, the explanation of the method\\u2019s architecture and of the basic components\\u2019 influence on the system is insufficient, though it might serve as an initial explanation of how this algorithm works. However, the number of experiments and their variations leaves much to be desired to justify its practical advantages and its utilization purposes as explainable AI. (1) There is no ablation study conducted to justify the choice of the proposed model over existing reinforcement learning methods, like actor-critic methods and their several variations with a baseline, including modifications like the attention model, graph representation, and encoder and decoder architecture. Mentioning these methods in the literature review would be enough to answer this issue. (2) Using only one, hardly tuned baseline method on one dataset (without any reference to existing benchmark results on it) does not prove the efficiency and good performance of the model. Additional experiments could be run on different environment structures (city grids, metro networks) or complexities of the environment (the width of the cell in the grid) to measure the performance of the algorithm based on hyperparameters. (3) Real-world data brings more application-based matter into the subject, but requires a more thorough investigation of the optimality of the solutions proposed by the RL method. This means that the comparison with the ground-truth metro network expansion (built after October 15) cannot be used as it is without expert evaluation. As a solution, and as is customary in the machine learning literature in the absence of expert knowledge of the dataset, the authors may propose a comparison of the real-world solution with the algorithm\\u2019s results, based on the optimized metrics (OD trips score and social equity) included in the reward function. \\n \\nFinally, the proposed paper is imprecise in several ways. 
In terms of the RL formulation and usage of RL terms (commonly introduced in the literature), it reveals inappropriate usage of MDP terms and an insufficient description of the policy-gradient approach in the actor-critic training algorithm. By not addressing this issue, the authors missed the opportunity to formulate this problem as a discrete MDP with a high-dimensional action space and sparse rewards, and thus had less opportunity to research different model-based reinforcement learning problems in discrete environments to conduct better model selection and experiments. Besides, the paper lacks details in some parts, like the usage of the 3G cellular mobile data, which was not described in this work, although it is emphasized as an essential part if not a significant contribution of this research. \\n\\nQuestions to answer\\nIntroduction\\n\\tWhat is the reason for constructing the city metro networks line by line, rather than iteratively adding stations to the existing network on the grid? This would make your problem closer to general grid-based planning problems. Do traditional methods expand metro networks only line-by-line, and could this be a limitation?\\n\\tHow is a \\u201cgood solution\\u201d represented in the literature with traditional approaches? What are the general measures and constraints in practice?\\n\\tIs incorporating the social equity concern your contribution? If so, why is the maximization of OD trips not enough (there is no mention of the preferability of the social equity metric based on the results of experiments)?\\n\\tThe effectiveness of the method based on the experimental results is still questionable.\\nRelated work\\n\\tIf the main concern of the paper is still the limitation of other methods which use expert knowledge, then it is better to state the usage of additional data (development index) in the social equity metric as a reward-design part in the Appendix to justify this reward engineering\\n\\t\\u201cWe believe that RL has the ability to solve the metro expansion problem\\u201d is a statement which should be supported by an extensive literature review on RL methods used for planning with constraints or, specifically, graph-based expansion methods. \\nProblem formulation\\n\\tIs the network graph undirected or directed (imprecise inclusion of \\u201cdirect links\\u201d)? How are the OD trips measured (no information here or in Appendix A)?\\n\\tWhat is the reason behind the 4 constraints provided in the problem formulation? Are they used or defined in the literature review papers?\\n \\nMethod\\n\\tRL and deep learning methods included in the literature review should be the reference for choosing one or another component of the algorithm, including the formula for the policy and critic update.\\n\\tIt is essential to use notation based on the RL formulation, for example, use r(s_t,a_t) = 0 if s_t is not a terminal state and r(s_t,a_t) = w(Z|G) otherwise, in order to state the sparsity of the problem. s_t here is Z_t, the sequence of chosen stations during the episode (line expansion process), and so on.\\n\\tUse precise formulations; for example, the proposed P(Z|G) probability distribution is the trajectory distribution and not the policy function, which could be denoted as \\u03c0(z_t | S,G,M(Z_t)) \\n\\tHow is the choice of filter design justified, based on similar works in the transportation field?\\n\\tWhat is the reason for choosing the 1-dimensional CNN layer to represent candidate stations, and what is the concrete input information? 
There is no formal specification of the input data regarding the existing graph structure, only a short mention of it.\\n\\tIs the attention mechanism architecture used here similar to other models used in seq2seq problems? (Answer: yes.) What is the motivation to use this concrete architecture? \\n\\tWhy is the permissibility mapping using the filter applied in the policy function? Are there reasons to use it in the policy function rather than including it in the reward design? For example, the commonly used approach is to penalize unfeasible actions (stations) with a very low reward during episode learning.\", \"training\": \"How does the sparsity of the reward influence the performance? Is there a way to better design the reward, so that there will not be a necessity to update the actor and critic only after the end of the episode (termination step)?\\n\\tWhat is the reason for learning the critic as V(Z) = w(Z|G)? Is there a need for a baseline if the state and the sequence of actions have a bijective mapping, meaning that one sequence of stations generates a unique state of the environment \\u2013 the line expansion \\u2013 and vice versa? The intuition of the baseline is to measure the value of the state as the average over all possible actions which lead to this state.\\n\\tWhat are the batches B used in the actor-critic training procedure?\\n\\tHow do we generalize the environment training? Do we need to retrain the reinforcement learning algorithm for each different initial metro city network configuration, or is it generalizable to other grids using the same weights?\\nExperiments\\n\\tPlease include the training time of the RL algorithm and its inference time, and compare them to those of the baseline algorithm; this should be one of the most essential contributions, given the high time-complexity of traditional methods.\\n\\tHow does the choice of corridor width influence the performance of the baseline method? \\n\\tIs there baseline performance on the same city metro network (Xi\\u2019an city metro) mentioned in other literature to directly compare with? This is necessary to fill the gap in the justification of the proposed results\\u2019 optimality (for the baseline case).\\n\\tHow can the partial similarity between the two line expansions produced by the RL method and the 6 real-world lines of the city metro network justify the optimality of the proposed method? Can you provide a measurement based on OD trips and social equity? Can you provide a truly optimal solution based on the grid granularity, initial network graph, and constraints to compare with?\"}" ] }
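Several of the questions above turn on how the feasibility rules are folded into the policy rather than the reward. For concreteness, that pattern can be sketched as follows, assuming the setup described in the abstract (a policy that is a probability distribution over feasible stations, trained with a critic baseline on a sparse terminal reward w(Z|G)). This is a minimal illustration, not the paper's actual implementation, and the function and variable names are hypothetical.

    import torch
    import torch.nn.functional as F

    def masked_station_policy(scores, feasible_mask):
        # scores:        (num_stations,) unnormalized scores from the actor's
        #                attention over candidate stations.
        # feasible_mask: (num_stations,) bool, True where the feasibility rules
        #                allow the station to be appended to the current line.
        # Infeasible stations get probability zero by pushing their logits to
        # -inf before the softmax (assumes at least one feasible station).
        masked = scores.masked_fill(~feasible_mask, float("-inf"))
        return F.softmax(masked, dim=-1)

    scores = torch.tensor([1.2, 0.3, -0.5, 2.0])
    mask = torch.tensor([True, False, True, True])
    probs = masked_station_policy(scores, mask)   # station 1 has probability 0
    next_station = torch.multinomial(probs, 1).item()

    # With a sparse terminal reward w(Z|G) and a critic baseline V(Z), the
    # REINFORCE-style update weights the summed log-probabilities of the chosen
    # stations by the advantage w(Z|G) - V(Z), which is why updates only occur
    # at episode termination.

Masking in the policy guarantees that sampled lines always satisfy the constraints, whereas penalizing infeasible stations through the reward (the alternative raised in the review) only discourages them and wastes samples on invalid lines.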
r1x0lxrFPS
BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations
[ "Hyungjun Kim", "Kyungsu Kim", "Jinseok Kim", "Jae-Joon Kim" ]
Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings. However, BNNs suffer from performance degradation mainly due to the gradient mismatch caused by binarizing activations. Previous works tried to address the gradient mismatch problem by reducing the discrepancy between the activation function used at the forward pass and its differentiable approximation used at the backward pass, which is an indirect measure. In this work, we use the gradient of the smoothed loss function to better estimate the gradient mismatch in quantized neural networks. Analysis using the gradient mismatch estimator indicates that using higher precision for the activation is more effective than modifying the differentiable approximation of the activation function. Based on this observation, we propose a new training scheme for binary activation networks called BinaryDuo in which two binary activations are coupled into a ternary activation during training. Experimental results show that BinaryDuo outperforms state-of-the-art BNNs on various benchmarks with the same amount of parameters and computing cost.
[ "binaryduo", "reducing gradient mismatch", "binary activation network", "bnns", "gradient mismatch", "differentiable approximation", "binary activations binaryduo", "interest thanks", "compute cost reduction" ]
Accept (Poster)
https://openreview.net/pdf?id=r1x0lxrFPS
https://openreview.net/forum?id=r1x0lxrFPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "fYXG9tn1c6", "bKgiICH2iO", "HklEl_vusS", "SJg8uvvusr", "BkeuMLwujS", "ByeFTEDOir", "HygLemDdiS", "HyemulP_sB", "Skga_kvOiS", "H1ldnneeiH", "HJgd6tglcS", "rJgT-SZ0tB", "HJgBJh3KuS" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1578632508035, 1576798741004, 1573578731685, 1573578606291, 1573578256327, 1573577920858, 1573577454495, 1573576810739, 1573576565202, 1573026992071, 1571977664122, 1571849477187, 1570520028834 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ], [ "ICLR.cc/2020/Conference/Paper2118/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2118/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2118/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2118/Authors" ] ], "structured_content_str": [ "{\"title\": \"Updated code link\", \"comment\": \"We moved the source code to a github repository. Please find the code at the following link:\", \"https\": \"//github.com/Hyungjun-K1m/BinaryDuo\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"Three reviewers suggest acceptance. Reviewers were impressed by the thoroughness of the author response. Please take reviewer comments into account in the camera ready. Congratulations!\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you very much for your constructive comments. We could improve the quality of our work significantly by responding to your comments. Below, we summarize your questions and address them in order.\", \"q1\": \"The motivation is clear. The 1-bit activation networks usually deteriorates the performance greatly.\", \"a1\": \"Thank you very much for the positive comment.\", \"q2\": \"The gradient mismatch for discrete variable did bring difficult for optimization. Do you mean 1-bit activation has larger gradient mismatch than other bits, at least in the defined cosine similarity by this paper?\", \"a2\": \"Yes. We think that 1-bit activation has larger gradient mismatch than other bit cases because the cosine similarity between the gradient estimate and the coarse gradient shows a large difference between ternary and 1-bit activation cases while similar cosine similarity values are exhibited for 1-bit activation cases with various STEs (Fig. 3)\", \"q3\": \"As to Eq. 3, Appendix C.1 describes the way to choose step size. I understand the logic, but for the detailed method, is it cross-validation with grid search or some other tricks?\", \"a3\": \"We measured the cosine similarities for various epsilon values (epsilon = 1e-4 to 1e-2) and the results show that overall trend is maintained regardless of the epsilon value although the absolute values change depending on the epsilon value. Detailed experimental results with various epsilon values have been added in Appendix C.1.\\n \\n\\n\\nQ4. 
Is there any relation between the decoupling method in Section 5 and the proposed estimated gradient mismatch in Section 4.2?\", \"a4\": \"The results from gradient mismatch analysis confirm that there is a large accuracy gap between ternary activation neural network and binary activation neural network with any STEs. The results gave us strong justification to apply ternary activation neural network to train binary activation neural network, which is a key idea in the decoupling method used in the proposed BinaryDuo scheme.\"}", "{\"title\": \"Response to Reviewer #4 (part 1)\", \"comment\": \"Thank you very much for your constructive comments. We could improve the quality of our work significantly by responding to your comments.\", \"q1\": \"\\\"Binary Neural Network (BNN) has been gaining interest thanks to its computing cost reduction and memory saving.\\\" --> \\\"Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings.\\\" (will stop making English language corrections from here on)\", \"a1\": \"Thanks very much for pointing out grammatical errors in our writing. We updated some sentences including the above one in the revised manuscript. We will do our best to improve the writing in the final manuscript.\", \"q2\": \"\\\"Therefore, we argue that the sharp accuracy drop for the binary activation stems from the inefficient training method, not the capacity of the model.\\\" This could also be due to poor initialization in the binary case. e.g., it might make sense to initialize the binary network with bias=-0.5, so that the nonlinearity has a kink at pre-activation=0, rather than pre-activation=0.5.\", \"a2\": \"Per reviewer\\u2019s suggestion, we conducted more experiment on CIFAR-10 dataset by initializing the binary network with bias=0.5 so that the nonlinearity has a kink at pre-activation=0. Please note that bias=0.5 instead of -0.5 needs to be used to make the nonlinearity have a kink at pre-activation=0 for the current activation function because each pre-activation needs to be shifted by 0.5 after the bias value is added.\\nWe trained four different networks with different width factors (x1, x1.25, x1.5 and x2) and each network was trained for 4 runs and the mean results are reported below.\\n---------------------------------------------------------------------------------------\\n|\\t\\t|\\t\\t\\t\\t\\twidth factor\\t\\t\\t\\t\\t|\\n| Bias\\t|\\tx1\\t\\t|\\tx1.25\\t|\\tx1.5\\t\\t|\\tx2\\t\\t| \\n---------------------------------------------------------------------------------------\\n| 0\\t|\\t89.07\\t|\\t89.22\\t|\\t89.41\\t|\\t89.60\\t|\\n| 0.5\\t|\\t88.96\\t|\\t89.32\\t|\\t89.58\\t|\\t89.69\\t|\\n---------------------------------------------------------------------------------------\\nAs shown in the table above, we observe that initializing with bias=0.5 does not significantly improve the results.\\nIn addition, the sharp accuracy drop for the binary activation was also observed in many previous works where symmetric signum function was used for binary activation function (e.g. ABC-Net as shown in Table 1 in our paper) for which nonlinearity has a kink at pre-activation=0. Therefore, we believe that using bias value of 0 is not the reason for sharp accuracy drop.\", \"q3\": \"\\\"Unfortunately, it is not possible to measure the amount of gradient mismatch directly because the true gradient of a quantized activation function is zero almost everywhere. 
\\\" It *is* possible to measure the mismatch to the true gradient exactly. One could even train using the true gradient. It's just that the true gradient is useless.\", \"a3\": \"We agree that it is possible to calculate the true gradient. As the reviewer mentioned, the main point should have been that the measured results will not be \\u201cuseful\\u201d because the value will be zero almost everywhere. We replaced the word \\u201cpossible\\u201d with \\u201cuseful\\u201d in the revised draft as follows.\\n\\u201cUnfortunately, it is not useful to measure the amount of gradient mismatch directly because the true gradient of a quantized activation function is zero almost everywhere.\\\"\", \"q4\": \"Fig 1b -- this is a nice baseline.\", \"a4\": \"Thanks very much for your encouraging comments.\", \"q5\": \"\\\"the steepest descent direction, which is the direction toward the point with the smallest loss at given distance\\\" This is not the usual definition of steepest descent direction. If you're going to redefine this, should do so mathematically and precisely (for instance, you are going to run into trouble with the word \\\"distance\\\", since your coordinate discrete gradient more closely resembles an L\\\\infty-ball perturbation, rather than an L2-ball perturbation.\", \"a5\": \"To get rid of the confusion over the terminology, we decided not to use the term \\u201csteepest descent direction\\u201d and updated the sentence in the revised manuscript as follows.\\n\\n(original) \\u201cSince the true gradient of quantized activation network is zero almost everywhere, we cannot use the true gradient to find the steepest descent direction, which is the direction toward the point with the smallest loss at given distance.\\u201d\\n\\n(revised) \\u201cSince the true gradient of quantized activation network is zero almost everywhere, using the value of the true gradient does not provide a useful measure of the gradient mismatch problem.\\u201c\"}", "{\"title\": \"Response to Reviewer #4 (part 2)\", \"comment\": \"\", \"q6\": \"eq. 3: Note that this equation is equivalent to taking the true gradient of a function which has been boxcar-smoothed along each parameter. This may more closely resemble existing measures of deviation than you like. You should also consider the relationship to an evolutionary strategies style gradient estimate, which similarly provides an unbiased gradient estimate for a smoothed function, and which allows that estimate to be computed with fewer samples (at the cost of higher error).\", \"a6\": \"Thank you very much for pointing out this important point. We agree that the CDG is equivalent to the gradient of the smoothed loss function. We think their equivalence can provide better explanation about the theoretical background of CDG. Therefore, we revised section 4.1 to introduce CDG with theoretical background based on the smoothed loss function. Please note that our intention was to provide a proper gradient measure as an alternative to the true gradient for gradient mismatch estimation.\\nIn that respect, we think that the basic philosophy of evolutionary strategy-style (ES) gradient estimator is very similar to that of our CDG (Eq. 3) since the ES gradient estimator also provides (estimated) gradient for a smoothed function as the reviewer correctly described. 
We were not aware of the ES gradient estimate when writing the manuscript, and after a quick survey of the literature for the ES gradient estimate, we believe that ES gradient estimator can be another good candidate for the quantitative assessment of the gradient mismatch.\\n\\nTo verify, we measured the cosine similarity between the ES gradient estimator and the coarse gradient and observed that the results show a similar trend with the case using CDG (Figure 15 in Appendix H). Most notably, the ES gradient estimator-based assessment also shows that there is a large gap in the cosine similarity between the ternary activation case and the binary activation case with any STEs.\\n\\nDue to the time limitation in rebuttal period, we cannot thoroughly assess the relative strengths and weaknesses of the CDG estimator-based approach compared to the ES gradient estimator-based approach. However, we plan to continue to study the two approaches as the comparative assessment of the two approaches may open up an opportunity to find more sophisticated and practical gradient mismatch assessment methodology for quantized activation neural network. \\n\\nIn this study, our main goal for the derivation of CDG was to have a solid justification to apply ternary activation neural network to train binary activation neural network, so the similar results from another approach further strengthens our main motivation for developing the BinaryDuo scheme. \\n\\nTo reflect the update, we revised the manuscript as follows.\\n\\n - First, we changed the title of section 4.1 to \\u201cGradient of Smoothed Loss Function\\u201d from \\u201cCoordinate Discrete Gradient\\u201d. We also updated section 4.1 to introduce CDG with theoretical background based on the smoothed loss function.\\n - We replaced the term \\u201cCDG\\u201d with \\u201cgradient of smoothed loss function\\u201d in the abstract and conclusion. \\n - We updated Eq. 3 to explain the equivalence of the gradient of smoothed loss function and CDG. \\n - At the end of section 4.1, we added the following sentence: \\u201cNote that the evolutionary strategy-style gradient estimation is another good candidate which can be used for the same purpose as it provides an unbiased gradient estimate for a smoothed function in a similar way (Choromanski et al., 2018). Please refer to Appendix H for more information.\\u201d\\n - At the end of section 4.2, we added the following sentence: \\u201cWe also measured the cosine similarity between the estimated gradient using evolutionary strategies and the coarse gradient. The results show a similar trend to that of the results using CDG (Figure 15 in Appendix H). Most notably, the evolutionary strategy-based gradient mismatch assessment also shows that there is a large gap in the cosine similarity between the ternary activation case and the binary activation case with any STEs.\\u201d\\n\\nWe thank the reviewer again to help us to generalize and strengthen the gradient estimate flow to provide the strong justification to develop the BinaryDuo scheme.\", \"q7\": \"Sec. 4.2 / Figure 3: The results in this section will be *highly* sensitive to the choice of epsilon. You should discuss this, specify the epsilon used, and experimentally explore the dependence on epsilon.\", \"a7\": \"We measured the cosine similarities for various epsilon values (epsilon = 1e-4 to 1e-2) and the results show that overall trend is maintained regardless of the epsilon value although the absolute values change depending on the epsilon value. 
Detailed experimental results with various epsilon values are given in Appendix C.1).\"}", "{\"title\": \"Response to Reviewer #4 (part 3)\", \"comment\": \"\", \"q8\": \"\\\"The results indicate that the cosine similarity between coarse gradient and CDG can explain the relationship between gradient mismatch and performance of model better than previous approaches. \\\" Don't know that I followed this. Gradient mismatch is never formally defined, so it's hard to know what this says about its relationship. Additionally, CDG sounds more like something which is correlated with, rather than an explanation for, performance.\", \"a8\": \"We agree that CDG is correlated with performance rather than giving an explanation for performance. So we revised the above sentences as follows.\\n\\\"The results indicate that the cosine similarity between coarse gradient and CDG can provide better correlation with the performance of a model than previous approaches do. \\\"\", \"q9\": \"\\\"we shift the bias of BN layer which comes right before the activation function layer. \\\" Did you try using these bias values without pre-training as a ternary network? I suspect it would work just as well!\", \"a9\": \"Per reviewer\\u2019s suggestion, we conducted additional experiments in which, instead of using ternary pretrained model as initialization, we initialized the decoupled binary model with the shifted bias values used in section 5.1. For fair comparison, we tried the same amount of hyper-parameter search as that for the case shown in Figure 6. The results show that initializing the decoupled binary model with shifted bias still shows lower accuracy than using pre-trained coupled ternary model as initialization while it achieves a small increase in accuracy compared to initializing the model with zero bias values.\\nComparison among various training results of different schemes are shown below.\\n=====================================================================\\nScheme\\t\\t\\t\\t\\t\\t\\t\\t\\t\\t\\t\\t| Best Accuracy\\t|\\n----------------------------------------------------------------------------------------------------------------\\nBaseline binary\\t\\t\\t\\t\\t\\t\\t\\t\\t\\t|\\t\\t89.07%\\t\\t|\\nCoupled ternary\\t\\t\\t\\t\\t\\t\\t\\t\\t\\t|\\t\\t89.69%\\t\\t|\\nDecoupled binary with ternary initialization\\t\\t\\t\\t|\\t\\t90.44%\\t\\t|\\nDecoupled binary from scratch\\t\\t\\t\\t\\t\\t|\\t\\t88.93%\\t\\t|\\nDecoupled binary from scratch with shifted bias values\\t|\\t\\t89.21%\\t\\t|\\n=====================================================================\", \"q10\": \"\\\"Please note that BN layers followed by binary activation layer can be merged to the threshold of the binary activation layer, incurring no overhead at inference stage.\\\" Did not understand this.\", \"a10\": \"In BNN inference, computations for Batch-Normalization (BN) and the binary activation can be merged as a function which compares the weighted-sum value with a threshold [R1]. Therefore, modulating the BN bias with different values as in Eq. 5 does not incur additional overhead at inference stage.\\nTo avoid confusion, we revised the sentence as follows.\\n\\n\\u201cPlease note that computations for BN and the binary activation can be merged to a thresholding function. Therefore, calculating BNs with different bias values does not incur additional overhead at inference stage.\\u201d\\n\\nWe described the merging process in detail for your information below. \\n\\nLet $X$ be a weighted-sum vector. 
Applying batch-normalization on $X$ results in $Y$ as in Eq. R1.\\n\\n$Y = \\\\gamma*(X-\\\\mu)/\\\\sigma+\\\\beta$ (Eq. R1)\\n\\nHere, $\\\\gamma$,$\\\\beta$,$\\\\mu$, and $\\\\sigma$ denote for weight, bias, mean, and standard deviation of the batch-normalization layer after training is finished. Then, the batch-normalization output $Y$ goes through binary activation function producing a binary output $Z$.\\n\\n$Z = +1 (\\\\text{if}\\\\;\\\\; Y \\\\geq 0), -1 (\\\\text{else})$\\t\\t\\t (Eq. R2)\\n\\nHere we formulate the binary activation function as the sign function for concise expression, but the same development is possible with different binary activation functions without loss of generality. At inference stage, Eq. R1 and Eq. R2 can be merged as Eq. R3 since batch-normalization parameters are fixed.\\n\\n$Z = +1 (\\\\text{if} \\\\;\\\\; X \\\\geq \\\\mu-\\\\beta*\\\\sigma/\\\\gamma), -1 (\\\\text{else})$\\t\\t(Eq. R3)\\n\\nIn this way, batch-normalization can be merged to binary activation layer, incurring no overhead at inference stage.\\nFurthermore, scaling factor for weights can also be merged to the binary activation function similar to the batch-normalization. \\nThis method has already been used in many previous works on BNNs [R1,R2].\\n\\n[R1] Umuroglu, Yaman, et al. \\\"Finn: A framework for fast, scalable binarized neural network inference.\\\" Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2017.\\n[R2] Liu, Zechun, et al. \\\"Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm.\\\" Proceedings of the European Conference on Computer Vision (ECCV). 2018.\"}", "{\"title\": \"Response to Reviewer #4 (part 4)\", \"comment\": \"\", \"q11\": \"\\\"it is expected that the fine-tuning increases the accuracy even further\\\" Does it improve the accuracy further? Should state this as result, not prediction, and should have an ablation experiment showing this.\", \"a11\": \"We agree to state this as result rather than prediction. Our experimental results (Figure 6 in Section 6.2) indeed show that the fine-tuning procedure actually increases the accuracy further. Please note that decoupling the ternary model without the fine-tuning does not change computation results of the network. Therefore, the accuracy of the decoupled binary model is same as that of the coupled ternary model when fine-tuning is not applied. For example, in case of VGG-7 on CIFAR-10 dataset, the accuracy of decoupled binary model without fine-tuning is 89.69% which is same as that of the coupled ternary model (shown in Section 6.1). In contrast, during the fine-tuning process, the weight for each of the decoupled binary network is tuned separately. The results for VGG7 on CIFAR-10 dataset shows that 0.75% of accuracy improvement can be achieved by fine-tuning which results in 90.44% accuracy (shown in Section 6.1 again).\\n\\nThe improvement was also observed on ImageNet dataset. 
Accuracy results before and after fine-tuning for AlexNet, ResNet-18, and ResNet-18(+sc) are shown below.\\n=================================================================\\n|\\t\\t\\t\\t\\t|\\tBefore\\tfine-tuning\\t|\\t After fine-tuning\\t|\\n| Network\\t\\t| Top-1(%)\\t| Top-5(%)\\t| Top-1(%)\\t| Top-5(%)\\t|\\n---------------------------------------------------------------------------------------------------------\\n| AlexNet\\t\\t| 50.7\\t| 74.4\\t| 52.7\\t| 76.0\\t|\\n| ResNet-18\\t\\t| 58.8\\t| 81.3\\t| 60.4\\t| 82.3\\t|\\n| ResNet-18(+sc)\\t| 59.1\\t| 81.3\\t| 60.9\\t| 82.6\\t|\\n=================================================================\\n\\nTo reflect the update, we revised the sentence as follows.\\n\\u201cexperimental results show that the fine-tuning increases the accuracy even further. Detailed experimental results will be discussed in the next section.\\u201d\", \"q12\": \"\\\"Table 2 shows the validation accuracy of BNN in various schemes.\\\" Why not test accuracy?\", \"a12\": \"Table 2 shows the validation accuracy of BNN in various schemes on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset. ImageNet dataset has 1.28M images for training set and 50K images for validation set. Because the test set is not available to the public, (the test set is privately used for the ImageNet Large Scale Visual Recognition Challenge competition only), all the results of previous work in Table 2 are validation accuracy. For fair comparison with previous results, we also provided validation accuracy for our scheme.\\nIf required, it is possible to split the provided training set to train/valid sets and to use the provided validation set as a test set. However, the test accuracy from the experiment cannot be fairly compared with other results. Nevertheless, if reviewer asks for the test accuracy, we are willing to conduct addition training and report the test accuracy. Please let us know.\", \"q13\": \"Figure 6: What are the filled circles? What was the sampling grid for the HP search? The images have high spatial frequency structure that I suspect is an artifact of the interpolation function, rather than in the data.\", \"a13\": \"We apologize for lack of detailed description of the figure. We mistakenly moved the information along with other details to the Appendix A while we made the original submission fit to 8-page limit.\\nFor the experiment on CIFAR-10 dataset, we tried 13 weight decay values (from 1e-6 to 1e-2) and 10 different initial learning rates (from 1e-4 to 1e-1). Therefore, 130 different data are plotted in each contour plot. Since our goal for hyper-parameter search is to ensure that we are not with completely wrong hyper-parameters, we believe that this amount of sampling grid is large enough for our experiment. The 5 circles represent top 5 test accuracy points and the red circle is for the best result.\\n\\nWe updated the caption of Figure 6 as follows.\\n\\u201cFigure 6: Training results of VGG-7 on CIFAR-10 dataset in the order of coupled ternary model, decoupled binary model and decoupled binary model trained from scratch (left). 5 circles represent the top 5 test accuracy points and the red circle is for the best result. The best accuracy result is shown at the top left corner of each contour plot. Test accuracy of various models with different activation precision and training schemes (right).\\u201d\"}", "{\"title\": \"Response to Reviewer #1 (part 1)\", \"comment\": \"Thank you very much for your constructive comments. 
We could improve the quality of our manuscript thanks to your comments. Here are our responses.\", \"q1\": \"One main concern is that since the computation of the decoupled binary model and the coupled ternary model are the same, why does the decoupled binary model can finally to tuned to perform better than the original ternary model? Is there any intuition or theoretical explanation?\", \"a1\": \"When a ternary model is decoupled, a weight is divided into two identical weights as shown in Figure 4 (right). Before fine-tuning starts, the two weights have the same value, which is half of the original weight value. However, once fine-tuning begins, the two weights are allowed to have different values with backpropagation because they are connected to separate neurons with different threshold values after decoupling. Therefore, it becomes possible to find better minima by letting them independently be updated during the fine-training process.\", \"q2\": \"Yet another concern is that ternary activation basically can be viewed as binary+sparse activations, can it be even more computationally cheaper than the decoupled binary activation?\", \"a2\": \"As reviewer rightfully suggested, a ternary activation can be viewed as a binary+sparse activations and bitwise binary operations can be applied to compute it. However, we think that such an approach is more expensive to compute than the decoupled binary activation. Let us elaborate the detail process to assess the computing overhead.\\n\\nLet $Y$ be a dot-product result of a ternary input activation vector ($X$) and a binary weight vector ($W$) with length of n each. $X \\\\in \\\\{-1,0,1\\\\}$ has to be encoded to a 2-bit binary number: for example, $\\\\{10, 00, 11\\\\}$.\\nLet us call the higher bit of the 2-bit input which indicates whether the number is zero or not as $X^{MSB}$, and the other bit as $X^{LSB}$. Computations for ternary activation consist of 3 steps as follows.\", \"step_1\": \"To utilize bitwise operation for computing ternary activation, we first need to count the number of inputs that are not zero.\\n (1) $POPCNT(X^{MSB}) = m$\", \"step_2\": \"Then, we need to mask zero inputs and pack non-zero inputs which will participate in bitwise computation.\\n (2) $X^{LSB} \\\\in {0,1}^n \\u21d2 X^{\\\\emptyset,LSB} \\\\in {0,1}^m, W \\u21d2 W^{\\\\emptyset}$\\nTo pair the non-zero value inputs with corresponding weights, index information for the non-zero values needs to be kept. Note that the number of bits to store index information is not negligible compared to the bits required to store the ternary input values. In addition, it takes substantial time to find the matching pair of non-zero input and corresponding weight.\", \"step_3\": \"Lastly, with XNOR_POPCNT operation of $X^{\\\\emptyset,LSB}$ and $W^{\\\\emptyset}$, we can derive the desired output Y as follows.\\n (3) $Y = 2*{\\\\text{XNOR_POPCNT}(X^{\\\\emptyset,LSB}, W^{\\\\emptyset})}-m$\\n\\nIn summary, computing ternary activation with bitwise operations requires the extra index information for 0-input value because 0-input values need to be excluded from the bitwise operation. While the overhead of indexing process is manageable in high-bit precision case, it becomes non-negligible when input has 1-bit or 2-bit precision since the amount of extra index information becomes comparable with the amount of original input data. 
Therefore, we believe that computing ternary activation as binary+sparse activations is computationally more expensive than the decoupled binary activation.\\n \\nOn the other hand, the cost of conventional computing for ternary activation is actually lower than the cost of computing ternary activation which is regarded as binary+sparse activations.\\nIn fact, the cost of conventional computing for ternary activation is computationally same as that of computing decoupled binary activation. In this case, the ternary input $X \\\\in \\\\{-1, 0, 1\\\\}$ may be encoded to a 2-bit binary number $\\\\{00, 01, 11\\\\}$, in which the number of `1's indicates the relative order of the numbers. With such encoding, we can derive the desired output Y as follows.\\n$Y = 2*{\\\\text{XNOR_POPCNT}(X^{MSB},W)+\\\\text{XNOR_POPCNT}(X^{LSB},W)}-n$\\n\\nIn this case, the cost of computing ternary activation is twice as that of computing binary activation (XNOR_POPCNT). Since decoupling basically splits each ternary activation into two separate binary activations, compute cost of coupled ternary model and that of decoupled binary model are the same. Therefore, we think that cost of computing ternary activation is not cheaper than computing corresponding decoupled binary activation.\\nMeanwhile, the accuracy of the decoupled binary model is higher than that of the coupled ternary model after fine-tuning. In conclusion, the decoupled binary model can achieve higher accuracy with the same amount of compute cost as the coupled ternary model.\"}", "{\"title\": \"Response to Reviewer #1 (part 2)\", \"comment\": \"\", \"q3\": \"One line below eq (2), does STE mean the estimated gradient? How can the difference be calculated based on different things (i.e., activations and gradients)?\", \"a3\": \"After reading reviewer\\u2019s question, we noticed that there might be a confusion over the terminology \\u2018STE\\u2019. Therefore, we updated our manuscript to clarify our intention as follows.\\nWe use the term \\u2018STE\\u2019 to indicate the derivative of the approximation of the binary activation function used at backward pass. For example, derivative of HardTanh function was used as STE in Courbariaux et al. (2016). In Figure 2, g\\u2019(x) is for STE.\\nWe call the presumed activation function which is used at backward pass as differentiable approximation of binary activation function. For example, HardTanh or SwishSign is one of the differentiable approximations of the binary activation function. In Figure 2 and Eq. 2, g(x) is for the differentiable approximation of the binary activation function.\\nTherefore, STE (g\\u2019(x)) is the derivative of the differentiable approximation of binary activation function (g(x)).\\nThe cumulative difference in Eq. 2 was used to measure the difference between the actual binary activation function and its differentiable approximation. Yellow area in Fig.2 will help understanding Eq.2 graphically.\\n\\nWe thank the reviewer for pointing this out and helping us to improve our manuscript for better understanding.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"To be honest, I only read several papers on the neural network quantization. 
I am not familiar with this research topic, so I provide my judgement based on my own limited knowledge rather than thorough comparison with other related works.\\n\\n1. The motivation is clear. The 1-bit activation networks usually deteriorates the performance greatly.\\n2. The gradient mismatch for discrete variable did bring difficult for optimization. Do you mean 1-bit activation has larger gradient mismatch than other bits, at least in the defined cosine similarity by this paper?\\n3. As to Eq(3), Appendix C.1 describes the way to choose step size. I understand the logic, but for the detailed method, is it cross-validation with grid search or some other tricks?\\n4. Is there any relation between the decoupling method in Section 5 and the proposed estimated gradient mismatch in Section 4.2?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new measure of gradient mismatch for training binary networks, and additionally proposes a method for getting better performance out of binary networks by initializing them to behave like a ternary network.\\n\\nI found the new measure of gradient deviation fairly underdeveloped, and I suspect the method of converting ternary activations into binary activations works for a different reason than that proposed by the authors.\\n\\nThere were English language issues that somewhat reduced clarity, though the intended meaning was always understandable.\", \"detailed_comments\": \"\\\"Binary Neural Network (BNN) has been gaining interest thanks to its computing cost reduction and memory saving.\\\" --> \\\"Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings.\\\" (will stop making English language corrections from here on)\\n\\n\\\"Therefore, we argue that the sharp accuracy drop for the binary activation stems from the inefficient training method, not the capacity of the model.\\\"\\nThis could also be due to poor initialization in the binary case. e.g., it might make sense to initialize the binary network with bias=-0.5, so that the nonlinearity has a kink at pre-activation=0, rather than pre-activation=0.5.\\n\\n\\\"Unfortunately, it is not possible to measure the amount of gradient mismatch directly because the\\ntrue gradient of a quantized activation function is zero almost everywhere. \\\" It *is* possible to measure the mismatch to the true gradient exactly. One could even train using the true gradient. It's just that the true gradient is useless.\\n\\nFig 1b -- this is a nice baseline.\\n\\n\\\"the steepest descent direction, which is the direction toward the point with the smallest loss at given distance\\\"\\nThis is not the usual definition of steepest descent direction. If you're going to redefine this, should do so mathematically and precisely (for instance, you are going to run into trouble with the word \\\"distance\\\", since your coordinate discrete gradient more closely resembles an L\\\\infty-ball perturbation, rather than an L2-ball perturbation.\\n\\neq. 3:\\nNote that this equation is equivalent to taking the true gradient of a function which has been boxcar-smoothed along each parameter. 
This may more closely resemble existing measures of deviation than you like.\\n\\nYou should also consider the relationship to an evolutionary strategies style gradient estimate, which similarly provides an unbiased gradient estimate for a smoothed function, and which allows that estimate to be computed with fewer samples (at the cost of higher error).\\n\\nSec. 4.2 / Figure 3:\\nThe results in this section will be *highly* sensitive to the choice of epsilon. You should discuss this, specify the epsilon used, and experimentally explore the dependence on epsilon.\\n\\n\\\"The results indicate that the cosine similarity between coarse gradient and CDG can explain the relationship between gradient mismatch and performance of model better than previous approaches. \\\"\\nDon't know that I followed this. Gradient mismatch is never formally defined, so it's hard to know what this says about its relationship. Additionally, CDG sounds more like something which is correlated with, rather than an explanation for, performance.\\n\\n\\\" cosine similarity between coarse gradient and CDG can explain the relationship between gradient mismatch and performance of model better \\\" --> \\\" cosine similarity between coarse gradient and CDG can explain the relationship between gradient mismatch and performance of model better \\\"\\n\\n\\\"we shift the bias of BN layer which comes right before the activation function layer. \\\"\\nDid you try using these bias values without pre-training as a ternary network? I suspect it would work just as well!\\n\\n\\\"Please note that BN layers followed by binary activation layer can be merged to the threshold of the binary activation layer, incurring no overhead at inference stage.\\\"\\nDid not understand this.\\n\\n\\\"it is expected that the fine-tuning increases the accuracy even further\\\"\\nDoes it improve the accuracy further? Should state this as result, not prediction, and should have an ablation experiment showing this.\\n\\n\\\"Table 2 shows the validation accuracy of BNN in various schemes.\\\"\\nWhy not test accuracy?\", \"figure_6\": \"What are the filled circles?\\nWhat was the sampling grid for the HP search? The images have high spatial frequency structure that I suspect is an artifact of the interpolation function, rather than in the data.\\n\\n----\", \"update_post_rebuttal\": \"The authors have addressed the majority of my concerns, through both text changes and significant additional experiments. I am therefore increasing my score. Thank you for your hard work!\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies activation quantization in deep networks. The authors first compare the coordinate discrete gradient and those obtained by various kinds of straight-through estimators, and found 1-bit activation networks have much poorer gradient estimation than 2-bit ones. Thus they speculate that this explains the poorer performance of 1-bit activation networks than 2-bit ones. To utilize higher precision of activation, the authors then propose to decouple a ternary activation into two binary ones, and achieve competitive results on typical image classification data sets CIFAR-10 and ImageNet.\\n\\nThe paper is overall well-written and easy to follow. 
The decoupling method is simple and straightforward. The experiments are also well conducted. One main concern is that, since the computational costs of the decoupled binary model and the coupled ternary model are the same, why can the decoupled binary model finally be tuned to perform better than the original ternary model? Is there any intuition or theoretical explanation? Yet another concern is that ternary activation can basically be viewed as binary+sparse activations; can it be even cheaper computationally than the decoupled binary activation?\", \"question\": \"1. One line below eq (2), does STE mean the estimated gradient? How can the difference be calculated based on different things (i.e., activations and gradients)?\"}", "{\"comment\": \"As the ICLR policy strongly recommends code submission, we prepared an anonymized link for our code. The code can be accessed via the following link: https://drive.google.com/open?id=1NxZdaSB7gZPMVH35hqp1xaqZ7ilwVAtD\", \"title\": \"code submission\"}" ] }
S1xRxgSFvH
ShardNet: One Filter Set to Rule Them All
[ "Saumya Jetley", "Tommaso Cavallari", "Philip Torr", "Stuart Golodetz" ]
Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret. Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this. In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline. We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected. At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.
[ "neural network compression", "filter sharing", "network interpretability" ]
Reject
https://openreview.net/pdf?id=S1xRxgSFvH
https://openreview.net/forum?id=S1xRxgSFvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "4E0bPZySHS", "ByxW79whor", "rkxvNAcKjH", "SJxyyCctiB", "SyekhEVPsH", "r1g1Y4NPiB", "BkgfN4NPiS", "SkeWW4VvoH", "rkebzlRTKB", "ByeCD0Ritr", "HJxWWgojYr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740976, 1573841433433, 1573658159207, 1573658070930, 1573500070790, 1573500022579, 1573499946081, 1573499897333, 1571835913485, 1571708518393, 1571692536752 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/Authors" ], [ "ICLR.cc/2020/Conference/Paper2117/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2117/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2117/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This submission proposes an interesting experiment/modification of CNNs. However, it looks like this contribution overlaps significantly with prior work (that the authors initially missed) and the comparison in the (revised) manuscript seem to not clearly delineate and acknowledge the similarities and differences.\\n\\nI suggest the authors improve this aspect and try submitting this work to next venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Revised Paper\", \"comment\": [\"We thank the reviewers again for their useful feedback. As promised, we have now uploaded a revised version of the paper that contains:\", \"A more concise version of the introduction.\", \"A revision to the \\u2018Related Work\\u2019 section to include discussion about the latest literature on recurrent implementations of convolutional neural networks.\", \"An update to the \\u2018Results\\u2019 section to make the results of our E variants more discoverable.\", \"Increased the size of figures to make them more readable.\", \"We believe that the updated version of the manuscript makes it clearer how our work compares to existing recurrent CNN approaches, and better highlights the novel contributions made by our approach.\"]}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their useful feedback. We have identified some parts in the introduction and related work that could be shortened in the interest of making space for some interesting results which are at present in the appendix. This would also allow us to increase the size of Figures 3 and 4.\\n\\n\\u201cwhy not also experiment with different L. does VGGNet actually need L layers? what if only 2 layers are used? this will help with the overparametrization problem as well.\\u201d\\n\\nSimply decreasing the number of layers L would require us to increase the size of the filters by a significant amount if we are to draw the class inference from the entire image and not a small patch of the input image. Thus, any naive reduction in the number of layers would not help solve the problem of over-parameterisation, but might aggravate it instead. 
Historically, in moving from AlexNet to VGGNet, it has been observed that deeper networks with smaller filter sizes offer a good tradeoff between network footprint and performance. With ResNets, the filter sizes were made smaller still, and the pipeline was made much deeper while using skip-connections to alleviate the problem of vanishing gradients. This resulted in compact and highly effective network architectures.\\nFurther, several recent studies [1, 2] have attempted to understand when and why \\u2018deeper networks are more effective than shallower ones\\u2019. Our work doesn\\u2019t aim to answer this question. Instead, we present a unique approach to compressing deep networks without compromising the benefits of depth. We aim to achieve similar performance to a complex, multi-layered pipeline by iteratively applying a single layer (which can be seen as a shallow function). We demonstrate the efficacy of this idea for both plain feed-forward constructs and residual constructs. That said, it should be noted that some of the experiments we did perform did involve the use of networks of different depths. In particular, we show results for VGGNets with different depths in Tables 1 and 6, and for ResNets with different depths in Tables 2 and 8.\\n \\n\\u201cIf you're learning the filters through backprop, will they not always be learning to fit to 0 ?\\u201d\\n\\nTo simplify our experimental setup, we keep the number of filters at the first layer the same as at other layers. This amounts to applying some filters over input channels that are anchored to 0. Note once again that the filter set is shared across all layers and receives different inputs at different layers of the network. Since the input to the filters in question may not be 0 at other layers, these filters that are redundant at layer 0 can still capture information that is relevant for class disentanglement at layers that are further down the network hierarchy.\\n\\n\\u201cwhy not have the input layer to be a different filter with 3 channels and then have a common filter for all upstream layers?\\u201d\\n\\nUsing a non-shared set of filter weights for the first layer did not make much difference for the small datasets. For Tiny ImageNet and ImageNet, this indeed became relevant on two accounts - (a) to add flexibility through untied weights, and (b) to regulate the spatial resolution differently at the beginning of the pipeline and for the rest of the pipeline. Thus, our experimental setup was adapted to incorporate a standalone set of weights for the first layer. Please see the last paragraph of Section 4 on page 6 that states the following - \\\"Note that the shared variants of both these models, SL-ResNet34/50, keep the standalone convolutional layer unshared, since its kernel size is adjusted according to the dataset (3 \\u00d7 3 for Tiny ImageNet and 7 \\u00d7 7 for ImageNet).\\\" \\n\\n\\n\\u201c- in multiple places in the text, you refer to the number of \\\"independent\\\" parameters. I don't see why the parameters need to be independent.\\u201d\\n\\nSome papers have referred to the parameters as independent, for instance [3]. This may be because, under proper regularisation, the number of non-zero weights can be thought to define the dimensionality of the function space of the network, i.e. the number of axes of the function space. But the authors agree that the use of the word independent is a slight abuse of terminology. 
We will replace this with the word \\u2018individual\\u2019.\\n\\n\\u201cyou add separate \\\"linear\\\" layers for the 'SL' models. Can you describe how many additional parameters you have to learn?\\u201d\\n\\nThe count of additional parameters introduced by the linear layers for VGGNet variants can be calculated as the difference between the number of parameters for the SL and S variants in Table 6 in the appendix. We will update the caption accordingly to make this clear.\\n\\n[1] Learning Functions: When Is Deep Better Than Shallow, Hrushikesh Mhaskar, Qianli Liao, Tomaso Poggio\\n[2] On the Number of Linear Regions of Deep Neural Networks, Guido Mont\\u00fafar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio\\n[3] Learning Implicitly Recurrent CNNs Through Parameter Sharing, Pedro Savarese, Michael Maire, ICLR 2019\"}", "{\"title\": \"Response to Reviewer 1: additional comments\", \"comment\": \"\\\"This paper considers only the number of learnable parameters of a network. However, in many cases, for applications, it is more important to save memory (which is not the case as the activations should still be saved for backpropagation) and computation. In my understanding the final computation of the model is actually increased because it uses more channels at lower layers (which correspond to high-resolution feature maps). Authors should comment about that.\\\"\\n\\nWe consider the memory and computational requirements of our approach at both training and inference time, in comparison to the baseline networks. Note that for real-world applications, it is common to focus more on the costs at inference time, since networks are commonly trained on powerful multi-GPU clusters, and so an increase in memory and/or compute requirements at training time usually does not present insuperable problems.\\nAt training time, our requirements for both are higher, owing to our reliance on a single convolutional filter having a higher number of channels. That requires more memory and computation, due to the fact that backpropagated gradients have to be computed and stored for all the layers and weights.\\nAt inference time, the activations and gradients no longer have to be stored, which already significantly reduces the memory requirements. Additional memory can be saved by simply loading a single copy of the shared layer (which can then be applied repeatedly), in which case our overall memory usage will be much less than the baseline (which is not able to do this). This is useful in memory-constrained environments, such as are common in deployment scenarios (e.g. self-driving cars, robotics, smartphones, etc.). Here, the baseline would have to repeatedly swap individual layers in and out of memory, incurring significant I/O overhead. However, our approach would not suffer from this problem. This gives our approach a significant advantage in such scenarios.\\n\\n\\\"In section 4, VGGNet-like Architectures and ResNet-like architectures the authors mention a baseline E-VGGNet or E-ResNet with exactly the same architecture as the shared weights network (thus same number of channels at each layer), but without sharing. However I could not find the performance of that interesting baseline in the results.\\\"\\n\\nThe relevant results can be found in Tables 6 and 8 in section A.4.1 of the appendix. In the paper as it stands, these are mentioned at the end of Section 5. 
However, we accept that the way in which they were mentioned was slightly too general, and we will update the text to reference the tables explicitly in order to make the results more discoverable.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their helpful feedback and for the reference to the Qianli Liao and Tomaso Poggio paper. Indeed this work is relevant to our current submission and we are working towards discussing it in the paper. Please find a detailed response contextualising our work and highlighting its novelty vis-a-vis the reference in Reply #1. Further, it is true that the idea of sharing convolutional weights is not new; we mention several such methods in the related work, but we believe that our approach is sufficiently different from those other methods to qualify as novel. The rest of the points concern clarifications, and we are working towards answering them in our next post.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their helpful feedback and for the reference to the Savarese and Maire paper which we unfortunately missed. We are now in the process of integrating a comparison to it in our submission. For the reviewer\\u2019s concerns regarding novelty, please see our Reply #1.\"}", "{\"title\": \"Reply #1: Related Work and Novelty 2/2\", \"comment\": \"In conclusion, when viewed in the light of these two additional studies, our work can be seen as even more timely. While there is some overlap between the S variant of our approach and a special case of [2], our SL variant goes beyond it and provides additional flexibility. For [1], we would like to quote a sentence from their discussion - \\u201cA radical conjecture would be: the effectiveness of most of the deep feedforward neural networks, including but not limited to ResNet, can be attributed to their ability to approximate recurrent computations that are prevalent in most tasks with larger than shallow feedforward networks. This may offer a new perspective on the theoretical pursuit of the long-standing question \\u201cwhy is deep better than shallow\\u201d\\u201d. Our study puts this conjecture to the test of experimental rigour for two state-of-the-art deep network architectures - VGGNet and ResNet - and a variety of visual datasets including CIFAR, TinyImageNet and ImageNet, while opening new pathways of thinking about the simplifications in conventional deep nets and how they might be reconciled with biological models.\\n\\n[1] Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex, Qianli Liao and Tomaso Poggio\\n[2] Learning Implicitly Recurrent CNNs Through Parameter Sharing, Pedro Savarese, Michael Maire, ICLR 2019\"}", "{\"title\": \"Reply #1: Related Work and Novelty 1/2\", \"comment\": \"We thank reviewers R1 and R3 for drawing our attention to [1] and [2] respectively. These studies are indeed highly relevant to our current work and we regret not having seen them earlier. We are now in the process of adding a comparison to their methodology and findings in our submission. In the meantime, we would like to emphasise that the existence of this literature does not diminish the novelty of our work but in fact adds to it. We elaborate further as follows.\\nThe authors of [1] discuss ResNets that share weights between individual residual units as implementing a time-invariant homogeneous function and thus being a generalisation of RNNs. 
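For concreteness, here is a minimal runnable sketch (our illustration only, with hypothetical layer sizes; it is not the exact architecture of [1] or of our variants, and whether the normalisation layers are also shared is a separate design choice) of what such weight sharing across residual steps can look like in PyTorch:\\n\\n
import torch\\n
import torch.nn as nn\\n
\\n
class SharedStepNet(nn.Module):\\n
    # One set of conv filters applied at every step, so the parameter count\\n
    # is independent of the effective depth (the number of steps).\\n
    def __init__(self, channels=64, steps=8, num_classes=10):\\n
        super().__init__()\\n
        self.stem = nn.Conv2d(3, channels, 3, padding=1)  # unshared input layer\\n
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)  # reused at every step\\n
        self.norms = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(steps))\\n
        self.head = nn.Linear(channels, num_classes)\\n
\\n
    def forward(self, x):\\n
        x = torch.relu(self.stem(x))\\n
        for norm in self.norms:  # the same self.shared weights at each step\\n
            x = x + torch.relu(norm(self.shared(x)))  # residual, RNN-like update\\n
        return self.head(x.mean(dim=(2, 3)))\\n
\\n
Read this way, the stack behaves like a recurrent cell unrolled in depth, which is precisely the sense in which shared-weight ResNets generalise RNNs. 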
On the experimental front, however, they only ever analyse CIFAR-10. When experimenting with weight-sharing across time, the transition functions for different states are implemented using non-shared convolutional layers and the recurrence function comprises a predetermined number of self-transitions interspersed with transitions across states. In this regime, their 2-state fully recurrent network achieves an accuracy vs. size tradeoff of ~90% for 298K parameters. In the further case where they examine weight-sharing across states (using a single set of convolutional weights across all layers for a 3-state shared ResNet using 64 features), they achieve an accuracy of ~85% for 40K parameters. This weight-sharing scheme closely matches that of our Shared-VGGNet variant with 64 features. Our model is able to achieve 85.5% accuracy for 37K parameters (See Table 6(a) in Appendix), and we are able to obtain comparable results without ever using any residual connections. Further, the formulation of [1] treats the transition functions (convolutional layers) as being either the same or different; there is never any soft-association between the different transition functions. In comparison, for our SL variant, we can consider the combination of a linear layer followed by a convolutional layer as a composite layer. These composite layers (which are equivalent to the transition functions in [1]) are then associated with the shared layers by virtue of their 2D filter maps being a linear combination of the shared filter maps. This association lends greater flexibility to our model without incurring a huge cost in parameter count. Notably, the SL variant for VGGNet with 64 features achieves an accuracy of 87.7% for 53.3K parameters (See Table 6(a) in Appendix), and the SL variant of ResNet with an equal number of features achieves an accuracy of 89.7% for 45.3K parameters (See Table 8(a) in Appendix) - a significant improvement in performance over the basic model of [1] without a huge increase in the number of parameters.\\nThe above described formulation (of the SL variant) is also what differentiates our work from [2]. The authors of [2] propose to model each convolutional layer as a linear combination of similarly-sized layer templates from a bank (they use one bank per size or scale level). As observed by the reviewer, if all the layers in their network had the same size, and if they had used precisely one bank containing a single template (a case that was not addressed in the original work), then their formulation would have mapped to our S-variant. However, in any case, we take this paradigm further with the SL variant by modeling the 2D filter maps of some layers as a linear combination of the shared filter maps. Notice also that in terms of the experimental work, [2] only ever analyses the Wide-ResNet architecture. For the CIFAR dataset, their model achieves 96% for 12M parameters. Compare this to our ~94% accuracy for 0.8M parameters. Clearly, the approach of [2] doesn\\u2019t focus on network compression while our work aims to achieve a competitive balance between accuracy and parameter count. Particularly for ImageNet, a most challenging vision dataset as noted by the reviewers, their model doesn\\u2019t demonstrate any parameter saving whilst showing a marginal 0.26% (top-1) and 0.1% (top-5) increase in accuracy for a total of 69M parameters. 
In comparison, we investigate the compression aspects of our approach for ImageNet and present two shared variants of ResNet with 3.8 and 23 times fewer parameters than [2] (18.1M and 3.2M in total, respectively) while losing a few points in accuracy; it is this tradeoff that is a crucial contribution of our study (as further evidenced by the results we report in Table 2b, where we show that our accuracy is comparable to that of other \\u201csharing-based\\u201d approaches, while achieving a higher compression rate).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to modify a standard CNN by requiring all of its layers to share the same filter set, essentially allowing it to be expressed as an iterative (or recurrent) network. This also has the effect of forcing the same number of feature channels to be used throughout the network. For ResNet-like architectures with bottleneck blocks, sharing occurs at the level of the block (3 conv layers in series that are repeated). Another variant of the sharing pattern inserts unshared 1x1 convolutional layers after shared layers or blocks; this adds some flexibility while still reducing parameters compared to standard CNNs.\\n\\nOn CIFAR-10, CIFAR-100, and Tiny ImageNet, experiments demonstrate the ability of the sharing scheme to reduce parameters without impacting accuracy (or more drastically reduce parameters at the cost of accuracy) (Tables 1ab, 2a).\\n\\nHowever, results are less compelling on ImageNet (Table 2b), where SL-ResNet-50 and SL-ResNet-34 are both less accurate than the baseline standard ResNets as well as ShaResNet [Boulch, 2018]. The accuracy gap between SL-ResNet and ResNet on ImageNet (Table 2b) is significant (approx 5% Top-1 and 2% Top-5 accuracy) and might make it difficult to justify use of the proposed method in this setting. As ImageNet is the most challenging of the datasets used, this is cause for concern.\\n\\nThere is also a major concern with respect to novelty and related work. Unfortunately, the paper appears to have completely missed the following highly related publication from ICLR 2019:\\n\\nLearning Implicitly Recurrent CNNs Through Parameter Sharing\\nPedro Savarese, Michael Maire\\nICLR 2019\\n\\nThis prior work proposes a network structure in which a set of L layers share a set of k parameter templates. The templates and sharing coefficients are learned as part of the standard training procedure. This prior work demonstrates both parameter savings and accuracy improvements when training networks in this manner. Additionally, this prior work shows that some learned networks can be converted into explicitly recurrent forms as a post-processing step.\\n\\nThe paper under review appears to be a special case of this prior work with the number of templates k = 1 (shared between all layers). It is possible this is an important special case, worthy of significant attention on its own. Notably, [Savarese and Maire, 2019] considered sharing across at most all layers within the same stage of a residual network, rather than all layers in the network. However, arguing for the importance of this special case would require focused experimental comparison and analysis, which is not present in the current version of the paper.\\n\\nNovelty is clearly limited in light of this overlooked prior work. 
At minimum, citation, discussion, and experimental comparison to the above ICLR 2019 paper are necessary.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an approach to reduce the number of parameters of a neural network by sharing the convolutional weights among layers. To convert the first layer into the right number of features, padding is used. The last layer I suppose is instead a normal classifier on the fully connected representation (for VGG) or on the average pooling (for ResNet). Results on different datasets and architectures show that the proposed approach can highly compress the number of needed parameters with a minimal reduction of the network test accuracy.\\n\\nI lean to reject this paper because, in my opinion, it is very similar to (\\\"Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex\\\" Qianli Liao and Tomaso Poggio), which is not mentioned in related work. This paper, published in 2016, was already proposing the idea of reducing the number of parameters of ResNet by sharing the weights of each layer and therefore considering ResNet with shared weights as a recurrent net.\\nIn this paper the settings are slightly different: the authors also add a variant with additional 1x1 convolutions and also show results with additional compression. However, in my opinion, the main idea is the sharing of convolutional weights, and this is not new.\", \"additional_comments\": [\"This paper considers only the number of learnable parameters of a network. However, in many cases, for applications, it is more important to save memory (which is not the case as the activations should still be saved for backpropagation) and computation. In my understanding the final computation of the model is actually increased because it uses more channels at lower layers (which correspond to high-resolution feature maps). Authors should comment about that.\", \"In section 4, VGGNet-like Architectures and ResNet-like architectures the authors mention a baseline E-VGGNet or E-ResNet with exactly the same architecture as the shared weights network (thus same number of channels at each layer), but without sharing. However I could not find the performance of that interesting baseline in the results.\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose to use the *same* convolutional layer in every layer of a DNN. The network effectively is converted into repeatedly applying the same convolutional filter at multiple scales. The idea is motivated by wavelet decompositions and related work. The authors show that by repeatedly applying the same filter, the number of parameters that need to be stored for a model reduces proportionally to the depth of the network. At the same time, experimental evidence is provided that the performance of these models is not affected, when compared to the baseline (full) model.\", \"comments\": [\"the paper is well written, but overly verbose. 
there are several areas where the explanation can be compressed, making room to add more informative details (which are in the appendix), or increasing the size of the figures (which are too small)\", \"the paper seems lacking a bit in experiments. If the authors can show that the same filter applied L times achieves about the same performance, why not also experiment with different L? i.e. does VGGNet actually need L layers? what if only 2 layers are used? this will help with the overparametrization problem as well.\", \"Figures 3 and 4 are hard to read. Please increase their size.\", \"page 5 line 2: how does padding the input with (n-3) empty channels affect performance? If you're learning the filters through backprop, will they not always be learning to fit to 0 ? or am I missing something?\", \"along the above lines, why not have the input layer to be a different filter with 3 channels and then have a common filter for all upstream layers?\", \"in multiple places in the text, you refer to the number of \\\"independent\\\" parameters. I don't see why the parameters need to be independent. Unless there's some orthogonalization happening at the weights, calling them independent is incorrect.\", \"paragraph above sec4: you add separate \\\"linear\\\" layers for the 'SL' models. Can you describe how many additional parameters you have to learn?\"]}" ] }
HJxTgeBtDr
Towards Interpretable Evaluations: A Case Study of Named Entity Recognition
[ "Jinlan Fu", "Pengfei Liu", "Xuanjing Huang" ]
With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 does not tell us \emph{why} or \emph{how} a particular method is better and how dataset biases influence the choices of model design. In this paper, we present a general methodology for {\emph{interpretable}} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text. The proposed evaluation method enables us to interpret the \textit{model biases}, \textit{dataset biases}, and how the \emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches. By making our {analysis} tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.
[ "interpretable evaluation", "dataset biases", "model biases", "NER" ]
Reject
https://openreview.net/pdf?id=HJxTgeBtDr
https://openreview.net/forum?id=HJxTgeBtDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hYlQMrsvmf", "Bye062bisS", "rkxnw3-isS", "Syg_KsbijH", "SklmjObioH", "r1gOzubiir", "BJlGfDZssr", "H1xyV2ljsH", "ByljmRZSqH", "Skgm6QnaYr", "rkxmuiLnYH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740943, 1573752006448, 1573751907621, 1573751680452, 1573750938666, 1573750800396, 1573750537972, 1573747751023, 1572310563340, 1571828666690, 1571740523133 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/Authors" ], [ "ICLR.cc/2020/Conference/Paper2116/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2116/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2116/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper diligently setup and conducted multiple experiments to validate their approach - bucketizating attributions of data and analyze them accordingly to discover deeper insights eg biases. However, reviewers pointed out that such bucketing is tailored to tasks where attributions are easily observed, such as the one of the focus in this paper -NER. While manuscript proposes this approach as \\u2018general\\u2019, reviewers failed to seem this point. Another reviewer recommended this manuscript to become a journal item rather than conference, due to the length of the page in appendix (17). There were some confusions around writings as well, pointed out by some reviewers. We highly recommend authors to carefully reflect on reviewers both pros and cons of the paper to improve the paper for your future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #1-Part3\", \"comment\": \"For other detailed suggestions, we have refined our paper based on your feedback:\\n1)\\u00a0\\u00a0\\u00a0\\u00a0Clarify the description of \\u201cSupplementary exam\\u201d\\n2) \\u00a0Re-organize the Sec.2.2 and remove some repetition in methodological\\nperspective\\n3)\\u00a0\\u00a0\\u00a0\\u00a0Merge Sec.3 into Sec.2 \\n4)\\u00a0\\u00a0\\u00a0\\u00a0Add a more intuitive explanation of the measures defined in Sec.4.3 (3.3 in new version)\\n\\nHope we address your concern correctly and look forward to your feedback again.\"}", "{\"title\": \"Response to Review #1-Part2\", \"comment\": \"We appreciate your thorough review and helpful suggestions. We will try to address your questions below.\", \"q1\": \"\\u201cThese metrics are given in section 4 but here again only from a formal point of view: it is very difficult for the reader to understand how to interpret them and how to use them for a practical case.\\u201d\", \"a1\": \"Thanks for your feedback. Here, it is the generality of the methodology that allows us to\\ndescribe it from a formal point of view. In a practical case, this method could be adapted to other tasks easily. 
Below, we would like to give a more specific explanation of how to use them for a practical case.\\nAs shown in the Table (Part1), for a given NLP task, once we determine related attributes and a Bucket Strategy, we can calculate the proposed metrics in Sec.4.2 (Sec.3.2 in the new version) and make similar analyses.\", \"q2\": \"\\u201cIn the end, the proposition is a formalisation of the simple error analysis which is commonly\\ndone when trying to improve a machine learning system. The advantage of the\\nmethod could be to introduce some metrics to make the error analysis more\\nautomatic.\\u201d\", \"a2\": \"Our method shares some common properties with error analysis, but goes beyond it:\\n1) Regarding model analysis, (automatic) error analysis suffers from the confirmation bias (Tab.1) problem while our method doesn\\u2019t.\\nMoreover, the development of error analysis stopped at focusing solely on a single dataset [1][2][3]. Many challenges arise when we take the multi-dataset setting into account. This work takes a step towards diagnosing the strengths and weaknesses of different models under different datasets.\\n2) Regarding dataset analysis, the proposed methodology enables us to quantify the data biases, knowing more about the characteristics of each dataset, which is beyond the grasp of error analysis.\\n\\n[1] Joke Daems, Lieve Macken, and Sonia Vandepitte. On the origin of errors: A fine-grained analysis of mt and pe errors and their relationship\\n[2] Jonathan K. Kummerfeld and Dan Klein. Error-driven analysis of challenges in coreference resolution\\n[3] Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. Parser showdown at the wall street corral: An empirical investigation of error types in parser output\", \"q3\": \"\\u201cfamiliarity : test/train distribution should be the same.\\u201d\", \"a3\": \"We\\u2019re sorry, but we did not fully understand your statement \\u201ctest/train distribution should be the same.\\u201d Do you mean that the calculation of the familiarity requires the test/train distributions to be the same? If so, the answer is that the calculation of the familiarity does not require that.\", \"a4\": \"Do you mean whether we calculate F_k on the train set because it is bigger?\\nIf so, the answer is no. F_k is defined over the training set, which is why we call it #familiarity#: it reflects how the training-set statistics of an attribute influence the test performance.\", \"q4\": \"\\u201cFk computed on train set because it is bigger ?\\u201d\", \"q5\": \"\\u201cit allows to study the impact of the number of occurrences in the training set. Is it more interesting than a learning curve ?\\u201d\", \"a5\": \"The main contribution of this paper is not only to study how the occurrences of some attributes in the training set influence different models, but also to investigate how different datasets are sensitive to occurrences of some attributes in the training set. A learning curve is far from this goal.\", \"q6\": \"\\u201cmulti attribute familiarity : risk of metric explosion ? how to select the attributes ?\\u201d\", \"a6\": \"Multi-attribute familiarity may carry a risk of metric explosion, but at the same time, it could encourage us to explore new meaningful measures. For example, \\u201cMF-et\\u201d could quantify the category ambiguity phenomenon. (Analogously, deep neural networks achieve impressive performance at the cost of architecture engineering.) 
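To ground what the bucketize-then-evaluate mechanics look like in practice, here is a bare-bones sketch (illustrative only; attribute_fn, score_fn and the edge values are hypothetical placeholders rather than the API of our released tool):\\n\\n
from collections import defaultdict\\n
\\n
def bucketized_scores(test_examples, attribute_fn, score_fn, bucket_edges):\\n
    # In the spirit of the R-Bucket strategy: partition test examples by\\n
    # pre-defined value ranges of one attribute, then apply the usual\\n
    # metric (e.g. span-level F1) to each bucket separately.\\n
    buckets = defaultdict(list)\\n
    for ex in test_examples:\\n
        value = attribute_fn(ex)  # e.g. entity length or entity density\\n
        for lo, hi in bucket_edges:  # e.g. [(0, 2), (2, 4), (4, 100)]\\n
            if lo <= value < hi:\\n
                buckets[(lo, hi)].append(ex)\\n
                break\\n
    return {rng: score_fn(exs) for rng, exs in sorted(buckets.items())}\\n
\\n
A multi-attribute variant simply keys the buckets by a tuple of ranges, which is exactly where the number of combinations starts to grow. 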
We would like to search for new combinations of attributes, which we leave as future work since it is out of scope for this paper.\", \"q7\": \"\\u201cThe encoding of the model name is not clear\\u201d\", \"a7\": \"Although we have tried our best to name these 11 models intuitively and precisely (via highlighted sub-words and detailed model choices), we still think we could do it better. We have added more explanation in our revised version.\", \"q8\": \"\\u201ca metric on all the dataset for each model could be computed to decide which one is the best\\noverall\\u201d\", \"a8\": \"We are not sure if we have understood your meaning of the term \\u201cmetric\\u201d: Do you suggest we evaluate each model based solely on one attribute and find another best overall? Here, it would make no sense since different attributes just provide more fine-grained results, which will not lead to a new best overall performance.\", \"q9\": \"\\u201canalysis of Fig 4 : R-eLen does not exist (R-Ele). what is eta ?\\u201d\", \"a9\": \"Thanks for catching the typos. We have corrected them: \\u201cR-Ele, R-eLen -> R-eLen\\u201d, eta -> zeta\", \"q10\": \"\\u201cfigure 2 : where are the links to levels ?\\u201d\", \"a10\": \"Fig.2 is used to aid the understanding of the \\u201cAttribute Definition\\u201d (Sec.3.1) and \\u201cBucketization Strategy\\u201d (Sec.3.2).\"}", "{\"title\": \"Response to Review #1-Part1\", \"comment\": \"*****************************************************************************************\\n
Tasks                          Attributes                                       Measures          Bucket Strategy\\n
*****************************************************************************************\\n
Machine Translation            sentence length                                  BLEU              R-Buck\\n
Machine Translation            word (or N-gram) frequency in the training set   Accuracy*         R-Buck\\n
Machine Translation            word POS-tag in the training set                 Accuracy*         R-Buck\\n
Machine Translation            words in reference file                          Word likelihood   R-Buck\\n
------------------------------------------------------------------------------------------------------------------------\\n
Summarization (Ext. or Abs.)   sentence length                                  ROUGE             R-Buck\\n
Summarization (Ext. or Abs.)   compression of summary                           ROUGE             R- or F-Buck\\n
Summarization (Ext. or Abs.)   density of summary                               ROUGE             R- or F-Buck\\n
Summarization (Ext. or Abs.)   volume overlap                                   ROUGE             R-Buck\\n
------------------------------------------------------------------------------------------------------------------------\\n
Summarization (Ext.)           position of each sentence                        ROUGE/Accuracy    F-Buck\\n
Summarization (Ext.)           OOV rate of sentence                             ROUGE             R-Buck\\n
------------------------------------------------------------------------------------------------------------------------\\n
Semantic Matching              length of sent1 or sent2                         Accuracy          R-Buck\\n
Semantic Matching              Func(sent1, sent2)                               Accuracy          R-Buck\\n
Semantic Matching              OOV                                              Accuracy          R-Buck\\n
------------------------------------------------------------------------------------------------------------------------\\n
QA                             answer length, type, position                    Matching F1       F-Buck\\n
QA                             document length                                  Matching F1       R-Buck\\n
QA                             query type                                       Matching F1       F-Buck\\n
------------------------------------------------------------------------------------------------------------------------\\n
Text Classification            sentence/word length                             Accuracy          R-Buck\\n
Text Classification            OOV                                              Accuracy          R-Buck\\n
Text Classification            sentence familiarity                             Accuracy          F-Buck\\n
------------------------------------------------------------------------------------------------------------------------\\n
Sequence labeling              Similar to this work\\n
------------------------------------------------------------------------------------------------------------------------\\n
\\nSimilar to the table in our response to R3, the \\u201cTasks\\u201d column shows different types of tasks, \\u201cAttributes\\u201d denotes the criterion that we use to divide the test set, \\u201cMeasures\\u201d represents the measure we use to evaluate each divided sub-set, and \\u201cBucket Strategy\\u201d shows which types of bucketization methods could be adopted.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your encouraging review. We will continue to improve the draft in the revised version.\", \"q1\": \"\\u201cThe bucketization idea is not something out of the park novel. It is probably something already being used in practice. However, delineating the procedure and suggesting quantifiable statistics and designing experiments to illustrate how these can be used to draw qualitative conclusions is something that is very interesting and useful to the community as a whole.\\u201d\", \"a1\": \"We\\u2019re quite excited that you have pointed out the most challenging part of our work.\\nYes, when we take multiple attributes, models, and datasets all together, the most challenging thing is how to derive specific conclusions based on these tremendous results. In this paper, we overcome the difficulty by designing several meaningful measures, which can help us understand the relative merits between models quantitatively. This work would also like to show that, when multiple datasets and models are ready, the time is ripe for us to shift from data-driven learning to data-driven analyzing (conducting an analysis over plenty of experimental data with the help of meaningful measures).\", \"q2\": \"\\u201cWhile the authors have tried to state that the method is \\\"general\\\" and goes beyond NER, I am not sure if that is the case. 
The creation of attribute buckets is vital for any further analysis; it's not clear how the method can be adapted to more general settings unless such attributes and buckets can be created easily (e.g. using domain knowledge).\\u201d\", \"a2\": \"We try to address your concern by presenting a detailed description of general settings for other tasks. Please refer to our first answer to R3.\", \"q3\": \"\\u201cthe paper is well-written and easy to understand, albeit some of the related work seems a little unrelated to the task at hand\\u201d\", \"a3\": \"Thanks for your suggestion; we have revised this in our new version.\"}", "{\"title\": \"Response to Review #3-Part2\", \"comment\": \"Q1: \\u201cHowever, it seems to be somewhat tailored to the NER task. My question is: How well does the proposed method generalize to other NLP tasks without attributes? Similarly, how well do the proposed bucketization strategies generalize beyond the NER task?\\u201d\", \"a1\": \"To adapt this methodology to other tasks, we honestly admit that the process of attribute definition usually requires some domain knowledge. However, we would like to show that the process is not complicated, since many task-agnostic attributes could be applied, such as OOV rate or sentence length. Importantly, we believe each domain-specific expert should take responsibility for driving the development of the domain based on their understanding of the task, and hopefully, this work could provide such a methodology in which domain knowledge from different tasks could be utilized.\\nAs a preliminary summary in the table (Part1), we share some task-specific definitions of attributes, where \\u201cTasks\\u201d represents different types of tasks; \\u201cAttributes\\u201d denotes the criterion that we use to divide the test set, and \\u201cMeasures\\u201d represents the measure we use to evaluate each divided sub-set. \\u201cRelated ref.\\u201d shows the corresponding papers that have adopted the attribute for fine-grained evaluation.\", \"q2\": \"In Section 4.2, for the R-Bucket strategy it is stated as having the requirement of discrete and finite attributes. Based on the equations of the other two strategies (R-bucket and F-bucket), it seems that they also have the requirement of having discrete attributes. Is this indeed the case? If so, it should be explicitly indicated. Having said that, this raises another question: Is this protocol exclusive to tasks/problems with explicit discrete attributes?\", \"a2\": \"We are sorry for not making it clear, and we have refined the description of the bucket strategy to make it clearer. Specifically, the R- and F-Bucket strategies can be applied to attributes with discrete values and continuous values. Take, for example, the \\u201centity density\\u201d attribute we used in this paper: although its value is continuous, we can discretize it into different ranges (i.e. low, medium, high) and then adopt the R- or F-Bucket strategies.\", \"q3\": \"\\u201cLast paragraph of Section 4.2 summarizes ideas that were just presented. It feels somewhat\\nredundant. I suggest removing it in favor of extending the existing discussions\\nand analysis.\\u201d\", \"a3\": \"Thanks for your granular suggestions, and you can see the modification in our revised version.\", \"q4\": \"\\\"something very desirable for every evaluation. As such, in my opinion, the \\\"interpretable\\\" tag associated to the proposed method is somewhat out of place. 
Having said that, I would recommend removing the \\\"interpretable\\\" tag and stressing the contribution of this manuscript as an evaluation protocol.\\\"\", \"a4\": \"Thanks for your constructive suggestion, and we have carefully considered it. However, we have not taken it yet in our revised version, and we would like to share our reasons:\\n1) In this work, we aim to interpret the model biases, dataset biases, and their correlation. Some of the previous work also involves the \\\"bucketize-then-evaluate\\\" idea (as we have listed in the above table and mentioned in the introduction section), whereas it lacks a quantitative process for analyzing these biases.\\n2) We also would like to show that this attribute-aided evaluation method could be a way for us to understand our black-box models and datasets.\\n\\nThanks again for your insightful comments! We have already refined the paper, please check the latest version. We hope we have answered your questions and look forward to your feedback again!\"}", "{\"title\": \"Response to Review #3-Part1\", \"comment\": \"*****************************************************************************************\\n
Tasks                          Attributes                                       Measures          Related Ref.\\n
*****************************************************************************************\\n
Machine Translation            sentence length                                  BLEU              [1]\\n
Machine Translation            word (or N-gram) frequency in the training set   Accuracy*         [2]\\n
Machine Translation            word POS-tag in the training set                 Accuracy*         [3]\\n
Machine Translation            words in reference file                          Word likelihood   [3]\\n
------------------------------------------------------------------------------------------------------------------------\\n
Summarization (Ext. or Abs.)   sentence length                                  ROUGE             -\\n
Summarization (Ext. or Abs.)   compression of summary                           ROUGE             [6]\\n
Summarization (Ext. or Abs.)   density of summary                               ROUGE             [6]\\n
Summarization (Ext. or Abs.)   volume overlap                                   ROUGE             [5]\\n
------------------------------------------------------------------------------------------------------------------------\\n
Summarization (Ext.)           position of each sentence                        ROUGE/Accuracy    [4] [5]\\n
Summarization (Ext.)           OOV rate of sentence                             ROUGE             -\\n
------------------------------------------------------------------------------------------------------------------------\\n
Semantic Matching              length of sent1 or sent2                         Accuracy          [7]\\n
Semantic Matching              Func(sent1, sent2)                               Accuracy          -\\n
Semantic Matching              OOV                                              Accuracy          -\\n
------------------------------------------------------------------------------------------------------------------------\\n
QA                             answer length, type, position                    Matching F1       [8]\\n
QA                             document length                                  Matching F1       [12]\\n
QA                             query length, type                               Matching F1       [9]\\n
------------------------------------------------------------------------------------------------------------------------\\n
Text Classification            sentence/word length                             Accuracy          [11]\\n
Text Classification            OOV                                              Accuracy          [10]\\n
Text Classification            sentence familiarity                             Accuracy          -\\n
------------------------------------------------------------------------------------------------------------------------\\n
Sequence labeling              Similar to this work\\n
------------------------------------------------------------------------------------------------------------------------\\n
\\n\\u3010Footnotes\\u3011\\n\\\"Accuracy*\\\" : whether generated words appear in the gold reference\\n\\u201cFunc\\u201d can be used to compute the sentence length difference.\\n\\u201cSentence familiarity\\u201d: we could quantify the degree to which the test sentence has been seen in the training set (based on n-gram calculation).\\n\\n\\u3010References\\u3011\\n[1] Effective Approaches to Attention-based Neural Machine Translation, Minh-Thang Luong, Hieu Pham, Christopher D. Manning\\n[2] Von Mises-Fisher loss for training sequence to sequence models, Sachin Kumar and Yulia Tsvetkov\\n[3] Compare-mt: A Tool for Holistic Comparison of Language Generation Systems, Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting\\n[4] Text Summarization with Pretrained Encoders, Yang Liu, Mirella Lapata\\n[5] Earlier Isn\\u2019t Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization, Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy\\n[6] A Closer Look at Data Bias in Neural Extractive Summarization Models, Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang\\n[7] Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks, Kai Sheng Tai, Richard Socher, Christopher D. Manning\\n[8] Bidirectional Attention Flow for Machine Comprehension, Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi\\n[9] A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task, Danqi Chen, Jason Bolton, Christopher D. Manning\\n[10] Learning Semantic Representations of Users and Products for Document Level Sentiment Classification, Duyu Tang, Bing Qin, Ting Liu\\n[11] The Relationship of Word Length and Sentence Length: The Inter-Textual Perspective, Peter Grzybek, Ernst Stadlober, Emmerich Kelih\\n[12] TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension, Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer\"}", "{\"title\": \"Updated version of the paper (Version 1.0)\", \"comment\": \"We thank all reviewers for their comments. They are extremely insightful and help us to make our paper better. We have been refining our paper based on their suggestions, and a new version is uploaded.\", \"below_is_a_summary_of_the_major_changes\": \"1) We re-organize Sections 2 and 3 of the last version and merge them into a \\\"Preliminaries\\\" section to summarize the properties of evaluation methods (of related work) and describe the NER task and its current evaluation strategy. We remove some redundant descriptions. (To address R2 and R3's concern)\\n2) We give more explanation of the \\\"supplementary exam\\\" to address R1's concern.\\n3) We refine Section 3.2 to make the description of Fig.2 and Tab.2 clearer.\\n4) We make the introduction of R-Bucket clearer and give a concrete example. Additionally, we have removed the last paragraph of Section 4.2 (now Section 3.2). (To address R3's concern)\\n5) We add an intuitive explanation for each measure defined in Section 3.3. (To address R1's concern)\\n6) We add a more detailed explanation for the names of the 11 models and give an example. (To address R1's concern)\\n7) We refine our introduction section, providing detailed examples to show how to adapt the proposed methodology to other types of NLP tasks.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper discusses a methodology to interpret models and model outputs for Named Entity Recognition (NER) based on assigned attributes. The key idea is to bucketize the test data based on characteristics of attributes and then comment on the effect of the attribute on the model, the task itself or the dataset bias.\\n\\nThe empirical evaluation is impressive. The authors have constructed a series of experiments to make their case. The paper is well-written and easy to understand, albeit some of the related work seems a little unrelated to the task at hand. While the authors have tried to state that the method is \\\"general\\\" and goes beyond NER, I am not sure if that is the case. The creation of attribute buckets is vital for any further analysis; it's not clear how the method can be adapted to more general settings unless such attributes and buckets can be created easily (e.g. using domain knowledge). Furthermore, there is only one problem setting considered (i.e. NER), and for the paper to make claims about more general settings, I would expect evaluations on at least one more problem setting. I would suggest the authors modify the claims accordingly. This is not to diminish their contributions to NER. \\n\\nThe bucketization idea is not something out of the park novel. It is probably something already being used in practice. However, delineating the procedure and suggesting quantifiable statistics and designing experiments to illustrate how these can be used to draw qualitative conclusions is something that is very interesting and useful to the community as a whole. The strongest part of this paper is the empirical evaluation that allows drawing interesting conclusions, and suggests a methodology to reach that conclusion. While some of the claims made (e.g. 
regarding dataset biases) probably require further and deeper analysis, this is a good first step that should foster further research and discussion.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"TOWARDS INTERPRETABLE EVALUATIONS A CASE STUDY OF NAMED ENTITY RECOGNITION\\n\\n\\n\\nThe authors propose an evaluation methodology to study the relations between datasets and machine learning models. This methodology introduces the notion of attributes, which describe different aspects of the samples, and buckets, which group samples according to the attributes. The goal is to give a better understanding of the strengths and weaknesses of an algorithm on a specific dataset according to the attributes, as shown on Fig4.\\n\\nThe article is very dense and the authors chose to present the method from an abstract and generic point of view, which makes the reading of the article difficult. In the end, the proposition is a formalisation of the simple error analysis which is commonly done when trying to improve a machine learning system. The advantage of the method could be to introduce some metrics to make the error analysis more automatic. These metrics are given in section 4 but here again only from a formal point of view : it is very difficult for the reader to understand how to interpret them and how to use them for a practical case.\\n\\nThe paper is 17 pages long with the annex: it would better fit a journal publication, or the authors should select some of the main results to present them in a conference paper. The aspects of the paper related to learning relations are not put forward enough.\\n\\n2. Related work\\n\\n2.1 :\\n -supplementary exam : unclear\\n\\n2.2 : \\n- methodological perspective : a bit of a repetition of the introduction\\n- task perspective : not very clear, is the main message \\\"it is important to understand what in the dataset makes the model work ?\\\"\\n\\n3 Task\\n\\nSection is too small to be a level 1 title\\n\\n4 Attributes\\n\\nFigure 2:\\n - where are the links to levels ?\\n\\n4.2 :\\n- familiarity : test/train distribution should be the same. Fk computed on train set because it is bigger ? it allows to study the impact of the number of occurrences in the training set. Is it more interesting than a learning curve ?\\n- multi attribute familiarity : risk of metric explosion ? how to select the attributes ?\\n- eq 3 : spearman not defined\\n\\n4.3\\n- metrics are defined by formulas but it is difficult to understand what the rationale is behind each of them and therefore figure out how to interpret them\\n- \\\"Usually where a, b represent two different models and usually model a has a higher performance (by dataset-level metric)\\\" : unclear\\n\\n5 Experimental setting\\n\\nTable3 : the encoding of the model name is not clear\\na metric on all the dataset for each model could be computed to decide which one is the best overall\\nhow did you choose the tested combinations ?\\n\\n6\\\\.2\\n\\nanalysis of Fig 4 : R-eLen does not exist (R-Ele). what is eta ?\\n\\nTable 4 : spearman\\\\**r*\\\\* ?\\n\\n6\\\\.4\\n\\n* CRF vs MLP : \\\"... 
a major factor for the choices of CRF and MLP: **if** a dataset with higher \\u03b6MF\\u2212et, in which longer entities can benefit more from CRF-based models.\\\" > missing words ?\", \"writing\": [\"\\\"Concretely\\\" isn't very natural at the beginning of sentences, same thing with \\\"Formally\\\", 'Intuitively' \\u2026\", \"in 4.1 : \\\"We refer to E, P, K as the sets of entities (i.e. New York), entity attributes (i.e. entity length) and attributes values (i.e. 2).\\\" => \\\"We refer to the sets of entities (i.e. New York) as E, entity attributes (i.e. entity length) as P and attributes values (i.e. 2) as K\\\" would be better\", \"same thing in 4.3 \\\"we refer to M = m1,\\u00b7\\u00b7\\u00b7 ,m|M| as a set of **models** and P = p1,\\u00b7\\u00b7\\u00b7 ,p|P| as a set of **attributes**\\\" doesn't really work, \\\"M = m1,\\u00b7\\u00b7\\u00b7 ,m|M| is a set of **models** and P = p1,\\u00b7\\u00b7\\u00b7 ,p|P| is a set of **attributes**\\\" maybe\", \"in 4.2 page 5 : \\\"the familiarity Fk (p1 , p2 ) is a measure with intriguing explanation \\u2026\\\" : not clear\", \"6.3 (3) \\\"Only using character-level CNN is apt to overfit the feature of capital letters.\\\" **apt** doesn't work here\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The manuscript proposes an evaluation methodology to obtain deeper insights regarding the strengths and weaknesses of different methods on different datasets. The method considers a set of methods addressing the task of Named Entity Recognition (NER) as a case study. In addition, it proposes a set of attribute-based criteria, i.e. bucketization strategies, under which the dataset can be divided and analyzed in order to highlight different properties of the evaluated methods.\\n\\nAs said earlier, the manuscript proposes an evaluation methodology to obtain deeper insights regarding the strengths and weaknesses of different methods on different datasets. The characteristic of being able to provide deeper insights on strengths/weaknesses and relevant factors on the inner workings of a given method is \\nsomething very desirable for every evaluation. As such, in my opinion, the \\\"interpretable\\\" tag associated with the proposed method is somewhat out of place. Having said that, I would recommend removing the \\\"interpretable\\\" tag and stressing the contribution of this manuscript as an evaluation protocol. \\n\\nIn Section 4.2, the R-Bucket strategy is stated as having the requirement of discrete and finite attributes. Based on the equations of the other two strategies (R-bucket and F-bucket), it seems that they also have the requirement of having discrete attributes. Is this indeed the case? If so, it should be explicitly indicated. \\nHaving said that, this raises another question: Is this protocol exclusive to tasks/problems with explicit discrete attributes?\\n\\nThe goal of this manuscript is to propose a general evaluation protocol for NLP tasks.\\nHowever, it seems to be somewhat tailored to the NER task. My question is: How well does the proposed method generalize to other NLP tasks without attributes? Similarly, how well do the proposed bucketization strategies generalize beyond the NER task? 
Perhaps the generalization characteristics and limitations of the proposed evaluation methodology should be explicitly discussed in the manuscript.\\n\\nThe last paragraph of Section 4.2 summarizes ideas that were just presented. It feels somewhat redundant. I suggest removing it in favor of extending the existing discussions and analysis.\\n\\nI may consider upgrading my initial rating based on the feedback given to my questions/doubts.\"}" ] }
S1g6xeSKDS
Mixed-curvature Variational Autoencoders
[ "Ondrej Skopek", "Octavian-Eugen Ganea", "Gary Bécigneul" ]
Euclidean space has historically been the typical workhorse geometry for machine learning applications due to its power and simplicity. However, it has recently been shown that geometric spaces with constant non-zero curvature improve representations and performance on a variety of data types and downstream tasks. Consequently, generative models like Variational Autoencoders (VAEs) have been successfully generalized to elliptical and hyperbolic latent spaces. While these approaches work well on data with particular kinds of biases e.g. tree-like data for a hyperbolic VAE, there exists no generic approach unifying and leveraging all three models. We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature is fixed or learnable. This generalizes the Euclidean VAE to curved latent spaces and recovers it when curvatures of all latent space components go to 0.
[ "variational autoencoders", "riemannian manifolds", "non-Euclidean geometry" ]
Accept (Poster)
https://openreview.net/pdf?id=S1g6xeSKDS
https://openreview.net/forum?id=S1g6xeSKDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "wMXTh4unAH", "rJljvhvsoS", "ryey_6Gsir", "HyehuQCusr", "Bye_MXC_ir", "SyxZzGAdiH", "B1eOr-Rdjr", "r1lBH4iE9S", "rkeOhAPTYB", "HyxpFesnKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740914, 1573776483379, 1573756263097, 1573606259647, 1573606159911, 1573605896533, 1573605696180, 1572283452874, 1571810991686, 1571758212528 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2115/Authors" ], [ "ICLR.cc/2020/Conference/Paper2115/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2115/Authors" ], [ "ICLR.cc/2020/Conference/Paper2115/Authors" ], [ "ICLR.cc/2020/Conference/Paper2115/Authors" ], [ "ICLR.cc/2020/Conference/Paper2115/Authors" ], [ "ICLR.cc/2020/Conference/Paper2115/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2115/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2115/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies generalizations of Variational Autoencoders to Non-Euclidean domains, modeled as products of constant curvature Riemannian manifolds. The framework allows to simultaneously learn the latent representations as well as the curvature of the latent domain.\\n\\nReviewers were unanimous at highlighting the significance of this work at developing non-Euclidean tools for generative modeling. Despite the somewhat preliminary nature of the empirical evaluation, there was consensus that the paper puts forward interesting tools that might spark future research in this direction. Given those positive assessments, the AC recommends acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to \\\"In response\\\"\", \"comment\": \"Thank you for the quick reply and the extensive feedback.\\n\\n- As can be seen from the experiments, the sign agnostic models (universal component, denoted $\\\\mathbb{U}$) do perform better in higher dimensions than models with fixed signs, which we also mentioned in Section 4, in the second paragraph of the Summary. Another important benefit is that the sign-agnostic models are computationally more efficient, since for a given amount of components in our product space, we do not have to try out an exponential amount of different \\\"sign configurations\\\", but only one that is allowed to dynamically learn the best such sign configuration.\\n\\nOn the other hand, the E/S/H models are less prone to numerical instabilities and are also more efficient to optimize in practice, as noted in [1] and in the second paragraph of Appendix A.2. However, they cannot do sign-agnostic curvature learning (Section 2.1, first paragraph). The same dimensions are comparable across these models in experiments, because they have the same degrees of freedom.\\n\\n- Nan values: Thank you for spotting this. We have removed Table 14 in the Appendix, as it has never been mentioned in the paper and contained only preliminary results. For the two rows in Table 13 with \\\"nan\\\" standard deviations, we did not obtain meaningful results across multiple runs due to optimization instability. 
We have clarified this in the table's caption.\\n\\nThank you,\\n\\nThe authors\\n\\n\\n[1] Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry by Nickel & Kiela (ICML 2018)\"}", "{\"title\": \"In response\", \"comment\": [\"Thank you for answering my questions. The revised paper looks much better.\", \"The E/D/P versus E/S/H thing makes sense--- I see that you compare the product with the same dimensions for these two spaces in the experiments. What's the takeaway in general? Does the sign-agnostic aspect significantly help?\", \"One more thing: how do we interpret the numbers in Table 13, where the values often have +/- nan error bars? Should I just assume the whole value is too unstable to interpret?\"]}", "{\"title\": \"Response to review #1 (continued)\", \"comment\": [\"We will assume that the reviewer meant Section 2.4, not 2.3, as that one does not deal with wrapped distributions. Essentially, Wrapped Normal distributions are very computationally efficient to sample from and also efficient for computing the log probability of a sample, as detailed by Nagano et al. (2019). The Riemannian Normal distributions (based on the geodesic distance in the manifold directly) could also be used; however, they are more computationally expensive for sampling, because the only methods available are based on rejection sampling (Mathieu et al., 2019). A detailed description of Wrapped Normal distributions follows Section 2.4.1, and constructions of all the other mentioned approaches (e.g. restriction-based distributions) are detailed extensively in prior work, like Mathieu et al. (2019), Nagano et al. (2019), Davidson et al. (2018), Xu & Durrett (2018), and several other older sources which we cite in our work. A quite comprehensive trade-off discussion is also present in Mathieu et al. (2019), Appendix B.1. We only aim to give an overview and brief motivation for the choice, which we have slightly improved in the text.\", \"As stated at the end of Section 2.4.1, all the operations in the projected constantly curved spaces (Poincare ball and the projected hypersphere) converge to their Euclidean counterparts as $K \\\\to 0$. Therefore, ELBO(K) in these spaces also converges to the classic Gaussian VAE ELBO as $K \\\\to 0$, and hence ELBO(K) can be reformulated as a continuous function w.r.t. $K$ at all points. Differentiability w.r.t. K is straightforward at all points except for $K=0$. At $K=0$, all operations in these spaces are differentiable, which can be verified. Hence ELBO(K) is differentiable, because it is a differentiable composition of differentiable functions. Thank you for pointing out the impreciseness; we have removed the claim from the paper. Fortunately, our method does not strictly depend on differentiability at 0.\", \"Components of dimension 2 are the smallest non-trivial examples of these spaces. They can be easily plotted and inspected. Also, a product space constructed of components of dimension 2 has the most curvature parameters possible overall, because there is one per component. Moreover, as detailed by Tifrea et al. (2019, Section 5), products of H^2 hyperbolic spaces are isometric with the space of Gaussian distributions with diagonal covariance matrices, while a single H^n hyperbolic space is isometric to a single Gaussian distribution with a spherical covariance matrix.\", \"The latent space dimensions in the experimental evaluation were chosen as dimensions similar to prior work (Mathieu et al., 2019; Nagano et al., 2019; Davidson et al., 2018). 
They mostly used smaller dimensions (e.g. 5, 10, 15). We changed the numbers slightly so that they are divisible by 2 (divisible by the smallest component dimension) and also divisible by 3 (the number of types of components, i.e. {E, S, H} or {E, D, P}). The models H^n, P^n, S^n are all equivalent to prior art models, as stated in the last paragraph of Section 4 before the header \\u201cBinary Diffusion Process\\u201d. There are also other conceptually differing VAE models that could be used with our product space formulation, as we mentioned in the \\u201cExtended Future Work\\u201d in Appendix D. We have also attempted to use other models (e.g. Beta-VAE), but we did not observe any significant deviations from the presented results, when compared across different product space curvatures.\", \"Thank you for all the feedback,\", \"The authors\"]}", "{\"title\": \"Response to review #1\", \"comment\": \"Firstly, we would like to thank Reviewer #1 for all the time and effort invested into understanding and reviewing our work.\\n\\nWe agree that our work is dense and that is why any result that is not directly essential to the concept of a Mixed-curvature VAE like theoretical properties, proofs, or derivations of formulas has been already moved to Appendices A and B. We believe the provided details will not be at the expense of clarity or make verification of results harder, but should on the contrary make reproducibility and verification easier by outlining the necessary steps more clearly. We have shortened the appendices even more.\", \"a_point_by_point_response_to_the_comments_follows_below\": [\"Thanks for pointing out the relevant submission on Graph Convolutional Networks. We have added a mention of this in the related work section of our work. While the underlying geometry is similar to ours, they face several challenges that are different.\", \"We have updated our definitions of curvature and verified that all following claims still hold.\", \"You are correct, one can approximate flat spaces (0 curvature) arbitrarily well in any of the mentioned spaces. However, in the hyperboloid and hypersphere (as mentioned at the end of the first paragraph in Section 2.1), the Euclidean norms of points in these spaces go to infinity as $K \\\\to 0$. Therefore, when learning curvature, we wouldn\\u2019t be able to change signs of curvature in these spaces. Additionally, the distance and the metric tensors do not converge to their Euclidean variants as $K \\\\to 0$ for these spaces, hence the spaces themselves do not converge to $\\\\mathbb{R}^d$. On the other hand, for the Poincare ball and the projected hypersphere this holds.\", \"Prior work on product spaces (Gu et al., 2019) considered the manifolds {E, S, H}, mostly due to the favorable optimization properties. However, they never attempted to learn curvature in a sign-agnostic way, which presents a challenge: the norm of points diverges as $K \\\\to 0$. For this reason, we introduce additional manifolds {P, D}. In our experiments, we either use products of {E, S, H} or products of {E, D, P} depending on if we\\u2019re learning curvature sign-agnostically or not, because as you correctly pointed out, S is isometric to D and H is isometric to P, in terms of their distance functions. 
All of this is motivated and explained in Section 2.1 and extensively detailed in Appendix A.\", \"We have fixed the mistake in the L2 distance decomposition, thank you for spotting this!\", \"(continued below)\"]}", "{\"title\": \"Response to review #2\", \"comment\": \"Firstly, we would like to thank Reviewer #2 for all the time and effort invested into understanding and reviewing our work.\", \"below_follows_a_point_by_point_response\": [\"To the best of our knowledge, we used several standard benchmark datasets for evaluating VAE models. We have attempted to use other models (e.g. Beta-VAE), but we did not observe any significant deviations from the presented results, when compared across different product space curvatures.\", \"Gyrovector spaces are a key step towards defining VAEs in a unified framework across differently curved spaces (Poincare ball, Euclidean space, Projected spherical space), which then leads to being able to learn the curvature of such a space irrespective of the sign.\", \"We have attempted to shorten the geometric background as much as possible while not sacrificing understandability by moving a lot of important definitions and properties required for reproducing the theoretical and empirical results of this paper into Appendices A and B already.\", \"The only similarity to the paper of Gu et al. is the motivation and use of product spaces as opposed to single component spaces. They focus on graph embeddings, whereas we attempt to learn VAEs in these spaces. Gu et al. additionally only use the spaces {E, S, H} and learn curvature only with a fixed sign.\", \"As far as we are aware, none of the single-component prior works (hyperbolic/spherical/Euclidean VAEs) ever attempted to learn curvature as a parameter of the model directly. Hence, they did not have to obtain derivations of the Wrapped Normal distribution\\u2019s log-likelihood or even the operations in the space for different values of curvature K. Several of the formulas simplify significantly if we assume a fixed $K=1$ or $K=-1$. One notable exception to this is the Poincare VAE approach of Mathieu et al. (2019), where they derive the necessary formulas and attempt to train several $\\\\mathbb{P}_c^n$ VAEs with different values of $c \\\\in [0.1, 1.4]$, always fixed during the entire process of training and evaluation. Therefore, all the operations and log-likelihoods were derived from scratch for {S, H, D}, proven (see Appendix A and B), and checked by comparing to prior work formulas by substituting $K=1$ (or $-1$) as an additional verification step.\", \"Thank you for all the feedback,\", \"The authors\"]}", "{\"title\": \"Response to review #3\", \"comment\": \"First of all, we would like to thank Reviewer #3 for all the time and effort spent on understanding our work. We are happy to see that the mathematical foundations of our work were understood and appreciated.\", \"a_point_by_point_reply_follows_below\": [\"All reconstruction loss terms for all experiments used Bernoulli log-likelihood, except for CIFAR-10 and the Binary Diffusion Process dataset, which used Gaussian log-likelihood with a fixed standard deviation of 1, as did the original authors of the dataset in Mathieu et al. (2019). We have trained CIFAR-10 (images rescaled to values in [-1, 1]) with a Gaussian log-likelihood as above, as well as with a binary cross entropy loss (images rescaled to values in [0, 1]), which loosely corresponds to a \\u201ccontinuous\\u201d Bernoulli observation model. 
Changing the observation model in CIFAR did not have a big impact on log-likelihood comparisons between models, just a small one on sample quality. All of these details are readily available in the accompanying source code. Additionally, we have added this information to the text. For more information on the values of the KL term and reconstruction term of the ELBO in experiments, please see \\u201cExtended results\\u201d in Appendix E.\", \"The KL values of components do differ \\u2013 across different models and even across different runs of the same model. The sum of KL terms behaves stably and similarly to the standard Euclidean VAE. We were not able to establish any meaningful connection between the model preferring a specific type of component (hyperbolic, spherical, or Euclidean) and the KL being higher in that subspace compared to other component types. This applies even if we fix curvature and do not attempt to learn it. Despite no immediate apparent connections, this might be interesting to investigate more in future work.\", \"Sparsity does appear and is indeed verifiable in the context of e.g. small 2-dimensional components. Even with fixed or learnable curvatures, this phenomenon does occur in latent spaces which are \\u201cbig enough\\u201d for the given task, as suggested. It does not seem to occur significantly more or less often than in a standard Euclidean VAE. As for the question of whether sparsity appears inside the dimensions of a single big component, this is not straightforward to answer and would need extensive further investigation, because in spaces of non-zero constant curvature, dimensions are correlated.\", \"Thank you for all the feedback,\", \"The authors\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper devised a framework towards modeling probability distributions in products of spaces with constant curvature and showed how to generalize the VAE to learn latent representations on such product spaces using Gaussian-like priors generalized for this case. Empirically the authors evaluate the VAEs on four different datasets (a synthetic tree dataset, binarized MNIST, Omniglot, and CIFAR-10) for various choices of product spaces (fixed curvature and learnable curvature) and choices of latent space dimensionality.\", \"evaluation\": \"Overall this seems to be a nice piece of work, with a balanced discussion of the empirical results, and is clearly written.\\n--Past works have considered VAEs on single constant curvature spaces and hence it is well-motivated to consider a more flexible model that enables usage of products of such spaces.\\n--Empirical evaluations seem fair as far as I can tell, but I am not familiar with benchmarks for VAEs. It was interesting to see the variability in best performing models, e.g. cases in which the mixed curvature models did well vs. the Euclidean one.\\n--The paper is quite readable, though in a few parts it seems to delve a bit unnecessarily into geometric formalism/definitions (e.g. 
I did not really follow or appreciate the relevance of gyrovector distances).\\n--Main text is 10 pages long and I'm not sure the extra length is necessary.\\n--I would have appreciated a more clearly delineated discussion on how the technical details of this work overlap with past papers, both those that have investigated product spaces (Gu et al 2019) and single curvature spaces in VAEs (spherical & hyperbolic)? How did the latter approaches deal with modified prior distributions and/or smoothly recovering the Euclidean K=0 limit? As a result, I'm a bit unsure as to the novelty or technical obstacles that are overcome in the proposed framework in comparison to these.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThis paper is about developing VAEs in non-Euclidean spaces. Fairly recently, ML researchers have developed non-Euclidean embeddings, initially in hyperbolic space (constant negative curvature), and then in product spaces that have varying curvatures. These ideas were developed for embeddings, and recent attempts have been made to build entire models that operate in non-Euclidean spaces. The authors develop VAEs for the product spaces case.\\n\\nThere's largely two aspects here: one is to be able to write down the equivalents for the operations in models (e.g., the equivalent of adding or multiplying matrices and vectors in Euclidean space have to be lifted to other spaces which no longer have a linear structure). The other are VAE-specific choices, particularly choosing a normal distribution on the manifolds. The authors consider several of these choices and then run a variety of experiments on small latent-dimension cases for VAEs. These reveal that sometimes non-Euclidean and in particular product spaces improve performance.\\n\\n\\nStrengths, Weakness, Recommendation\\nI like what the authors are trying to do here; embeddings and discriminative models on non-Euclidean spaces have been developed, offer credible benefits, and generative models are the next step. The authors push forward the machinery needed to do this, and the results seem like there's something there.\\n\\nOn the other hand, the entire work seems quite preliminary. It's hard to say what the takeaway is, or any suggestions for users. The paper is written in a pretty frustrating way. There's an enormous amount of stuff in a sprawling appendix (there are 43 results in the first appendix?!), and checking all of these details will take a great deal of time. \\n\\nOverall, I recommended weak accept, since a lot of these issues seem like they can be cleaned up.\", \"edit\": \"I increased my score based on the authors' response.\", \"comments\": [\"The approach taken here is quite similar to another ICLR submission this year, which basically does the same thing but applies these operations to GCNs instead of VAEs.\", \"A better way to define curvature is just to talk about the sectional curvature, instead of the Gaussian curvature the authors mention at the beginning of section 2. Fortunately for the constant case all of these definitions will be the same.\", \"It's not quite clear in Section 2.1 why we should care about the fact that you can't fully take K->0 there---why does this hurt anything? 
You can approximate flat curvature arbitrarily well even without K exactly 0.\", \"On a similar theme, what's the point of doing the product of {E,S,D,H,P}, instead of just {E,S,H} or {E,D,P}? Seems a bit weird to consider all 5, given the equivalence between S-D and H-P.\", \"In 2.3, the products of spaces section, the distance decomposition in the 2nd paragraph should have squares (it's an l2): d_M(x,y)^2 = \\\\sum_{i=1}^k d_{M_{k_i}^{n_i}}(x^i,y^i)^2.\", \"The discussion in 2.3 should be expanded and made more concrete (some of these you can write out the expressions for), and more pros and cons explained, e.g., which theoretical properties are lost for the wrapped distributions?\", \"On page 6, I don't understand the first problem with the learnable curvature approach. Why is there no gradient w.r.t. K? Isn't the idea that you'll write this thing as a piecewise function (presumably it's continuous, since that's why the authors built those models that deform to flat), and differentiate the whole thing? Why wouldn't there be a gradient at ELBO(K)? Is it not differentiable at K=0? That doesn't follow directly from just saying the curvature is 0.\", \"What's the intuition for the component learning algorithm using 2 dimensions for each of the spaces?\", \"The experiment section was written in a way that left me unsure why particular choices were made. Why 6 and 12 dimensions here? More clarity here would be great. Also, are there any other models to compare against for these datasets? I'm not a VAE expert; what do other models typically obtain in the authors' regime?\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a general formulation of the notion of a VAE with a latent space composed of a curved manifold. It follows the current trend of learning representations on curved spaces by proposing a formulation of the latent distributions of the VAE in a variety of fixed-curvature spaces, and introduces an approach to learn the curvature of the space itself. Extensive mathematical derivations are provided, as well as experiments illustrating the impact of various choices of latent manifolds on the performance of the VAE.\\n\\nI believe this work should be accepted, as while the numerical results are not particularly impressive, it provides some clear foundational work for further exploration of the use of non-Euclidean latent spaces in VAEs.\\n\\nThis paper provides extensive and detailed theoretical grounding for their work, ensuring that it is a well-founded extension of the VAE formalism. It explores numerous alternatives and compares them, providing detailed experimental results on 4 datasets. The appendices provided a much welcome refresher on non-Euclidean geometry, as well as more details & experimental results.\\n\\nThe paper is already quite dense, especially with the appendices; however, there are a few points that could still be detailed in my opinion:\\n\\nFirst of all, what were the observation models used for the reconstruction loss in the experiments? I suspect a Bernoulli likelihood was used for the binarized dataset, but what about the other ones, and notably CIFAR? Was it a Gaussian observation, a discretized logistic, ...? Was its variance learned? 
This kind of information is in my opinion crucial for assessing a construction of the latent space of a VAE model, as it can have a lot of influence on the kind of information the model will try to store in its latent space.\\n\\nSecondly, for the models using products of spaces, do you observe some preference of the VAE to store more information in some of the sub-components? This can be explored by comparing the values of the KL term in each of these subspaces.\\n\\nThird, the VAE with a factorized Gaussian Euclidean latent space has a well-known tendency to sparsify its latent representations: unneeded dimensions of the latent space are ignored by the decoder and set to the prior by the encoder. This allows one to not worry too much about the size of the latent space as long as it is \\\"large enough\\\". Does this property remain in curved spaces? Especially in the case of the VAE on MNIST with a 72-dimensional latent space, as I suspect the 6- and 12-dimensional spaces are not \\\"large enough\\\" for this phenomenon to appear.\"}" ] }
rJehllrtDS
Rethinking deep active learning: Using unlabeled data at model training
[ "Oriane Siméoni", "Mateusz Budnik", "Yannis Avrithis", "Guillaume Gravier" ]
Active learning typically focuses on training a model on few labeled examples alone, while unlabeled ones are only used for acquisition. In this work we depart from this setting by using both labeled and unlabeled data during model training across active learning cycles. We do so by using unsupervised feature learning at the beginning of the active learning pipeline and semi-supervised learning at every active learning cycle, on all available data. The former has not been investigated before in active learning, while the study of the latter in the context of deep learning is scarce and recent findings are not conclusive with respect to its benefit. Our idea is orthogonal to acquisition strategies by using more data, much like ensemble methods use more models. By systematically evaluating on a number of popular acquisition strategies and datasets, we find that the use of unlabeled data during model training brings a spectacular accuracy improvement in image classification, compared to the differences between acquisition strategies. We thus explore smaller label budgets, even one label per class.
[ "active learning", "deep learning", "semi-supervised learning", "unsupervised feature learning" ]
Reject
https://openreview.net/pdf?id=rJehllrtDS
https://openreview.net/forum?id=rJehllrtDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hRkpIrnja", "Bke-m6GVoB", "SyeAR7IXiH", "HylXa7IQjS", "BklJV7UmjS", "SJgouzL7or", "rylYGfUQsr", "SJlVOqRh9B", "BkgLl-YVcS", "HJg_1902tB", "HJg-jbrqFB", "HJe-PntOtS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "comment", "official_review" ], "note_created": [ 1576798740886, 1573297433240, 1573245909691, 1573245882703, 1573245734577, 1573245555042, 1573245457461, 1572821611968, 1572274413898, 1571772896096, 1571602841382, 1571490904816 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2114/Authors" ], [ "ICLR.cc/2020/Conference/Paper2114/AnonReviewer2" ], [ "~Thomas_Brox1" ], [ "ICLR.cc/2020/Conference/Paper2114/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper argues that incorporating unsupervised/semi-supervised learning into the training process can dramatically increase the performance of models. In particular, its incorporation can result in performance gains that dwarf the gains obtained by collecting data actively alone. The experiments effectively demonstrate this phenomenon.\\n\\nThe paper is written with a tone that implicitly assumes that \\\"active learning for deep learning is effective\\\" and therefore it is a surprise and a challenge to the status quo that using unlabelled data in intelligent ways alone gets such a boost. On the contrary, reviewers found that active learning not working very well for deep learning is a well-known state of affairs. This is not surprising because the most effective theoretically justifiable active learning algorithms rely on finite capacity assumptions about the model class, which deep learning disobeys. \\n\\nThus, the reviewers found the conclusions to lack novelty as the power of semi-supervised and unsupervised learning is well known. Reject.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to all Reviewers\", \"comment\": \"This is a general response to all reviewers, meant to address common concerns and summarize our (lengthy) responses to each reviewer.\\n\\n1. We would like to thank all reviewers for their excellent in-depth analysis and feedback. The resulting discussion here helps us in improving our work, in particular discussion of results and conclusions in our manuscript.\\n\\n2. We would kindly ask them to carefully check again our claimed contributions in page 2. For instance, we claim neither a novel component improving AL nor a combination of components for the first time.\\n\\n3. We want to stress that we take a neutral position towards AL as well as unsupervised/semi-supervised learning, and we will polish expressions like \\\"spectacular\\\".\\n\\n4. Nevertheless, our main message, based on empirical findings, is that there are flaws, not in (deep) AL itself, but rather in its evaluation. This is clearly a negative result, however our detailed results allow for actions that will help further progress in (deep) AL. 
For instance:\\n(a) If the objective is best performance for least labels and the cost of labeling is more important than the cost of model training, it makes sense to evaluate new AL ideas in the presence of methods that use unlabeled data during model training, since this always improves performance by a large margin.\\n(b) One may revisit the standard protocol of equal label budget per cycle as well as the initial seed of labels, since AL is sensitive in the quality of the representation and may be outperformed by vanilla training (Random baseline) when labels are few, especially in the presence of methods that use unlabeled data during model training.\\n(c) If all training examples are used in each cycle, then fine-tuning rather than training from scratch in each cycle or even end-to-end training with all three components (AL, unsupervised, semi-supervised) rather than in different stages are interesting directions to explore. There are already recent approaches integrating unsupervised and semi-supervised learning (e.g. [Zhai et al., S4L: Self-Supervised Semi-Supervised Learning; ICCV19]); the same could happen with AL.\\n\\n5. We believe that suggesting a new protocol or a new integrated algorithm is beyond the scope of this work. There are too many possible directions from here.\\n\\n6. We also believe that our experimental setup is already complex enough, such that adding more options for the components that are orthogonal to AL would blur the presentation and discussion of results and shift it away from our main message.\\n \\n7. Finally, we shall update our discussion of results and conclusion based on the above.\"}", "{\"title\": \"Response to Reviewer #3 (Part 2/2)\", \"comment\": \"4.\\\"The use of sampling in the SSL component is interesting, although an ablation here investigating this specific choice.\\\"\\n\\nOn CIFAR-10 (b=1000) +SEMI, cycle 1, uniform sampling without weights gives 77.6% accuracy, uniform with weights gives 78.4% and our approach gives 78.9% on average over 5 runs. This is a small improvement. We are adding more results.\\n\\n5.\\\"I think the characterization of AL is not quite right on page 2. The authors write that AL is focuses on the \\\"least certain\\\"\\\"\\n\\nOf course. This could be e.g. \\\"least certain\\\" (considering the classifier) or \\\"furthest\\\" (considering the geometry of feature space alone). Whatever the criterion, AL and SSL can still be seen as two facets of the same problem. \\\"Least certain\\\" was an example. We are rephrasing.\\n\\n6. \\\"Spectacular\\\".\\n\\nWell, in Fig. 4(a), cycle 0 for instance, Random+PRE+SEMI is better than Uncertainty alone by more than 50% (81.25 vs. 31.63 in Table 4). In Fig. 4(b), cycle 2, this difference is 15% (58.72 vs. 43.66 in Table 4). Differences between different acquisition strategies are rarely above 2%. We do find this spectacular, but we are rephrasing nevertheless.\\n\\n7. \\\"As is often the case in work on AL, there is no real notion of a 'test set' here; instead the authors repeat experiments using different seed label sets.\\\"\\n\\nWe do not understand this comment. Could you please elaborate? Is maybe 'validation set' really meant?\\n\\n8. \\\"It is not entirely clear how much hyperparameter/architecture fine tuning was performed informally, but there is a lot going on here, so I would assume at least some. Therefore there is a risk that all results reported are in some sense optimistic, potentially being \\\"overfit\\\" to these datasets. 
It would be best to provide additional comparisons of approaches on completely unseen datasets.\\\"\\n\\nWe assume a realistic scenario where only a handful of initial labels is given and therefore we opt to not use a validation set. To that end, we use the parameters specified in prior work like [Iscen et al., 2019], making changes only based on constraints of the protocol. For instance,\\n- we increase the learning rate for faster convergence in all cases, even if this may be suboptimal;\\n- we adjust the mini-batch size such that an epoch consists of several mini-batches in the 100 scenario;\\n- we further reduce the mini-batch size in the extreme case of MNIST (b=10) scenario.\\nMost importantly, hyperparameters are the same across all datasets and all scenarios. Overall, the argument of completely unseen datasets appears to apply to any work on AL, not just ours.\"}", "{\"title\": \"Response to Reviewer #3 (Part 1/2)\", \"comment\": \"Thank you for your review! Please find the response below.\\n\\n1. \\\"one might take from these results is that unsupervised and semi-supervised learning methods can boost predictive performance; but I think this is widely appreciated already.\\\"\\n\\nThis is not exactly our main message. Of course unsupervised and semi-supervised learning methods are effective by themselves, and there is a lot of recent progress as discussed in our related work. The main point is the effect of unsupervised and semi-supervised learning methods when used in AL, relative to the effect of acquisition strategies (which are the core of AL) and the differences thereof. This is not appreciated as much. On the contrary, we clearly discuss cases like [Wang et al., 2017; Ducoffe & Precioso, 2018; Gal et al., 2017] where the combination of semi-supervised and AL is found not so effective or even harmful.\\n\\n2. \\\"Perhaps a better framing for this work is: AL using standard metrics seems to be comparatively ineffective, especially when one uses pre-training/semi-supervised learning.\\\"\\n\\nExactly. This is our main message. In the abstract for instance, we say 'we find that the use of unlabeled data during model training brings a spectacular accuracy improvement in image classification, compared to the differences between acquisition strategies.' (See below about the criticism on \\\"spectacular\\\".)\\n\\nWe are not the first to observe the similar performance of different acquisition strategies in the context of deep learning. See for instance [Gissin & Shalev-Shwartz, 2018; Chitta et al., 2019; Beluch et al.,2018], discussed extensively in our paper. We confirm their findings and in addition, as a main contribution, we systematically evaluate a number of strategies in the presence or not of unsupervised and semi-supervised learning, showing the relative effectiveness of all ideas in the same experimental setup.\\n\\n3. \\\"given that, by the authors' own admission, the \\\"random baseline may actually outperform all other acquisition strategies by a large margin\\\", what is the motivation for adopting \\\"AL\\\" at all? I mean, if we are performing random (iid) sampling, this just reduces to vanilla learning with pre-training and semi-supervision; the 'active' component becomes irrelevant.\\\"\\n\\nExactly. 'Random baseline' means, as always, 'no AL'. So, overall, we evaluate the learning process with/without all three components: unsupervised, semi-supervised, active. 
In addition, we consider several acquisition strategies in the case of active.\\n\\nNow, on the actual result: indeed, when the label budget is small, random may be better than any other strategy. For instance, Fig. 1(a), SVHN (b=100). This we attribute to the weak representation obtained from few labels. The effect is amplified in the +PRE+SEMI version, Fig. 4(a). In the case of CIFAR-10 (b=100), random and other strategies are all similar in Fig. 1(b), but in the +PRE+SEMI version, Fig. 4(b), this effect happens again. Now, by comparing Fig 4(b) with 4(c), one realizes that random and the other strategies actually cross at some point around 1000 labels, after which AL becomes effective. This is a non-trivial result that can be very helpful in improving AL strategies.\\n\\nTherefore, our message is not necessarily negative: our recommendation is that unsupervised and semi-supervised should be used in the evaluation of AL methods and this may give rise to ideas for improved strategies, especially in the few label regime. We believe it is beyond the scope of this work to investigate such ideas.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your review! Please find the response below.\\n\\n1. \\\"The only conclusion that I can draw is that sometimes unsupervised/semi-supervised learning works better than active learning, but no understanding of when and why this is the case (from other papers, it is not always the case).\\\"\\n\\nUnsupervised/semi-supervised learning helps in all cases and the gain is significantly greater than the differences between AL acquisition strategies. Moreover, in certain cases of few labels (small budget b), all acquisition strategies are outperformed by Random, an effect that is amplified by the presence of unsupervised/semi-supervised learning.\\n\\nWe cannot see where the conclusion above is drawn from. Could the reviewer please elaborate? What other papers are meant?\\n\\n2. \\\"the framework in this work is restricted to semi-supervised methods that use pseudo-labels.\\\"\\n\\nIn each cycle, a new set of labels becomes available. The set of labeled example grows and there is also a set of unlabeled examples. At this point, a model is learned. Standard AL uses only the labeled examples to learn the model. Semi-supervised methods use the unlabeled examples too. There is no constraint as to what semi-supervised method one can choose. We chose a method that uses pseudo-labels. Any other method could be used.\\n\\n3. \\\"It may be the case that active learning doesn't help or even hurts because the batch size is too large and/or the initial seed set size is too small.\\\"\\n\\nWe investigated different label budget scenarios including choices commonly used in AL papers (budget of 1k) as well as providing extreme cases (budget of 10). In general, we actually use equal or smaller budgets than prior work, because few labels is the most interesting case. The initial seed set is the same as the label budget per cycle according to the standard protocol. According to [Gissin & Shalev-Shwartz; 2018], differences between acquisition functions are even smaller when using a larger budget (5k).\\n\\n4. \\\"Active learning and unsupervised/semi-supervised learning have been combined before ...\\\"\\n\\nPlease see R1 point 6.\\n\\n5. \\\"... there are other papers submitted to ICLR this year that combine these.\\\"\\n\\nMaybe the reviewer wants to reconsider this comment.\"}", "{\"title\": \"Response to Reviewer #1 (Part 2/2)\", \"comment\": \"4. 
\\\"many have used unsupervised learning for AL (which the authors seem less aware of) from pre-clustering (e.g., [Nguyen & Smeulders, Active Learning using Pre-clustering; ICML04]) to one/few-shot learning (e.g., [Woodward & Finn, Active One-Shot Learning; NeurIPS16 workshop]) to using pre-trained embeddings for many \\u2018real-world tasks\\u2019 (e.g., NER [Shen, et al., Deep Active Learning for Named Entity Recognition; ICLR18] using word2vec).\\\"\\n\\n[Nguyen & Smeulders; ICML04], by propagating predictions within clusters, is more related to semi-supervised than unsupervised representation learning. This is a 2004 paper using a linear SVM classifier on raw images. We cannot see how [Woodward & Finn; NeurIPS16 workshop] is related to unsupervised pre-training. [Shen, et al.; ICLR18] is indeed relevant in that word2vec embeddings are used to initialize parameters that are subsequently updated. However, it is not the focus of this work to evaluate how this initialization helps compared to random initialization. It is very interesting to note how acquisition functions again perform similarly in this very different task as shown in Fig. 4. We shall discuss.\\n\\n5. \\\"The \\u2018first semi-supervised\\u2019 claim really only holds in the context of deep learning; however, scope is really more like semi-supervised applied to image classification, which would be a pretty narrow contribution in scope.\\\"\\n\\nDo we claim anything about \\u2018first semi-supervised\\u2019?\\n\\nWe do make it clear that we consider the context of deep learning and that we focus on image classification. We do not find this scope narrow given the quantity of related work on AL focusing on the same task.\\n\\n6. \\\"Overall, there is a general overstatement of contributions and results: this is certainly not the first SSAL or USAL and the statement relative to deep learning is subtle; some of the empirical results are interesting, but I am not sure about \\u2018spectacular gains\\u2019 (and these gains aren\\u2019t seemingly due to the contribution of the paper).\\\"\\n\\nSSAL is definitely not a claimed contribution. On USAL, see point 4 above. The gains are definitely not due to our contribution. However, as discussed, recent work has failed to take advantage of semi-supervised methods in AL [Gissin & Shalev-Shwartz, 2018; Chitta et al., 2019; Beluch et al.,2018]. CEAL for instance shows no improvement, while our choices yield significant gains (10-20% compared to 2-3% of differences between AL methods). See also R3 point 6 on \\\"spectacular\\\". As stated, our primary contribution is to 'systematically benchmark a number of existing acquisition strategies, ... on a number of datasets, evaluating the benefit of unsupervised pre-training and semi-supervised learning in all cases'.\\n\\n7. \\\"Wouldn\\u2019t the right way to do (deep) representation learning in multiple rounds be to fine-tune at least some fraction of the time? If the only claim is pre-training or pre-clustering, people certainly do this \\u2014 just often not as a point of emphasis.\\\"\\n\\nWhen training on the entire training set of labeled and unlabeled examples, fine-tuning seems indeed reasonable. However, we follow a standard protocol [Chitta et al., 2019] which involves training from scratch in each cycle. This keeps our pipeline simple and makes our work comparable to all previous work. 
It is also argued that training from scratch helps to avoid local minima potentially reached by the previous model due to the smaller number of labels. In fact, we have experimented with fine-tuning and our preliminary findings were that indeed performance was slightly inferior, while gains in speed were not significant as convergence was slow. We did not pursue this further as it was not the main focus of this work.\\n\\n8. \\\"I don\\u2019t understand the ensemble model analogy in the abstract; is it because it is a \\u2018meta-algorithm\\u2019?\\\"\\n\\nUsing ensemble models is an idea orthogonal to the choice of AL acquisition functions and improves performance in all cases, much like integrating unsupervised and semi-supervised methods into the AL pipeline.\"}", "{\"title\": \"Response to Reviewer #1 (Part 1/2)\", \"comment\": \"Thank you for your review. Please find our response below.\\n\\n1. \\\"... the interesting question would be to compare multiple pre-training techniques and ideally the relative effect on the active learning component (assuming this is the focus of the paper). ... Since this is a negative results without a theoretical contribution, I would again expect trying several semi-supervised algorithm and evaluating their relative performance in general and wrt the active learning querying strategy. Accordingly, I don\\u2019t think the contribution of this work in its current state is sufficiently well-developed - and would lean toward rejecting in its current form.\\\"\\n\\nThe focus is indeed AL, this is why we consider several options for acquisition function. The focus is not unsupervised or semi-supervised learning, this is why we make a single choice for each. As we explicitly discuss in section 7, 'Our pipeline is as simple as possible, facilitating comparisons with more effective choices, which can only strengthen our results'. Stated otherwise, there is at least one choice of unsupervised and semi-supervised learning that yields significantly greater gain than any AL acquisition function over Random, or even makes all AL acquisition functions significantly inferior to Random in certain cases in the few label regime. This raises the question of rethinking at least how we should evaluate deep AL methods, as implied by the title. Any stronger unsupervised pre-training or semi-supervised method could only increase the gain, thus strengthening our conclusions.\\n\\nConsidering that we already experiment on several acquisition functions and cycles, several datasets and label budgets, with/without PRE, with/without SEMI, adding any more options would also make our experiments cluttered; the plots of Fig. 4 are already hardly readable.\\n\\nWe cannot follow the argument that a negative empirical result needs to be compensated by exhaustive sets of experiments. If a future work needs to validate that a new acquisition function outperforms others even in the presence of unsupervised/semi-supervised learning, would that validation need to include several options too?\\n\\n2. \\\"However, the current contribution is basically that: (1) active learning doesn\\u2019t seem to really help, (2) semi-supervised learning and unsupervised learning improve performance for this task. Since (1) was really the point of the paper (as stated) in the title, I don\\u2019t think there is enough here to accept in its current form.\\\"\\n\\nDoes that mean that negative results are not welcome?\\n\\n3. 
\\\"With respect to semi-supervised learning, they have validated that inductive label propagation [Issen, et al., 2019] works for this task, but haven\\u2019t shown that this helps with active learning. ... The empirical emphasis is more around overall performance rather than the interaction between unsupervised representation learning and active learning, which is more toward the stated goal of the paper.\\\"\\n\\nIt is known that unsupervised pre-training and semi-supervised learning help. The same is known for AL. What is not known is what is the relative gain of each of the three components on the same experimental setup. Algorithm 1 is exactly a combination of the three components and different combinations are systematically evaluated across all datasets, label budgets, and acquisition strategies. Our work is exactly on the interaction of the different components:\\n- The finding that AL strategies are all outperformed by Random in certain cases of limited labels, indicates the effect of the quality of the representation on AL.\\n- The a new acquisition function (jLP), more than a technical innovation, is exactly investigating whether manifold similarity (an idea coming from label propagation) helps in acquisition (which it doesn't, in line with our claim that unlabeled data should contribute to parameter updates).\\n- The study of Appendix B investigates the effect of selected examples on label propagation, partially explaining why different acquisition strategies perform similarly, at least in the presence of label propagation.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors study the problem of incorporating unsupervised (representation pre-training) learning and semi-supervised learning into active learning for image classification; specifically, performing pre-training before active learning starts [Caron, et al., 2018] and then applying inductive label propagation [Issen, et al., 2019] (slightly modification in the cost function to look more like importance sampling) before active learning querying occurs for each round (Algorithm 1). The most novel technical innovation of this submission is the joint label propagation (jLP) querying function (which is a method of \\u2018spanning\\u2019 the learned manifold space). Experiments are conducted on four (multi-class) image classification datasets (MNIST, SVHN, CIFAR-10, CIFAR-100), showing that unsupervised learning and semi-supervised learning can improve active learning on these datasets \\u2014 although random selection often works better (as best as I can tell) implying that negative results are also a contribution of this paper. Finally, some active learning experiments are conducted using a per-round label budget of one example per class \\u2014 also demonstrating mixed results with random sampling performing better in general.\\n\\nIn my mind, this paper has two primary components: (1) taking the position that semi-supervised and unsupervised learning can improve overall performance and, in principle, help with active learning and (2) propose jLP, which is a learning algorithm agnostic approach to spanning the manifold space. However, jLP doesn\\u2019t really seem to work in general. 
Thus, the main result is the first point \\u2014 updating previous (pre-deep learning) results on SS/US AL to deep learning. Honestly, I think the primary conclusion is that semi-supervised and unsupervised learning have improved over the past decade (especially semi-supervised learning for image classification). The second result is that active learning in deep learning (at least for this application) hasn\\u2019t kept up. Wrt (1), as the authors have pointed out, many others have applied semi-supervised learning to AL (including some that the authors didn\\u2019t include). Additionally, many have used unsupervised learning for AL (which the authors seem less aware of), from pre-clustering (e.g., [Nguyen & Smeulders, Active Learning using Pre-clustering; ICML04]) to one/few-shot learning (e.g., [Woodward & Finn, Active One-Shot Learning; NeurIPS16 workshop]) to using pre-trained embeddings for many \\u2018real-world tasks\\u2019 (e.g., NER [Shen, et al., Deep Active Learning for Named Entity Recognition; ICLR18] using word2vec). Thus, the interesting question would be to compare multiple pre-training techniques and ideally the relative effect on the active learning component (assuming this is the focus of the paper). With respect to semi-supervised learning, they have validated that inductive label propagation [Iscen, et al., 2019] works for this task, but haven\\u2019t shown that this helps with active learning. Since this is a negative result without a theoretical contribution, I would again expect trying several semi-supervised algorithms and evaluating their relative performance in general and wrt the active learning querying strategy. Accordingly, I don\\u2019t think the contribution of this work in its current state is sufficiently well-developed \\u2014 and would lean toward rejecting in its current form.\\n\\nBelow are some additional detailed comments (some also covered above): \\n\\u2014 Given that this points toward a negative result, a more convincing direction to take would be to consider more combinations of unsupervised and semi-supervised approaches \\u2014 specifically emphasizing how they affect the active learning component. This might point to more general findings and maybe toward a theory (maybe even consider a second application).\\n\\u2014 The empirical emphasis is more around overall performance rather than the interaction between unsupervised representation learning and active learning, which is more toward the stated goal of the paper.\\n\\u2014 Wouldn\\u2019t the right way to do (deep) representation learning in multiple rounds be to fine-tune at least some fraction of the time? 
If the only claim is pre-training or pre-clustering, people certainly do this \u2014 just often not as a point of emphasis.\n\u2014 The \u2018first semi-supervised\u2019 claim really only holds in the context of deep learning; however, the scope is really more like semi-supervised learning applied to image classification, which would be a pretty narrow contribution.\n\u2014 Overall, there is a general overstatement of contributions and results: this is certainly not the first SSAL or USAL and the statement relative to deep learning is subtle; some of the empirical results are interesting, but I am not sure about \u2018spectacular gains\u2019 (and these gains aren\u2019t seemingly due to the contribution of the paper).\n\u2014 I don\u2019t understand the ensemble model analogy in the abstract; is it because it is a \u2018meta-algorithm\u2019?\", \"some_more_positive_notes\": [\"It is interesting that there is some contradictory evidence relative to [Wang, et al., 2017; Gal, et al., 2017]; this is probably worth digging into a bit deeper.\", \"The experimental details are well-described given space constraints.\", \"In summary, there are some interesting observations that are probably worth pursuing. However, the current contribution is basically that: (1) active learning doesn\u2019t seem to really help, (2) semi-supervised learning and unsupervised learning improve performance for this task. Since (1) was really the point of the paper (as stated in the title), I don\u2019t think there is enough here to accept in its current form.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper explores the setting where unsupervised/semi-supervised learning is combined with active learning. The results are that active learning doesn't really help. This paper is interesting in that it provides additional experiments for the intersection of active learning and unsupervised/semi-supervised learning. However, I don't really see the point of this paper. Active learning and unsupervised/semi-supervised learning have been combined before, and there are other papers submitted to ICLR this year that combine these. The paper does not claim to provide anything new algorithmically (other than jLP, which appears to work no better than random and isn't really advertised as the point of this paper). The only conclusion that I can draw is that sometimes unsupervised/semi-supervised learning works better than active learning, but no understanding of when and why this is the case (from other papers, it is not always the case).\", \"comments\": [\"Although the paper claims to yield a general framework, it only does so partially. For instance, the framework in this work is restricted to semi-supervised methods that use pseudo-labels.\", \"It may be the case that active learning doesn't help or even hurts because the batch size is too large and/or the initial seed set size is too small. Although this paper varies the acquisition strategies, these other hyper-parameters are equally, if not more, important.\"]}", "{\"comment\": \"The paper title is a very friendly description of the devastating results (for active learning).
We recently had the same finding with a slightly different experimental design: semi-supervised learning consistently outperforms active learning on the same data pool by a large margin, and the combination of semi-supervised and active learning is hardly better than just semi-supervised learning alone.\", \"title\": \"Indeed, some serious rethinking is necessary about deep active learning\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper argues that active learning (AL) methods should combine unsupervised and semi-supervised learning during the iterative training process. Combining these complementary approaches is indeed sensible, and this work is therefore a welcome effort. However, the results are quite mixed, and in fact seem to suggest that AL is rather ineffective. Therefore, what one might take from these results is that unsupervised and semi-supervised learning methods can boost predictive performance; but I think this is widely appreciated already. Perhaps a better framing for this work is: AL using standard metrics seems to be comparatively ineffective, especially when one uses pre-training/semi-supervised learning.\", \"some_specific_comments_and_questions\": [\"The authors have decided to frame this paper in terms of improving AL using un/semi-supervised learning. But given that, by the authors' own admission, the \\\"random baseline may actually outperform all other acquisition strategies by a large margin\\\", what is the motivation for adopting \\\"AL\\\" at all? I mean, if we are performing random (iid) sampling, this just reduces to vanilla learning with pre-training and semi-supervision; the 'active' component becomes irrelevant.\", \"I think the characterization of AL is not quite right on page 2. The authors write that AL focuses on the \\\"least certain\\\" instances. This is often true -- namely under the popular uncertainty sampling regime -- but not all acquisition strategies use this heuristic. Indeed, even the geometry method the authors use explicitly ignores classifier confidence.\", \"The use of sampling in the SSL component is interesting, although an ablation here investigating this specific choice (as opposed to, say, naive sampling with uniform probability over unlabeled instances) would have been welcome.\", \"I would not characterize the gains brought by unlabeled data here as \\\"spectacular\\\".\", \"As is often the case in work on AL, there is no real notion of a 'test set' here; instead the authors repeat experiments using different seed label sets. It is not entirely clear how much hyperparameter/architecture fine-tuning was performed informally, but there is a lot going on here, so I would assume at least some. Therefore there is a risk that all results reported are in some sense optimistic, potentially being \\\"overfit\\\" to these datasets. It would be best to provide additional comparisons of approaches on completely unseen datasets.\"]}" ] }
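The reviews above repeatedly reference graph-based label propagation as the semi-supervised component. For readers unfamiliar with it, here is a minimal, self-contained sketch of the classic transductive diffusion variant (Zhou et al.-style); the inductive method cited as [Issen, et al., 2019] builds on the same idea. All function names, variable names, and hyperparameters below are illustrative assumptions, not code from the submission under review:

    import numpy as np

    def label_propagation(W, y_labeled, labeled_idx, n_classes, alpha=0.99, n_iter=50):
        """Diffuse labels over an affinity graph W (n x n, symmetric, non-negative)."""
        n = W.shape[0]
        d = W.sum(axis=1)
        d[d == 0] = 1.0                        # guard isolated nodes
        S = W / np.sqrt(np.outer(d, d))        # symmetric normalisation D^-1/2 W D^-1/2
        Y = np.zeros((n, n_classes))
        Y[labeled_idx, y_labeled] = 1.0        # clamp the labeled seeds
        F = Y.copy()
        for _ in range(n_iter):
            F = alpha * S @ F + (1 - alpha) * Y
        return F                               # soft pseudo-labels for all points

    # toy usage: 5 points, 2 classes, points 0 and 4 labeled
    W = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    F = label_propagation(W, np.array([0, 1]), np.array([0, 4]), n_classes=2)
    print(F.argmax(axis=1))

The confidence of the resulting soft pseudo-labels is the kind of quantity that the importance-sampling-style reweighting of the cost function mentioned in Review #1 would act on.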
r1ghgxHtPH
Blurring Structure and Learning to Optimize and Adapt Receptive Fields
[ "Evan Shelhamer", "Dequan Wang", "Trevor Darrell" ]
The visual world is vast and varied, but its variations divide into structured and unstructured factors. We compose free-form filters and structured Gaussian filters, optimized end-to-end, to factorize deep representations and learn both local features and their degree of locality. In effect this optimizes over receptive field size and shape, tuning locality to the data and task. Our semi-structured composition is strictly more expressive than free-form filtering, and changes in its structured parameters would require changes in architecture for standard networks. Dynamic inference, in which the Gaussian structure varies with the input, adapts receptive field size to compensate for local scale variation. Optimizing receptive field size improves semantic segmentation accuracy on Cityscapes by 1-2 points for strong dilated and skip architectures and by up to 10 points for suboptimal designs. Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.
[ "scale", "deep learning", "dynamic inference", "fully convolutional" ]
Reject
https://openreview.net/pdf?id=r1ghgxHtPH
https://openreview.net/forum?id=r1ghgxHtPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Az2LN4BH1R", "SylSK0_jjr", "B1xLU0OosH", "HJg6MAOsjB", "Skg0uXMXqS", "Bye9gaWgqB", "S1e3I7w9KH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740859, 1573781116582, 1573781069859, 1573781013358, 1572180854400, 1571982578276, 1571611476350 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2113/Authors" ], [ "ICLR.cc/2020/Conference/Paper2113/Authors" ], [ "ICLR.cc/2020/Conference/Paper2113/Authors" ], [ "ICLR.cc/2020/Conference/Paper2113/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2113/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2113/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes an interesting idea of inserting Gaussian convolutions into ConvNet in order to increase and to adapt effective receptive fields of network units. The reviewers generally agree that the idea is interesting and that the results on CityScapes are promising. However, it is hard not to agree with Reviewer 3, that validation on a single dataset for a single task is not sufficient. This criticism is unaddressed.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Efficiency, the Use of Later Layers, Qualitative Results, and Further References\", \"comment\": \"Thank you for the feedback, and especially for coupling each point with advice for improvement.\\n\\n> improved efficiency (one of the main claims) is only assessed on the number of parameters\\n\\nOur main claim is to make filter size differentiable and unbounded (Figures 1 & 2), and we make use of Gaussian structure to do so with parameter efficiency. The decoupling of filter size from the number of filter parameters is the point. That said, computational efficiency is important too, and relative to the use of larger kernels are method saves a significant amount of computation and memory (Figure 2 and Sec. 4.1). Relative to standard deformable convolution (Figure 6), there is a 18x reduction in memory usage going from 2*k^2 offsets to 1 spherical covariance parameter, but this is only a minor effect in the large-scale architectures in current use. With respect to sample efficiency, we do not expect the inclusion of additional Gaussian parameters to train on any less data, since by composition there is no reduction in the free-form parameters.\\n\\nThe suggestion to explore whether Gaussian receptive fields make it possible to train more effective deeper (or shallower) nets is an interesting further direction, but we focus on characterizing their effect in already established architectures like DLA and DRN.\\n\\n> Why modifying only later layers in the architecture (end of 4.1)? \\n\\nFor dynamic inference, the scale regressor needs to have sufficient receptive field itself to infer how to adjust the receptive field for the task. Sufficient receptive field is achieved by including these layers later in the network. A fuller analysis of early/intermediate/late usage would be informative for future work. Here we have concentrated on the static vs. dynamic instead.\\n\\n> [...] what are typical covariances learned?\\n> typical hard cases [like boundaries] where blurring might be counterproductive?\\n\\nFigure 7 is representative of the learned dynamic covariances, in particular showing their range and how they vary within and across segments. 
Note that boundaries are respected in the covariance maps, in that scale can change sharply from one side to the other, and boundaries are estimated to be small. By transforming the filters, and not the input, nearby pixels can have far-apart scales in this way.\n\nFor learned static covariances, for instance in the DLA architecture, different covariances are learned across the skips. The deepest layer is merged with such a large covariance that it is effectively global pooling, which is of interest because the original architecture lacked a global feature (this does not hurt localization, because features from shallower layers maintain resolution).\n\n> missing reference\n\nThank you for the further references on relevant but distinct filtering methods, which we can certainly include in the related work.\n\n- Lee et al. compose large kernels with a differentiable mask such that learning this mask controls the filter size. In contrast with our work, the mask approach requires more parameters for larger filters (as discussed in our Figure 2), and still has a bounded maximum size equal to the kernel size.\n- Su et al. adaptively multiply filters by a fixed Gaussian kernel for spatially-varying weighting. Their filtering does not learn or adapt the size of the Gaussian, as is the focus of our work for learning receptive field size.\"}", "{\"title\": \"Clarifying Gaussian Sharing, Sampling, and Blurring\", \"comment\": \"Thank you for your feedback, and the precise clarification questions, which we address point-by-point:\n\n> single gaussian is shared across different free-form filters. Is the same gaussian also shared across input channels?\n\nThe Gaussian is shared across all input and output channels of a layer. In effect, this lets a layer learn/adapt a shared scale for all of its filters. Not sharing the Gaussians, for channel-wise scaling, is an extension for future work.\n\n> For dynamic inference, what is the sampling resolution used?\n\nWe experimented with setting the sampling rate to 2*sigma, as we did for static filtering, but found a constant sampling rate (as shown in Figure 6) to suffice in our experiments. That said, we expect that more extreme ranges of scale would require setting the resolution as a function of sigma, or else the sampling could be too sparse.\n\n> In case of blurring and resampling, does the model learn another filter for sampling? To me, sampling seems similar to dynamic inference operation but with static parameters.\n\nThis is exactly right. The sampling coordinates and the blurring filter are determined by the same covariance. This is analogous to smoothing and decimation when forming a pyramid: smoothing alone would merely blur, but Gaussian filtering and then resampling/dilating the following filter instead changes scale.\n\n> blurring is fundamental when dilating. Do the DRN-A and DLA-34 models used for comparison in Table 1 include blurring prior to dilation?\n\nYes, but results with these architectures were not sensitive to this, since the dilation rates (2, 4) are not so large. The effect of blur was stronger for ASPP and CCL (Table 3) with larger rates (6, 12, 18).\"}", "{\"title\": \"Two-step Convolution and the Gaussian as a Prior for Learning\", \"comment\": \"Thank you for pointing out the decomposition of convolution and the role of the Gaussian parameters for clarification.\n\n> authors proposed to compose the free-form filters and structured filters with a two-step convolution.
[please] clarify why and how\n\nThe two-step decomposition (Sec. 4.1) follows from the associativity of convolution: rather than convolve the Gaussian and free-form filters and then convolve the input, we can convolve the input with the Gaussian and then with the free-form filters. The purpose is to make use of specialized filtering for the Gaussian step, in particular to use separability to reduce the complexity of filtering with a K-size filter to O(2KMN), down from O(K^2MN), for an MxN input.\n\n> authors actually introduce some prior to the learning process\n\nWe do not introduce a Gaussian prior in the sense of regularizing convolutional filters to be more Gaussian. We include Gaussian filters in composition with standard free-form filters to give our networks more parameters, not fewer, for optimizing and adapting scale (Figure 1). In this sense the Gaussian is not a prior, but a different kind of parameter. We do not aim to learn from fewer samples, but instead to learn more general networks that can better handle scale differences: results show robustness to changes in architecture and data (Table 4), improved accuracy by locally adapting scale (Table 5), and qualitatively sensible scale estimates (Figure 7).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors proposed a semi-structured composition of free-form filters and structured Gaussian filters to learn the deep representations. Experiments demonstrate its effectiveness in semantic segmentation. The idea is interesting and somewhat reasonable, but I still have several concerns:\n1.\tThe authors proposed to compose the free-form filters and structured filters with a two-step convolution. The authors are expected to clarify why and how the decomposition can realize its purpose. The authors need to further justify the method by providing more theoretical analysis, and by comparing with alternative methods. \n2.\tThe experiments are rather insufficient, and the authors are expected to make more comprehensive evaluations, e.g., more comparisons with the traditional CNN models. \n3.\tThe improvement is rather incremental compared with the alternative methods. The authors actually introduce some prior to the learning process. It would be better if the authors could show some other advantages, e.g., whether it can train the model with a smaller number of samples, and whether we can integrate other priors besides Gaussian filters for other structures, since Gaussian is a good prior for blurring.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a semi-structured neural filter composed of structured Gaussian filters and the usual structure-agnostic free-form filters found in neural networks. They are optimized using end-to-end training. Effectively, this leads to increased receptive field size and shape with just a few additional parameters. Further, this module is architecture agnostic and can also be integrated with any dynamic inference models.
Specifically, when applied on deformable convolutional filters, the deformation at each input can be structured using Gaussian filters. Empirical experiments suggest that when integrated with state-of-the-art semantic segmentation architectures, the absolute accuracy on Cityscapes improves by 2%. A large improvement is seen on naive / sub-optimal architectures for segmentation.\n\nGiven that this is the first work which demonstrates the efficient composition of classic structured filters with neural layer filters, I believe that the research community will benefit to a good extent if this paper is accepted.\", \"clarification\": \"1. I note that a single gaussian is shared across different free-form filters. Is the same gaussian also shared across input channels?\n2. For dynamic inference, what is the sampling resolution used? How is it related to the diagonal elements of the covariance? 2\\sigma?\n3. In the case of blurring and resampling, does the model learn another filter for sampling? To me, sampling seems similar to the dynamic inference operation but with static parameters.\n4. As noted in the paper, blurring is fundamental when dilating. Do the DRN-A and DLA-34 models used for comparison in Table 1 include blurring prior to dilation?\", \"additional_experiment\": \"1. Does improved receptive field size and shape also lead to improvement in other downstream tasks such as classification, object detection, depth estimation, etc.?\n2. Table 4 shows that networks with reduced depth, when integrated with composed filters, can perform as well as large networks. Does this hold true when extended to the above tasks? \n3. I note that in all the presented results, the composed filters are only included at the last few layers. How do the results turn out when they are included at the lower as well as at the intermediate layers? Please include a plot of accuracy vs depth (at which it is included).\n4. I am glad to note that Gaussian deformable models perform as well as free-form deformable models with largely reduced parameters. Can you please add a total network parameter comparison in Table 5? Further, are these also included only at the top few layers?\n5. In Table 1, DLA-34 + DoG?\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Summary:\", \"- key problem: improved visual representation learning with limited increase in parameters by leveraging Gaussian structure;\", \"- contributions: 1) compose Gaussian blurs and free-form convolutional filters in an end-to-end differentiable fashion, 2) showing that learning the covariance enables a factorized parameter-efficient representation covering wide and flexibly structured filters, 3) experiments on CityScapes showing the proposed layers can help improve semantic segmentation performance for different architectures (DRN, DLA, and ResNet34).\"], \"recommendation\": \"weak reject\", \"key_reason_1\": [\"mismatch between the generality of the claims and experiments.\", \"Learning to adapt and optimize receptive fields successfully would be a great fundamental improvement to CNN architectures. Experiments are done on a single dataset for a single task, which seems insufficient to support the generality of the approach and claims in the submission.
I would recommend using other datasets (e.g., COCO) and tasks (e.g., object detection, instance segmentation, depth estimation/completion), where the benefits of the approach could be demonstrated more broadly and clearly (including its inherent trade-offs).\", \"The improved efficiency (one of the main claims) is only assessed on the number of parameters, which is a direct consequence of the parametrization. Is it significant at the scale of the evaluated architectures? Does it result in runtime performance benefits? If it is indeed a useful structural inductive bias, does it result in improved few-shot generalization performance or less overfitting? Does it enable learning deeper networks on the same amount of data?\", \"Why modify only later layers in the architecture (end of 4.1)? It seems that early layers would make sense too, as that is where most of the downsampling happens.\"], \"key_reason_2\": [\"lack of clarity and details.\", \"Section 1 and the beginning of section 4 are repetitive and verbose; in particular, Sections 4.1 and 4.2 would benefit from less textual description, replaced by more concise mathematical formulas (simpler in this case), especially in order to know the details behind the methods compared in Tables 1-2-3.\", \"Overall, the paper could contain less text describing the hypothetical advantages of the method and the basic preliminaries (section 3), to focus more on the method itself, its details and evaluated benefits. In particular, the dynamic part (section 4.2) is unclear and the method is mostly described in one sentence: \\\"To infer the local covariances we learn a convolutional regressor, which is simply a convolutional filter.\\\" Another example of the lack of details is \\\"many\\\" vs. \\\"some\\\" in the \\\"params\\\" column of Table 4.\", \"There is also a missed opportunity to provide compelling feature visualizations and qualitative experiments (beyond Fig. 7). For instance, what are the typical covariances learned? What are the failure modes that the proposed modifications address, in particular w.r.t. thin structures and boundaries that are typical hard cases for semantic segmentation and where blurring might be counterproductive?\"], \"additional_feedback\": [\"missing reference: Learning Receptive Field Size by Learning Filter Size, Lee et al, WACV'19;\", \"missing reference (w.r.t. local filtering): Pixel-Adaptive Convolutional Neural Networks, Su et al, CVPR'19\"]}" ] }
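The author response in this thread argues that the Gaussian step can be made cheap via separability: filtering an M x N input with a K x K Gaussian costs O(K^2MN), while two 1-D passes cost O(2KMN). As a concrete illustration of that claim, here is a minimal self-contained sketch; the function names are assumptions for illustration, not code from the paper:

    import numpy as np

    def gaussian_kernel1d(sigma, radius=None):
        if radius is None:
            radius = max(1, int(3 * sigma))    # cover roughly 3 standard deviations
        x = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def separable_blur(img, sigma):
        """Gaussian blur via two 1-D convolutions instead of one 2-D convolution."""
        k = gaussian_kernel1d(sigma)
        out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
        out = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)
        return out

    img = np.zeros((9, 9)); img[4, 4] = 1.0    # unit impulse
    out = separable_blur(img, sigma=1.0)
    print(round(out.sum(), 3))                  # mass is (approximately) preserved

Because a 2-D Gaussian factorizes into an outer product of two 1-D kernels, the two-pass result matches the full 2-D blur while touching each pixel about 2K times instead of K^2 times, which is where the quoted complexity reduction comes from.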
BkxoglrtvH
Layerwise Learning Rates for Object Features in Unsupervised and Supervised Neural Networks And Consequent Predictions for the Infant Visual System
[ "Rhodri Cusack", "Cliona O'Doherty", "Anna Birbeck", "Anna Truzzi" ]
To understand how object vision develops in infancy and childhood, it will be necessary to develop testable computational models. Deep neural networks (DNNs) have proven valuable as models of adult vision, but it is not yet clear if they have any value as models of development. As a first model, we measured learning in a DNN designed to mimic the architecture and representational geometry of the visual system (CORnet). We quantified the development of explicit object representations at each level of this network through training by freezing the convolutional layers and training an additional linear decoding layer. We evaluate decoding accuracy on the whole ImageNet validation set, and also for individual visual classes. CORnet, however, uses supervised training, and because infants have only extremely impoverished access to labels, they must instead learn in an unsupervised manner. We therefore also measured learning in a state-of-the-art unsupervised network (DeepCluster). CORnet and DeepCluster differ in both supervision and in the convolutional networks at their heart; thus, to isolate the effect of supervision, we ran a control experiment in which we trained the convolutional network from DeepCluster (an AlexNet variant) in a supervised manner. We make predictions on how learning should develop across brain regions in infants. In all three networks, we also tested for a relationship in the order in which infants and machines acquire visual classes, and found only evidence for a counter-intuitive relationship. We discuss the potential reasons for this.
[ "deep learning", "unsupervised", "supervised", "infant learning", "age of acquisition", "DeepCluster", "CORnet", "AlexNet" ]
Reject
https://openreview.net/pdf?id=BkxoglrtvH
https://openreview.net/forum?id=BkxoglrtvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "shxyC0wba", "SylW7oE2or", "Byx94FVsoB", "HylJUqycsr", "r1lrJqk5sB", "rkeLLY15sB", "H1emeU9CtH", "BJlwdkiTtr", "HkgYR6zaFH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740831, 1573829401022, 1573763377845, 1573677639137, 1573677532647, 1573677389572, 1571886570746, 1571823470884, 1571790288802 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2112/Authors" ], [ "ICLR.cc/2020/Conference/Paper2112/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2112/Authors" ], [ "ICLR.cc/2020/Conference/Paper2112/Authors" ], [ "ICLR.cc/2020/Conference/Paper2112/Authors" ], [ "ICLR.cc/2020/Conference/Paper2112/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2112/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2112/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper investigates the properties of deep neural networks as they learn, and how they may relate to human visual learning (e.g. how learning develops across regions of the infant brain). The paper received three reviews, all of which recommended Weak Reject. The reviewers generally felt the topic of the paper was very interesting, but overall felt that the insights that the paper revealed were relatively modest, and had concerns about the connections between DNN and human learning (e.g., the extent to which DNNs are biologically plausible -- including back propagation, batch normalization, random initialization, etc. -- and whether this matters for the conclusions of the present study). In response to comments, the authors undertook a significant revision to try to address these points of confusion. However, the reviewers were still skeptical and chose to keep their Weak Reject scores.\\n\\nThe AC agrees with reviewers that investigations of the similarity -- or not! -- between infant and deep neural networks is extremely interesting and, as the authors acknowledge, is a high risk but potentially very high reward research direction. However, in light of the reviews with unanimous Weak Reject decisions, the AC is not able to recommend acceptance at this time. I strongly encourage authors to continue this work and submit to another venue; this would seem to be a perfect match for CogSci conference, for example. We hope the reviews below help authors to improve their manuscript for this next submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"An iterative approach\", \"comment\": \"We appreciate your scepticism: this research was conceived and explicitly funded as a high-risk/high-gain programme. However, we believe the challenge is tractable because it can be tackled in an iterative way. Our current goal is to find initial points of contact between DNNs and infants, in the form of correspondences that are only loosely affected by the specific architecture, learning rule, optimiser, and hyperparameters. Once this initial contact is made, we will proceed to gradually more specific correspondences, iteratively refining our DNN design with the goal of converging on a more infant-like model. In the course of this process, we will aim to identify places where existing DNNs deviate strongly from infants. 
For example, if a subset of visual classes, such as faces or things that move, are learned much earlier by infants than any of the DNNs we test, this will suggest that perhaps an innate \\u201cface template\\u201d or \\u201cmovement salience\\u201d is needed in the DNNs.\\n\\nThank you for the time you have taken to engage with this work.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your careful and considered response.\\n\\nRegarding the parallel between DNN training and human learning, it seems to me that while there is a nontrivial body of work exploring the relationship between adult brains and fully-trained DNNs, there is not yet much work investigating whether the DNN training process should correlate with human learning. \\nAs such, perhaps this work can serve as a stepping stone in this direction as the new abstract suggests. \\n\\nHowever, I am still skeptical about whether there is enough evidence to indicate that DNN training is likely to correlate with human learning. It may be that an existing DNN architecture correlates well with what we know about the human brain after the architecture is finished training, but that doesn't necessarily indicate that the same should be true during training. For DNN training itself, there are many variations such as different optimizers (SGD, SGD with momentum, Adam, etc) and other hyperparameters (gradient clipping, learning rate, etc); how should we pick these to best model human development?\\n\\nAs for random weight initialization, I agree that a significant proportion of visual knowledge must be learned; however, the ability of babies to quickly perform tasks like looking at faces may indicate that there is at least some level of pre-encoded visual knowledge (e.g. edge detection, looking for areas of contrast, etc) in the human brain, which randomly initialized DNNs do not have. If babies do not need to learn such knowledge from scratch, but DNNs do, then it seems reasonable to expect that that could also lead to different outcomes during DNN training.\\n\\nGiven my lack of experience in this area, I will defer to the other reviewers who have indicated greater experience.\"}", "{\"title\": \"Revisions to address your suggestions\", \"comment\": \"Thank you for taking the time to consider our paper and for providing interesting and constructive feedback. We edited the paper accordingly, clarifying and correcting where appropriate.\\n\\n>> The work did show that unsupervised training results in a different pattern of layer learning than supervised learning, but neither form of learning was able to model the development in children. Perhaps a self-supervised multimodal learning system should be tried?\\n>> The decision to train the DeepCluster type net in a supervised way for a control on training method vs architecture type is nice, but it would also have been good to try other kinds of networks.\\n\\nWe have now added the following paragraph to section 4.1.\\n* \\u201cIn future work, it will be important to extend the investigations to further DNNs. These could test generalisation to variants on the strategies, such as local aggregation, a recently proposed unsupervised training objective, which like DeepCluster learns a visual embedding based on clustering of images (Zhuang et al., 2019). 
It would be informative to investigate a wider range of objectives, for example testing DNNs that exploit other structures in visual input such as temporal prediction (Lotter et al., 2016) or cross-modal learning (Wang et al., 2013).\u201d\n\n\n>> It is not clear that age of acquisition of the verbal word should be related to age of acquisition of the visual concept. The authors state \"A number of linguistic factors are known to affect when words are first used, including the frequency of the word in language and its number of phonemes, but the second strongest factor is the \"concreteness\" of the word (Braginsky et al., 2015). This suggests that the strength of the visual representation of a class has an effect on when its label is acquired.\" It is not clear to me how the concreteness of a word relates to the strength of visual representation.\n\nWe have added some further resources which we believe better support our rationale for using age of acquisition (AoA) as a proxy for visual class acquisition. \n* We have further unpacked the pros and cons of this measure. At present, there are no measures of the age of acquisition of visual classes on a sufficient scale, which is now highlighted in section 2.4. \n* The relationship of concreteness to ventral visual stream representations has been established. Studies of \u201cembodiment theory\u201d using fMRI and EEG have shown that reading a concrete noun evokes visual representations of the object (Anderson et al. 2015, Kellenbach et al. 2000). This is now discussed in Aim 2 of the paper and section 2.4. These results suggest that learning a word will build upon these visual representations.\nGiven the evidence and constraints, we believe that AoA is a valuable starting point.\n\nWe hope you agree that the paper is greatly strengthened.\n\nReferences\n Anderson AJ, Bruni E, Lopopolo A, Poesio M, Baroni M. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text. NeuroImage. 2015 Oct 15;120:309-22.\n Kellenbach ML, Wijers AA, Mulder G. Visual semantic features are activated during the processing of concrete words: Event-related potential evidence for perceptual semantic priming. Cognitive Brain Research. 2000 Sep 1;10(1-2):67-75.\"}", "{\"title\": \"Revisions to address your comments\", \"comment\": \"Thank you for taking the time to consider our paper and for providing constructive feedback. We edited the paper accordingly, clarifying and correcting where appropriate.\n \nTo better articulate the rationale for using DNNs as a model of infant visual development, we have thoroughly revised the introduction in three ways: \n* We now describe what is known about infant visual development, explaining the need for computational models that will help us to further understand how the brain learns, and we highlight possible practical applications.
\n* DNNs are already being extensively used as computational models for adult vision, and in the second section of the introduction we present the relevant literature (Yamins and DiCarlo, 2016).\n* We have clarified how the DNNs\u2019 learning process represents a promising tool to understand macro-scale characteristics of infants\u2019 visual development.\n\nMoreover, we address your doubts about the parallel between the DNN training process and human learning in the following ways:\n* Although backpropagation is not biologically plausible, it can be approximated by a number of biologically plausible mechanisms, including the synaptic plasticity observed in top-down processes. The value of DNNs as models of the brain has also been recently discussed in two high-profile reviews (Richards et al., 2019; Sinz et al., 2019).\n* The parallel we intended to draw was unclear. DNNs are currently the best models of the adult visual system, but they do not attempt to map artificial neurons onto biological ones. Instead, activation profiles across the layer of a DNN show macroscale similarities to activation patterns in a brain region. We propose that similar macroscale statistics might provide insight into learning in the infant brain. This has been clarified throughout.\n* Infants\u2019 access to labels is extremely impoverished. However, there is evidence that by 3-4 months of age, they can cluster together visually similar stimuli (Quinn, 1993). DeepCluster was therefore chosen as it is a state-of-the-art unsupervised DNN which relies on a clustering strategy. Again, it was not our intention to claim a parallel between the specifics of the network and the brain, but rather that the DNN might capture some of the macroscale properties of the brain.\n* We agree that batch normalisation is not biologically plausible, or even applicable to online learning in machines. We have highlighted this in the discussion.\n\nFinally, we clarified the roles of genetic coding, learning, and random weight initialization:\n* The overall architecture of the brain is strongly shaped by genetics. However, the genetic code is far too limited to encode even a small proportion of the synaptic weights in the brain, and so a significant proportion of visual knowledge must be learned. This is why we learn to recognise object categories that are too recent to be specified in the genetic code, such as Pok\u00e9mon, as highlighted by a recent paper (Janini and Konkle, 2019). Indeed, during the early stages of brain development, a large excess of synaptic connections is created, which are then pruned, to leave only the important ones. This may easily be thought of as a random initialisation followed by learning.\n\nWe hope you agree that the paper is greatly strengthened.\n\nReferences\n Janini D, Konkle T. A Pok\u00e9mon-sized window into the human brain. Nature human behaviour. 2019 Jun;3(6):552.\n Quinn PC, Eimas PD, Rosenkrantz SL. Evidence for representations of perceptually similar natural categories by 3-month-old and 4-month-old infants. Perception. 1993 Apr;22(4):463-75.\n Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ. A deep learning framework for neuroscience. Nature neuroscience. 2019 Nov;22(11):1761-70.\n Sinz FH, Pitkow X, Reimer J, Bethge M, Tolias AS. Engineering a less artificial intelligence. Neuron. 2019 Sep 25;103(6):967-79.\n Yamins DL, DiCarlo JJ.
Using goal-driven deep learning models to understand sensory cortex. Nature neuroscience. 2016 Mar;19(3):356.\"}", "{\"title\": \"Revisions to address your comments\", \"comment\": \"Thank you for taking the time to consider this paper and for providing constructive suggestions.\n\nTo articulate the rationale for using DNNs as a model of human learning, we have thoroughly revised the introduction. We address your comment on backpropagation in three ways:\n* Although backpropagation is not biologically plausible, it can be approximated by a number of biologically plausible mechanisms, including the synaptic plasticity observed in top-down processes (Richards et al., 2019; Sinz et al., 2019). This has been clarified in section 1.3.\n* The parallel we intended to draw was unclear. DNNs are currently the best models of the adult visual system, but they do not attempt to map artificial neurons onto biological ones. Instead, activation profiles across the layer of a DNN show macroscale similarities to activation patterns in a brain region. For example, the degree to which two images cause similar patterns of activity in a DNN has proven predictive of the degree to which they create similar patterns of activity in the brain, when a person or monkey is viewing them. We propose that similar macroscale statistics might provide insight into learning in the infant brain. This has been clarified throughout.\n* We have cited two recent high-profile reviews presenting the value of DNNs as models of the brain (Richards et al., 2019, Sinz et al., 2019).\", \"we_have_expanded_upon_the_motivation_for_choosing_specific_dnns\": \"* CORnet-S was chosen as it is currently the top performing DNN on the Brain-Score benchmark (Schrimpf et al. 2018; www.brain-score.org) and is therefore the best existing model of neural firing in the mature ventral visual stream. \n* Infants\u2019 access to labels is extremely impoverished. However, there is evidence that by 3-4 months of age, they can cluster together visually similar stimuli (Quinn, 1993). DeepCluster was therefore chosen as it uses a clustering strategy, and it is one of the best performing unsupervised strategies for learning object representations. Again, it was not our intention to claim a parallel between the specifics of the network and the brain, but rather that the DNN might capture some of the macroscale properties of the brain.\n\nFinally, we have thoroughly revised the discussion (section 4) to clarify the contribution of this work to neuroscience and machine learning.\n\nThank you also for identifying the minor issues, which we have addressed. We hope you agree that the paper is greatly strengthened.\n\nReferences\n Quinn PC, Eimas PD, Rosenkrantz SL. Evidence for representations of perceptually similar natural categories by 3-month-old and 4-month-old infants. Perception. 1993 Apr;22(4):463-75.\n Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ. A deep learning framework for neuroscience. Nature neuroscience. 2019 Nov;22(11):1761-70.\n Schrimpf M, Kubilius J, Hong H, Majaj NJ, Rajalingham R, Issa EB, Kar K, Bashivan P, Prescott-Roy J, Schmidt K, Yamins DL. Brain-Score: which artificial neural network for object recognition is most brain-like?. BioRxiv. 2018 Jan 1:407007.\n Sinz FH, Pitkow X, Reimer J, Bethge M, Tolias AS. Engineering a less artificial intelligence. Neuron.
2019 Sep 25;103(6):967-79.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper attempts to model the development of the human visual system in infants, by training deep neural network architectures inspired by the human visual system on images from ImageNet, and learning a linear decoder on the outputs of each layer (following Zhang et al 2017) to measure how much information useful for distinguishing between classes is contained within each layer in the architecture. The paper measures the amount of class information in each layer over the progress of training.\n\nI agree that deep networks could serve as good models for various parts of the brain, including the visual system, especially given that convolutional networks have been inspired from studies of the visual system. However, the paper doesn't seem to provide any evidence for how the training process used for deep neural networks should correspond to the development of the visual system in infants. In particular, backpropagation is considered biologically implausible [1], whereas backpropagation serves as the main method for learning in the neural networks. Furthermore, neural networks have randomly initialized parameters, whereas it seems unlikely that human infants' brains would lack existing organization to such a drastic extent. In order for the results in this paper to hold greater weight, I would expect to see more evidence about how the neural network training process (also including aspects such as batch normalization, and the self-supervised clustering method in DeepCluster) is expected to correlate with learning in human brains.\n\nFor the above reasons, I vote to reject the paper. My conclusions above are based on my surface-level knowledge of neuroscience, so I welcome any clarifications or corrections from the authors about the above points.\n\n[1] https://arxiv.org/abs/1502.04156\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims to examine whether DNNs are a good model of infant\nbehavior and development. The paper is very well written and easy to\nread.\n\nThe goal was to compare the development of object representations\nacross layers with the development in children and to compare the\norder of learning of different object classes.\n\nThe work did show that unsupervised training results in a different\npattern of layer learning than supervised learning, but neither form\nof learning was able to model the development in children. Perhaps a\nself-supervised multimodal learning system should be tried?\n\n\nThe decision to train the DeepCluster-type net in a supervised way as\na control on training method vs architecture type is nice, but it would also\nhave been good to try other kinds of networks.\n\n\nIt is not clear that age of acquisition of the verbal word should be\nrelated to age of acquisition of the visual concept.
The authors\nstate \"A number of linguistic factors are known to affect when words\nare first used, including the frequency of the word in language and\nits number of phonemes, but the second strongest factor is the\n\"concreteness\" of the word (Braginsky et al., 2015). This suggests\nthat the strength of the visual representation of a class has an\neffect on when its label is acquired.\" It is not clear to me how the concreteness\nof a word relates to the strength of visual representation.\n\nI don't think there is enough new insight gained from this paper for ICLR publication\nat this stage.\n\nMinor comments:\n\nWhat are the dashed lines in the Figure 2 top left box?\n\nFunding acknowledgement (especially with grant number) should not be\nin an anonymous submission.\n\nmagenetic\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\n\nThe paper aims to understand how object vision develops in infancy and childhood by using deep learning models. In particular, it chooses two deep nets, CORnet and DeepCluster, to measure learning. CORnet is supervised and is designed to mimic the architecture and representational geometry of the visual system. It tries to quantify the development of explicit object representations at each level of this network through training by freezing the convolutional layers and training an additional linear decoding layer. The paper evaluates the decoding accuracy on the whole ImageNet validation set and for individual visual classes. DeepCluster differs in both supervision and in the convolutional network. To isolate the effect of supervision, it ran a control experiment in which the convolutional network from DeepCluster (an AlexNet variant) is trained in a supervised manner. The paper tries to draw conclusions on how learning should develop across brain regions in infants. In all the networks, it also tested for a relationship in the order in which infants and machines acquire visual classes, and found only evidence for a counter-intuitive relationship. \n\nLimitations\n\nThe topic is extremely interesting and worth intense study. However, the approach is not convincing. CORnet may have some relevance. It is not clear how well it models the representational geometry of the visual system. It is even less clear whether DeepCluster is relevant. Why would it be related to infant learning? \n\nThe whole idea of using DNNs to infer biological learning is built on shaky ground, given how little we know about the learning mechanism of the brain. In particular, back propagation is not widely considered possible in biology. Given that the learning mechanism may be very different, what is the basis of using DNNs to study infant learning?\n\nThe findings are also not very surprising and do not offer much for the community.\n\nGiven that the paper lacks rigor and findings, it does not meet the bar of ICLR.\"}" ] }
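The linear-probe protocol that this forum keeps returning to (freeze the convolutional layers, train only an added linear decoding layer, and read off how much class information each layer carries) can be summarized in a short PyTorch-style sketch. Here `frozen_net`, `loader`, and the hyperparameters are illustrative assumptions, not the authors' code:

    import torch
    import torch.nn as nn

    def train_linear_probe(frozen_net, loader, feat_dim, n_classes, epochs=5, lr=1e-3):
        """Fit a linear decoder on top of a frozen feature extractor."""
        for p in frozen_net.parameters():
            p.requires_grad_(False)            # the representation stays fixed
        frozen_net.eval()
        probe = nn.Linear(feat_dim, n_classes)
        opt = torch.optim.Adam(probe.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():
                    f = frozen_net(x)          # features only, no gradients upstream
                loss = loss_fn(probe(f), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return probe

Probe accuracy at a given layer and training checkpoint then measures how much linearly decodable class information that layer carries, which is the quantity the paper tracks over the course of training.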
r1xjgxBFPB
Continual Deep Learning by Functional Regularisation of Memorable Past
[ "Pingbo Pan", "Alexander Immer", "Siddharth Swaroop", "Runa Eschenhagen", "Richard E Turner", "Mohammad Emtiyaz Khan" ]
Continually learning new skills without forgetting old ones is an important quality for an intelligent system, yet most deep learning methods suffer from catastrophic forgetting of the past. Recent works have addressed this by regularising the network weights, but it is challenging to identify weights crucial to avoid forgetting. A better approach is to directly regularise the network outputs at past inputs, e.g., by using Gaussian processes (GPs), but this is usually computationally challenging. In this paper, we propose a scalable functional-regularisation approach where we regularise only over a few memorable past examples that are crucial to avoid forgetting. Our key idea is to use a GP formulation of deep networks, enabling us to both identify the memorable past and regularise over them. Our method achieves state-of-the-art performance on standard benchmarks and opens a new direction for life-long learning where regularisation methods are naturally combined with memory-based methods.
[ "Continual learning", "deep learning", "functional regularisation" ]
Reject
https://openreview.net/pdf?id=r1xjgxBFPB
https://openreview.net/forum?id=r1xjgxBFPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "-MJSAJaZXj", "r1lb8nRosB", "Hkev9olmiS", "rJgGDoemsr", "SylF4jgXjr", "BygTT5g7sr", "SkxrQ_HCKS", "SJezZcn6FS", "Skgef0KaYH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740802, 1573805129188, 1573223310911, 1573223257748, 1573223216984, 1573223109042, 1571866652845, 1571830266025, 1571819015748 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2111/Authors" ], [ "ICLR.cc/2020/Conference/Paper2111/Authors" ], [ "ICLR.cc/2020/Conference/Paper2111/Authors" ], [ "ICLR.cc/2020/Conference/Paper2111/Authors" ], [ "ICLR.cc/2020/Conference/Paper2111/Authors" ], [ "ICLR.cc/2020/Conference/Paper2111/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2111/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2111/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This work tackles the problem of catastrophic forgetting by using Gaussian processes to identify \\\"memory samples\\\" to regularize learning.\\n\\nAlthough the approach seems promising and well-motivated, the reviewers ultimately felt that some claims, such as scalability, need stronger justifications. These justifications could come, for example, from further experiments, including ablation studies to gain insights. Making the paper more convincing in this way is particularly desirable since the directions taken by this paper largely overlap with recent literature (as argued by reviewers).\", \"title\": \"Paper Decision\"}", "{\"title\": \"Paper improvements\", \"comment\": \"We have update the paper in line with the reviewers' feedback.\", \"summary_of_changes\": [\"We added backward transfer and forward transfer metrics on split CIFAR for continual learning. They show how FROMP outperforms the baselines. We also added an upper bound of a model jointly trained on all tasks. FROMP performs close to this model, especially on tasks 4-6.\", \"We added a visualisation of memorable past vs random examples for split MNIST. The memorable past examples are harder to distinguish from other classes, in line with the toy example in Figure 1.\", \"We added a paragraph discussing the time complexity of our algorithm. It is small for small memorable past size M.\", \"We added a detailed discussion regarding FRCL.\", \"We added detailed hyperparameters for our experiments.\", \"We added more references and expanded upon some previous work (as suggested by AnonReviewer1).\", \"We cleaned up some notation and explanation in Section 3.3 and the Algorithm (and Appendix A). Please note that nothing technical has changed, and the overall algorithm is exactly the same.\", \"Many thanks for your time.\"]}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your review. Our response is given below.\", \"regarding_the_use_of_deeper_networks\": \"This paper develops a new method for continual learning. In line with previous literature in the area, we first evaluate the method on standard continual learning benchmarks established to evaluate new methods including EWC, SI, VCL and FRCL. In terms of scaling up, our current implementation requires Jacobian computation, which requires additional implementation to speed up computation. 
We hope to do this in the future.\", \"regarding_the_dependance_on_the_number_of_the_coreset\": \"We believe there is a misunderstanding about our experiments. Coresets are an important part of FROMP, and are the only way in which past information is propagated. Using very few coresets is not meaningful since there is very little to \\u201cremember\\u201d from the past. Comparing such small-size coreset cases to methods \\u201cwithout coresets\\u201d is not meaningful as these are complementary approaches, not direct competitors. As the coreset size gets very large, the selection strategy is not expected to matter. The purpose of Fig. 3c and 4b is to show that increasing coreset size improves results as expected, and using selected coresets rather than random is useful when the size of coreset is small. For example, selectively choosing a coreset of size 10 is about the same as randomly choosing 30 (on split CIFAR, Figure 4b). The ultimate number of coreset examples depends on the problem (e.g. data and network size). We are happy to discuss this further if this is unclear. Thanks!\", \"training_time\": \"We will add a discussion in the paper. Our algorithm only adds a small computational overhead on top of Adam on a standard neural network. The additional complexity scales cubically in M, the coreset size. This is due to the inversion of the kernel in fr_grad. Another overhead is the computation of Jacobian which is order PKM, where K is the dimensionality of the output and P is the number of parameters. Both of these additional costs are small for small coreset sizes M.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks for your comments about the strengths and weaknesses of our work. Our response is given below.\\n\\nWe agree regarding the comparison with FRCL, but this is a very recent work and there is no available code. We will try to add this in the camera-ready but it will depend on the reproducibility of the FRCL paper (e.g. if they provide all the details necessary to reproduce results).\\n\\nWe also agree on your comment about experimental details. We shall add them. For permuted MNIST, we used 10 tasks.\\n\\nRegarding your comment about EWC, could you provide a reference regarding this? We have reported the results from [1]. It is also possible that 97% is obtained with a much larger network than ours.\\n\\n[1] Nguyen, Cuong V et al. Variational continual learning. ICLR, 2018.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your long and useful review. We will first provide a short summary of our response, before going into more detail.\\n- In terms of scalability, we test on standard small to medium size benchmarks, with complexity above Adam (on a standard neural network) dependent on M, the coreset size. \\n- We will add more details comparing our method to FRCL [1], and provide a short summary below. \\n- We will provide more metrics for measuring forgetting. \\n- We will add a more detailed review of the literature on other coreset selection strategies, but unlike our strategy, these do not naturally fit within our framework. \\n- We respectfully disagree with you on our claim about our method being state-of-the-art being false.\\n\\n1. Scalability: Our algorithm only adds a small computational overhead on top of Adam on a standard neural network. This is what we mean by scalable. The additional complexity scales cubically in M, the coreset size. This is due to the inversion of the kernel in fr_grad. 
Another overhead is the computation of Jacobian which is order PKM, where K is the dimensionality of the output and P is the number of network parameters. Both of these additional costs are small for small coreset sizes M. We will add these details to make these points clear in the paper.\\n\\n2. Comparison to FRCL [1]: \\n(a) Thank you for raising this point. FRCL proposes using the last layer of the neural network as kernel features. This is limiting as it does not use the whole network\\u2019s weights, unlike what we do. A more important issue is the difficulty of optimising inducing points; they are usually obtained by an ad-hoc procedure. In comparison, we provide a simple, effective way that is naturally consistent with our GP formulation. As per your suggestions, we will add a more detailed discussion explaining this.\\n(b) There is a misunderstanding about our statement on \\u201ctractability of the objective function only when we assume independence across tasks\\u201d. This is not about the task boundaries. We mean that the GP used in FRCL defines separate kernels for each task, since otherwise the kernel is too big.\\n\\n3. Measuring forgetting: Thank you for raising this point. We agree and will provide these. We are trying our best, but these may not be available by the end of the rebuttal, in which case we will add them in the next version of the paper.\\n\\n4. Prior work: We discuss other works in Section 1 (\\u201ctwo separate methods are usually used for regularisation and memory-building\\u201d), and we will expand upon this sentence, going into more detail, and also referencing iCaRL and other works (including [3]). Note that our method of choosing a memorable past follows directly from the theory in Section 3.1, and is achieved with a single forward-pass through the trained network (as mentioned in the paper). Other techniques for sample selection do not integrate so naturally with the framework, and are not as straightforward to understand or implement either.\\n\\n5. Claim on state-of-the-art: We respectfully disagree with you that our claim is false. The 99.59% accuracy of HAT [2] on split MNIST is achieved with a much larger network. On the network we use, HAT achieves 91.6% on permuted MNIST, significantly lower than FROMP (94.9%), FRCL (94.3%) and VCL (93%). The openreview link you provided also uses a much larger network size (1200 units per hidden layer, as opposed to 256). We will add a reference to this work.\\n\\n[1] Titsias, Michalis K et al. Functional regularisation for continual learning using gaussian processes. arXiv preprint arXiv:1901.11356, 2019.\\n[2] Serr\\u00e0, J., Sur\\u00eds, D., Miron, M. & Karatzoglou, A.. (2018). Overcoming Catastrophic Forgetting with Hard Attention to the Task. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:4548-4557.\\n[3] Ebrahimi, Sayna, et al. \\\"Uncertainty-guided Continual Learning with Bayesian Neural Networks.\\\" arXiv preprint arXiv:1906.02425 (2019).\"}", "{\"title\": \"Thanks to reviewers\", \"comment\": \"We would like to thank all the reviewers for their reviews, and the time they put into providing feedback. We will update the paper incorporating their feedback. 
We are in the process of obtaining some further metrics and visualisations as suggested by the reviewers, and will report them once we have them.\\n\\nWe will now address the points made by each reviewer in turn.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: The paper uses a Gaussian Processes framework previously introduced in [1] to identify the most important samples from the past for functional regularization. For evaluation, the authors report their average accuracy on Permuted MNIST, Split-MNIST, and CIFAR10-100 and achieve superior performance over EWC, DLP, SI, VCL-Coreset, and FRCL.\", \"pros\": \"(+): The paper is well-written, addressed the prior work quite well despite missing a few important works from the past (more on this later)\\n(+): The paper is well motivated\", \"cons_that_significantly_affected_my_score_and_resulted_in_rejecting_the_paper_are_as_follows\": \"1- lack of support for \\u201cscalability\\u201d:\\nAuthors claim their method is scalable in several parts of the paper (abstract in line 7, Section 3 in the 1st paragraph, and Section 5 in Discussion). However, this claim is not supported in the experimental setting as the benchmarks used are only toy datasets (Permuted MNIST, Split MNIST, and CIFAR10 followed by CIFAR100) where the maximum # of tasks considered is 10 and the maximum size of the datasets is 60K, which is not convincing for the ability to scale. There is also no time complexity provided. \\n\\n2- Incremental novelty over the prior work (FRCL by Titsias et al 2019):\\nThis baseline is the closest prior work to this work which, according to the experiments shown in Table 2, is slightly outperformed by the proposed method (for example, for P-MNIST the gain is 0.6%+-0.1), and there is a lack of complete discussion on how the two methods are different. Particularly I suggest that the authors elaborate more on their claimed differences stated on page 4, paragraph 5 such as \\u201ctractability of the objective function only when we assume independence across tasks\\u201d. Do the authors mean assuming a clear task boundary between tasks? If so, have they considered a \\u201cno-task\\u201d or an \\\"overlapping\\u201d task boundary in their experiment? Isn't it necessary to back this up if it is stated as a shortcoming of FRCL? Also, how do these methods differ in their computational expenses?\", \"3__lack_of_measuring_forgetting\": \"This is the most important drawback in the experimental setting. Authors indicate on page 3 \\u201cOur goal in this paper is to design methods that can avoid such catastrophic forgetting.\\u201d and reiterate this in other parts of the paper, yet there is no forgetting evaluation to support this claim. Authors can simply report the initial performance of the model on each task so that readers can compare it with the reported accuracy after being done with all tasks. Having a method with high average accuracy does not necessarily mean it has minimum forgetting. 
You can use forgetting measurements such as Backward Transfer (BWT) introduced in [1] or forgetting ratio defined in [4] for this assessment.\", \"4__ambiguous_claims_about_prior_work\": \"(a) On page 1, paragraph 3, when the authors mention that methods such as GEM or iCaRL use random selection to pick previous samples, I think the line of follow-up work on these methods, which has explored different techniques for sample selection and provided benchmark comparisons (e.g. [2,3]), should be mentioned as well. In fact, it would be beneficial if the authors could compare the samples selected by their method versus other sampling techniques. \\n(b) On page 1, paragraph 3, they mention some prior work such as GEM and iCaRL \\u201cdo not take uncertainty of the output into account\\u201d. While this is true, there have been methods proposed that use uncertainty of the output for parameter regularization [5]. It appears to be a parallel work to this but it\\u2019s worth mentioning to prevent false claims.\", \"5__claim_on_the_state_of_the_art_should_be_double_checked\": \"Although the results shown for the experiments are superior to the provided baselines, there is an important baseline missing which has achieved higher performance than the reported ones. It is also missing from the prior work list. Serra et al [4] proposed a method at ICML 2018 called HAT, which is a regularization technique with no memory usage that learns an attention mask over parameters and was shown to be very effective on both short and long sequences of significantly different tasks. They do not use samples from previous tasks, yet achieve good average ACC as well as minimum forgetting ratio. Note that 5-Split MNIST is not reported in [4], but a recent work has reported HAT\\u2019s performance on this dataset (https://openreview.net/forum?id=HklUCCVKDB) that achieves 99.59%. I recommend the authors provide a comparison of their own on the given benchmarks with the original HAT\\u2019s implementation (https://github.com/joansj/hat) before claiming to be SoTA. In my opinion, it is not an issue if a novel method achieves slightly lower performance than the SoTA because I think it still adds value and proposes a new direction. However, a false claim should not be stated.\\n\\nLess major (only to help, and not necessarily part of my decision assessment):\\n\\n1- Providing upper bound?\\nIt is common to show an upper bound for any continual learning algorithm by showing joint training performance which is considered to be the maximum achievable performance. I also recommend showing the naive baseline of fine-tuning for the proposed method, which can often give insight into the maximum forgetting ratio.\\n\\n2- Forward transfer?\\nRegularization techniques combined with memory might have an ability to perform zero-shot transfer, or so-called FWT. I recommend the authors provide such a metric to further support their method.\\n\\n3- Hyper parameter tuning?\\nIt is also worth mentioning how the tuning process was performed. In continual learning we cannot assume that we have access to all tasks' data, hence the authors might want to shed some light on this.\", \"references\": \"[1] Khan, Mohammad Emtiyaz, et al. \\\"Approximate Inference Turns Deep Networks into Gaussian Processes.\\\" arXiv preprint arXiv:1906.01930 (2019).\\n\\n[2] Chaudhry, Arslan, et al. \\\"Continual Learning with Tiny Episodic Memories.\\\" arXiv preprint arXiv:1902.10486 (2019). (https://arxiv.org/abs/1902.10486)\\n\\n[3] Aljundi, Rahaf, et al. 
\\\"Gradient based sample selection for online continual learning.\\\" arXiv preprint arXiv:1903.08671 (2019). (https://arxiv.org/abs/1903.08671)\\n\\n[4] Serr\\u00e0, J., Sur\\u00eds, D., Miron, M. & Karatzoglou, A.. (2018). Overcoming Catastrophic Forgetting with Hard Attention to the Task. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:4548-4557\\n\\n[5] Ebrahimi, Sayna, et al. \\\"Uncertainty-guided Continual Learning with Bayesian Neural Networks.\\\" arXiv preprint arXiv:1906.02425 (2019).\\n\\n\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"post_rebuttal_response_from_r1\": \"Thank you for taking the time and replying to comments. Here are my responses to authors' replies:\\n\\n[Authors' response:] 1. Scalability: Our algorithm only adds a small computational overhead on top of Adam on a standard neural network. This is what we mean by scalable. The additional complexity scales cubically in M, the coreset size. This is due to the inversion of the kernel in fr_grad. Another overhead is the computation of Jacobian which is order PKM, where K is the dimensionality of the output and P is the number of network parameters. Both of these additional costs are small for small coreset sizes M. We will add these details to make these points clear in the paper.\\n\\n[Reviewer's response:] I still insist on the fact that simply explaining the overhead of a method is not a support for scalability claim versus showing the performance on a large scale dataset and comparing it with other CL methods that also have high scalability given the fact that authors only use MNIST and CIFAR datasets.\\n\\n[Authors' response:] 4. Prior work: We discuss other works in Section 1 (\\u201ctwo separate methods are usually used for regularisation and memory-building\\u201d), and we will expand upon this sentence, going into more detail, and also referencing iCaRL and other works (including [3]). Note that our method of choosing a memorable past follows directly from the theory in Section 3.1, and is achieved with a single forward-pass through the trained network (as mentioned in the paper). Other techniques for sample selection do not integrate so naturally with the framework, and are not as straightforward to understand or implement either.\\n\\n[Reviewer's response:] I disagree with authors on this because GEM, its faster version (A-GEM (Chaudhry et al. 2018)), and all other methods explored in the recent study which I mentioned in my review (Ref#2) use the single epoch protocol and are perfect match to be compared with this method but there is no memory-based baseline except for VCL with coreset and FRCL (only for MNIST variations) which makes it difficult to measure this method's capabilities (performance, memory size, and computational time) against methods which only require one epoch to be trained.\\n\\nAuthors have provided FWT for their method as 6% which is unbelievably large for this metric (see GEM paper) and hence does not make sense to me. Please double check whether you computed this value right. 
\\n\\nWhile I accept the authors' responses to the remaining questions, I am still concerned about the weak experiments and an issue brought up by R3 regarding the lack of comparisons with FRCL on any other datasets besides split MNIST and P-MNIST. Also, in the CIFAR experiment, what is the architecture used across the baselines? More importantly, in the results reported for VCL on CIFAR, it is not clear to me how the authors obtained these results. Did they use a conv net? VCL was originally shown on MLPs only and it is one of the downsides of this method that it was never shown to work in convolutional networks. Therefore, it is important to mention how they are obtained. This might explain the reason for the huge forgetting reported for VCL with coreset (\\u22129.2 \\u00b1 1.8) as opposed to \\u22122.3 \\u00b1 1.4 for EWC, which is really strange, as VCL even without a coreset (on permuted MNIST, for example) is reported to be superior to EWC by a large margin (6%) in the original VCL paper. Overall I am concerned about the experimental setup and some of the reported results and hence intend to keep my score.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary\\nThe paper proposes a method for continual learning called Functional Regularization of Memorable Past (FROMP) which maintains the output distribution of models on memory samples. FROMP uses the Laplace approximation and a Gaussian process with the neural tangent kernel (NTK) to approximate the output distribution. The samples to be stored are selected according to a leverage score strategy, which tends to select the samples of highest variance. \\n \\nStrengths\\nTo some extent, I think the proposed method is novel, although there is a similar work named Functional Regularisation for Continual Learning (FRCL). FROMP is the first to use the NTK in a Gaussian process for continual learning, and it proposes a new strategy for selecting memory samples.\\nThe strategy of selecting samples to be stored is simple and effective.\\nThe method achieves a good performance.\\nThe paper is clearly written and easy to follow.\\n \\nWeaknesses\\nIt needs more experimental comparisons between FROMP and FRCL, such as adding comparison results of FROMP and FRCL for Split-Cifar. Currently, this paper only shows the performance on Permuted MNIST and Split MNIST, but those two benchmarks are quite simple and the improvement is limited. \\nThe experimental section needs more detailed analysis. At least in the current version, it is not clear how many tasks are used in Permuted MNIST. The settings of the hyper-parameters for dropout are not provided.\\n \\nOther comments \\nIn this paper, for the Split MNIST experiment with multi-head, it is shown that EWC achieves worse results than SI. However, in my experiment, the precision of EWC is larger than 97%. In theory, I think they should have similar performance, and at least the discrepancy in accuracy between them should not be as big as shown in this paper. 
I expect the authors to explain this point.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a new functional regularization method with a Gaussian process, which follows a similar direction to two recent works (Khan et al., Titsias et al.).\\nTo perform functional regularization, they introduce a small coreset selected from previous dataset instances, called the memorable past. They select the most memorable samples based on eigenvalues. The model FROMP outperforms baselines and their ablations. However, the experiments are only performed on shallow networks; the method should also be applied to much deeper networks, such as ResNet. Also, in the experiment results, I feel the performance of FROMP largely depends on the coreset size, while 'important' selection shows only marginal effects even on split CIFAR. \\nFROMP shows higher performance than FRORP with only a few examples, but this is not a meaningful result, since the performance is too poor anyway and even worse than the old baseline, EWC. \\n\\nI have several questions about the paper.\\n\\n- How about the training time of FROMP? I wonder whether utilizing or selecting memorable pasts requires much training time.\\n\\n- Is there an analysis like figure 1 on a real dataset, such as MNIST or CIFAR?\"}" ] }
Hyl9xxHYPr
Demystifying Inter-Class Disentanglement
[ "Aviv Gabbay", "Yedid Hoshen" ]
Learning to disentangle the hidden factors of variations within a set of observations is a key task for artificial intelligence. We present a unified formulation for class and content disentanglement and use it to illustrate the limitations of current methods. We therefore introduce LORD, a novel method based on Latent Optimization for Representation Disentanglement. We find that latent optimization, along with an asymmetric noise regularization, is superior to amortized inference for achieving disentangled representations. In extensive experiments, our method is shown to achieve better disentanglement performance than both adversarial and non-adversarial methods that use the same level of supervision. We further introduce a clustering-based approach for extending our method for settings that exhibit in-class variation with promising results on the task of domain translation.
[ "disentanglement", "latent optimization", "domain translation" ]
Accept (Poster)
https://openreview.net/pdf?id=Hyl9xxHYPr
https://openreview.net/forum?id=Hyl9xxHYPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "-fUDu4ZUH4", "rklwiTzhiH", "r1xm5GG2sr", "H1xVCs-nsH", "HklL3yW9sB", "ryla4a5BiB", "H1lSLh9SoH", "SklcVicBor", "rkeZVQEecr", "SJxMpdc5KS", "HyeLAxM9FB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740772, 1573821854850, 1573819019219, 1573817292094, 1573683118495, 1573395765274, 1573395533338, 1573395249852, 1571992360829, 1571625146420, 1571590350021 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2110/Authors" ], [ "ICLR.cc/2020/Conference/Paper2110/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2110/Authors" ], [ "ICLR.cc/2020/Conference/Paper2110/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2110/Authors" ], [ "ICLR.cc/2020/Conference/Paper2110/Authors" ], [ "ICLR.cc/2020/Conference/Paper2110/Authors" ], [ "ICLR.cc/2020/Conference/Paper2110/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2110/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2110/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a novel method for class-supervised disentangled representation learning. The method augments an autoencoder with asymmetric noise regularisation and is able to disentangled content (class) and style information from each other. The reviewers agree that the method achieves impressive empirical results and significantly outperforms the baselines. Furthermore, the authors were able to alleviate some of the initial concerns raised by the reviewers during the discussion stage by providing further experimental results and modifying the paper text. By the end of the discussion period some of the reviewers raised their scores and everyone agreed that the paper should be accepted. Hence, I am happy to recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #2 - Part 3\", \"comment\": \"We thank the reviewer for the dedicated and fruitful review.\\n\\nTable 5 only shows losses on Cars3D as we have fully assessed the results of the entire ablation study only on Cars3D (as presented in Table 3). As per your request, an extended ablation evaluation on the other datasets (including individual loss values) will be added to the paper.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your clarification. I sincerely appreciate the effort you put into responding to my request and will update my score accordingly. I still have some reservations about some of the claims put forth in the paper, but I believe these reservations are outweighed by the experimental merits of the paper.\\n\\nRegarding the training and test loss, may I ask why Table 5 only shows Cars3D? I would appreciate if the authors can make a commitment to provide the training and test loss for all of the datasets used in the paper. \\n\\nI think the numbers will be valuable in shedding light on the behavior of GLO v. amortization. For example, I am surprised that you achieve better training loss without amortization than with, especially since you only take a single gradient step on the latent code. This defies conventional wisdom that amortization accelerates optimization. Perhaps with enough gradient steps, single-gradient-step-GLO will ultimately still achieve better training loss than amortization, just as is the case in Table 5. 
Having access to Table 5 for all of your datasets would be illuminating and also give practitioners a better sense of what to expect when they try out your method versus others.\"}", "{\"title\": \"Response to Review #2 - Part 2\", \"comment\": \"Thank you for your response.\\n\\n1. \u201cprovide the training and test losses\u201d - We provide the training and test losses in Tab. 5 (added to appendix A.4). \\n\\n2. \u201cRegarding the clarification of the UDT preprocessing procedure, can you describe how Algorithm 1 (A.8) matches up to what you did for Edges2Shoes\u201d - In the task of unsupervised domain translation on the Edges2Shoes dataset, we first perform per-class clustering in which shoe images are clustered into 100 styles, and edge images are clustered into 100 styles as well (although clustering the edges is done for technical reasons and does not affect the results significantly, good results are obtained regardless of clustering the edge images). We then define the class as the unique label of domain label x style label (For example: 1-100 class ids for the shoe images and 101-200 class ids for the edges).\"}", "{\"title\": \"Request for training and test losses\", \"comment\": \"Thanks for the response. Can you, as requested, provide the training and test losses (Eq 6 and its decomposition into reconstruction + regularization terms) for all your models?\\n\\nRegarding the clarification of the UDT preprocessing procedure, can you describe how Algorithm 1 (A.8) matches up to what you did for Edges2Shoes? What counts as a class label versus a style label for Edges2Shoes? When performing k-means clustering, did you do that only on the shoes images, on the collection of both shoes+edges images, or something else?\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the dedicated and positive review.\\n\\n\u201cHow was the sigma tuned for the regularization? Are the results dependent on this parameter?\u201d: In all our experiments, we used a fixed value of sigma=1; it is possible that better results may be obtained with a different value of sigma.\\n\\n\u201cDoes the randomness in content code during the training account for the variations in the images not covered by class code and content code?\u201d: The purpose of regularizing the content code with random noise and activation decay is to obtain disentangled representations (by minimality of information) and not for diverse generation. In the datasets considered, the assumption that variation not covered by class and content is small holds quite well. In cases where this assumption is insufficient, we introduced the preliminary style clustering step.\\n\\n\u201creliant on the imagenet trained VGG perceptual loss \u2026 do the authors anticipate any limitations generalizing to datasets such as in medical domains, etc.\u201d: Relying on the VGG loss is obviously a limitation for non-image datasets (although perceptual losses exist in other modalities). Regarding images, previous research has shown that the VGG loss is quite generally effective, e.g. Yang et al. [1] used the VGG loss successfully in the medical domain.\\n\\n\u201cWhy is lighting chosen as the class label in the datasets?\u201d: We followed the SmallNORB protocol in DrNet, which kept object identity, lighting and elevation constant. We further extended this protocol to another configuration in which the elevation also varied. 
\\n\\n\\u201cWhat are limitations from the assumption of low variability within a class?\\u201d: The limitation is in the case where high intra-class variability is not explained by the content (and is not expected to be transferred across classes). In cases where the assumption is not satisfied, reconstruction will suffer. Our method can work even in such cases e.g. in the CelebA experiments. In cases where the non-content intra-class variability is very high, we perform the preliminary style-clustering step (e.g. Edges2Shoes). For more details, please see our response to R1. \\n\\n[1] Yang, Qingsong, et al. \\\"Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss.\\\" IEEE transactions on medical imaging, 2018.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for the dedicated and positive review, and for recognizing the novelty of the method and significance of the results.\\n\\n\\u201cLORD achieves significantly better performance than the state-of-the-art baseline on non-adversarial disentanglement methods\\u201d: We wish to highlight that additionally to outperforming non-adversarial methods, our method outperforms state-of-the-art adversarial methods such as DrNet and StarGAN.\\n\\n \\u201cAuthors claim that no information leakage between class and content representation \\u2026 experiments only verify 'no class information in content code', but miss the inverse proposition\\u201d: Table.2 already contains both class classification from content code as well as content classification from class code. The results show that our method achieves near perfect disentanglement on both directions.\\n\\n\\u201cThis paper is based on the assumption that \\u2018inter-class variation is larger than intra-class variation\\u2019. Authors should verify their assumption\\u201d: Our image formation model, models images as being formed by class, content and residual (style) codes. The intra-class variation is formed by both the content and the residual information. The content is transferable between classes, the residual information is not. Given class and content codes, if the residual information is small, reconstruction will be successful (as demonstrated in our experiments). If the residual information is very significant, it will not be possible to reconstruct images well only based on class and content leading to poor image formation models. For example, in the Cars3D experiment, the class labels represent the car model, content codes represent azimuth and elevation, and there is no residual information. In this case LORD performs well. We perform an exploratory experiment in which we aggregate similar car models into a single unified class (163 original car models are clustered into 50 super classes). In this case, the residual information contains the specification of the exact car model within the super class. The residual information is therefore significantly larger. The class and content information is not sufficient for reconstructing the original image perfectly. Quantitatively, we found that the reconstruction error increased from 32.5 to 55.64. A visualization of this experiment is provided in the appendix (A.7). It should be noted that our method can work well in cases where there is a moderate amount of residual information, e.g. in the CelebA experiments. Moreover, in datasets which exhibit large intra-class variations (e.g. 
Edges2Shoes) we introduce our preliminary step of style clustering which significantly reduces intra-class variation.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for the dedicated review and for recognizing our \u201cimpressive empirical results\u201d. The reviewer raised several valid points, which we believe can be easily addressed.\\n\\n\u201cregularizing with KL-divergence leads to posterior collapse\u201d: This phenomenon was observed in all our datasets (e.g. Cars3D, CelebA), not just in SmallNorb. However, we agree with the reviewer that this does not imply that posterior collapse always happens but only in the settings that we tested. We revised the text to clarify this. We believe that after the revision, the scope of our finding is stated precisely.\\n\\n\u201cspecial case of KL-divergence regularization\u201d: We completely agree. Although the difference is subtle, we have extensively shown that its contribution to disentanglement is significant. This was clarified in the text.\\n\\n\u201cHow much slower is GLO compared to amortized models?\u201d, \u201cHow many iterations do you employ on a given minibatch of data when using GLO?\u201d: GLO requires about twice as many iterations as amortized models for convergence. For each mini-batch, we perform a single gradient step for the generator parameters and latent codes. This was clarified in the text.\\n\\n\u201cauthors should show us the actual visualizations for the amortized models\u201d: We added the visualizations to the appendix (A.6).\", \"inductive_bias_of_latent_optimization\": \"Following the reviewer\u2019s advice we have dug deeper into the inductive bias of GLO vs. amortized models. We trained our latent optimization (no amortization) model and its semi-amortized variant on Cars3D and measured the accuracy of classifying class labels from content codes after every epoch. The change in the amount of class-dependent information contained in the content codes is presented in the appendix (A.4). It can be observed that a randomly initialized content encoder (for amortization) encodes class-dependent information, which needs to be minimized as the training evolves, i.e. initial mutual information is high and is decreased in the process of training. In latent optimization, content and class codes are randomly initialized; there is therefore zero mutual information between them. By the end of training, amortized models often do not completely remove the mutual information between the class and content codes and provide entangled representations, while a model trained with latent optimization preserves a very high degree of disentanglement. We hypothesize that achieving a similar degree of disentanglement by amortization requires a more sophisticated objective and more careful hyperparameter tuning.\\n\\n\u201cUnsupervised Domain Translation \u2026 I recommend that the authors try at least one other dataset\u201d: We have run our method on two more domain translation tasks - (i) male to female (ii) faces to anime. The results are presented in the appendix (A.9); our method performed well on both tasks. In the second task, adding the preliminary clustering step allowed for diverse face to anime translation. For added clarity, we added pseudo code (appendix A.8) precisely detailing our procedure, more explanations were added to the text and clustering code was added to our repository.\\n\\nWe believe that all of the reviewer\u2019s questions were addressed. 
The reviewer stated that addressing the questions would form a basis for raising the score.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: this paper proposes to basically combine class-conditional noisy autoencoding with GLO to achieve disentanglement while having only access to the content labels. They demonstrate that the method achieves impressive empirical results both in terms of disentanglement and a limited experiment on unsupervised domain translation.\", \"decision\": \"Weak Reject. I find the experimental results in this paper very appealing. However, I am weakly rejecting the paper because of 1) some problematic claims put forth in the paper, which I worry might mislead the reader and 2) lack of clarity in describing the procedure in the unsupervised domain translation setting.\", \"here_are_some_main_comments\": \"1. KL-divergence v. asymmetric noise\\nFirst, the authors claim that regularizing with KL-divergence leads to posterior collapse. But the particular experimental setup is tested on SmallNORB (which only has a small handful of factors of variation anyway). That KL-divergence \u201ccauses\u201d posterior collapse is a claim that must be made very carefully. There are some very specific conditions under which this is known to be true empirically (for example, see the experiments in Burda\u2019s IWAE paper and Hoffman\u2019s DLGM paper), but in general, one should be careful with this claim. Can the authors please walk back on this statement?\\n\\nSecond, it is worth noting that asymmetric noise regularization is itself actually a special case of KL-divergence regularization. When q(z|x) is forced to have a globally fixed variance, KL-divergence regularization becomes asymmetric noise regularization. \\n\\n2. Cost of training\\nOne thing I feel should be made more clear in the paper is the training cost of GLO v. amortized models. How much slower is GLO compared to amortized models? How many iterations do you employ on a given minibatch of data when using GLO? \\n\\n3. Ablation study\\nFirst, I think the authors should show us the actual visualizations for the amortized models. Without visual inspection, it\u2019s hard to gauge the significance of the numbers in Table 3. \\n\\nSecond, the authors observed that the amortized models leak class information into the content representation. I find it fascinating that GLO does not. I would like the authors to dig deeper into what exactly is the inductive bias conferred by latent optimization. As of the moment, the claim that \u201cthis variant is inferior to our fully unamortized model as a result of an inductive bias conferred by latent optimization\u201d is a vacuously true statement since we know that amortized models and unamortized models should in theory have equivalent behavior in the infinite-capacity / oracle optimizer regime. I request that the authors show us the training and test losses (Eq 6 and its decomposition into reconstruction + regularization terms). Inspecting it may shed light on the inductive bias. \\n\\n4. Unsupervised Domain Translation\\nThe result looks very good. However, the experimentation is too limited. I recommend that the authors try at least one other dataset.\\n\\nFurthermore, the description of how to apply LORD to unsupervised domain translation is uncomfortably vague. 
I am not sure if the provided code and description in the main text allow for reproduction of the UDT experiments. \\n\\nIf the authors are able to address the above questions and requests, then I am more than happy to raise my score.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes LORD, a novel non-adversarial method of class-supervised representation disentanglement. Based on the assumption that inter-class variation is larger than intra-class variation, LORD decomposes image representation into two parts: class and content representations, with the purpose of getting disentangled representations in those parts.\\nInspired by ML-VAE, the authors try to: 1. learn to reconstruct the original image from the disentangled representations. 2. eliminate the class information from content code by asymmetric noise regularization. The experimental results indicate that LORD succeeds in disentangling class information from content codes, while it outperforms style-content disentangled representations on style switching tasks (Figure 2 & 3).\", \"strengths\": \"1. LORD achieves significantly better performance than the state-of-the-art baseline on non-adversarial disentanglement methods.\\n2. In terms of confusing the classifier in \u201cClassification experiments\u201d (Table 2), the disentangled content representation of LORD behaves like a random guess. This shows that LORD is indeed effective in preventing class information from leaking into content representations.\", \"weaknesses\": \"1. This paper is based on the assumption that \u201cinter-class variation is larger than intra-class variation\u201d. Authors should verify their assumption by quantitative results and illustrate the importance of inter/intra-class variation (e.g. how much information we may lose if ignoring the intra-class variation).\\n2. Authors claim that there is no information leakage between class and content representation in Sec 1.1. However, the experiments only verify \u201cno class information in content code\u201d, but miss the inverse proposition (Is there any content information in the class code?)\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a framework, called LORD, for better disentanglement of the class and content information in the latent space of image representations. The authors operate in a setting where the class labels are known. The authors perform\\noptimization in the latent space to obtain these representations and argue that this is simpler and more effective than adversarial and amortized inference approaches.\\n\\nThe main issue that the authors tackle is the information leakage between the representations for the class and content. The authors suggest several fixes for this. Firstly, the paper makes a distinction between content and style. Content is defined as the information that is unchanged when only the class labels are changed in the data generative process. The inherent randomness in the data generative process is defined as the style. 
\\n\\nTo disallow, the leakage from content/style code to class code, the authors suggest learning fixed codes for each class that does not vary across images. That is if two images have the same class by virtue of design they will have the same class codes. \\n\\nThe reverse, leakage from class codes to content codes is achieved by adding any asymmetric noise regularization term. This also seems to be aimed at reducing the total variability in the content codes. The authors claim that this is better than the bottleneck approach such as matching the code distribution to uniform prior and provide empirical evidence. Though in theory, it is not clear why one is better than the other. How was the sigma tuned for the regularization? Are the results dependent on this parameter?\\n\\nAfter learning the class and content embeddings for each sample in the training example, a pair of encoders are learned to predict these codes for unseen test images, without the need for optimization.\", \"other_comments\": \"The style code being 0 is not clear. Does the randomness in content code during the training account for the variations in the images not covered by class code and content code. \\n\\nThe methods seem heavily reliant on the imagenet trained VGG perceptual loss. This does not seem to be an issue in the datasets shown, do the authors anticipate any limitations generalizing to datasets such as in medical domains, etc. \\n\\nWhy is lighting chosen as the class label in the datasets? It will be interesting to see how the results change with different subsets of class labels and what is captured in the style codes. \\n\\nWhat are limitations from the assumption of low variability within a class?\", \"typo\": \"Page 4 - minimally -> minimality\"}" ] }
r1lclxBYDS
On the implicit minimization of alternative loss functions when training deep networks
[ "Alexandre Lemire Paquin", "Brahim Chaib-draa", "Philippe Giguère" ]
Understanding the implicit bias of optimization algorithms is important in order to improve generalization of neural networks. One approach to try to exploit such understanding would be to then make the bias explicit in the loss function. Conversely, an interesting approach to gain more insights into the implicit bias could be to study how different loss functions are being implicitly minimized when training the network. In this work, we concentrate our study on the inductive bias occurring when minimizing the cross-entropy loss with different batch sizes and learning rates. We investigate how three loss functions are being implicitly minimized during training. These three loss functions are the Hinge loss with different margins, the cross-entropy loss with different temperatures and a newly introduced Gcdf loss with different standard deviations. This Gcdf loss establishes a connection between a sharpness measure for the 0−1 loss and margin based loss functions. We find that a common behavior is emerging for all the loss functions considered.
[ "implicit minimization", "optimization bias", "margin based loss functions", "flat minima" ]
Reject
https://openreview.net/pdf?id=r1lclxBYDS
https://openreview.net/forum?id=r1lclxBYDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "yK4J9sWXy6", "B1lQ53WCKH", "ryeeKOZatS", "ryx18TZ6dr" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740744, 1571851403252, 1571784823781, 1570737479286 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2109/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2109/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2109/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes an interesting setting in which the effect of different optimization parameters on the loss function is analyzed. The analysis is based on considering cross-entropy loss with different softmax parameters, or hinge loss with different margin parameters. The observations are interesting but ultimately the reviewers felt that the experimental results were not sufficient to warrant publication at ICLR. The reviews unanimously recommended rejection, and no rebuttal was provided.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper makes a step towards understanding of the implicit bias of optimization algorithms in deep learning. The authors consider alternative loss functions for deep networks: (1) the temperature-scaled cross-entropy loss with different values of the temperature; (2) the hinge-loss with different values of the margin parameter; (3) the Gcdf loss with different values of the variance parameter. The paper introduces the Gcdf loss which is derived as a modification of the 0-1 loss under the noise in the parameters of the linear output layer. The authors propose to use the alternative losses as measures of margin and sharpness associated with a solution found by an optimization algorithm. The experiments show how SGD in different learning scenarios (low/high learning rate and small/large batch) performs implicit minimization of the alternative loss functions with different parameters. Specifically, using larger learning rates/smaller batch sizes is shown to implicitly minimize the losses corresponding to higher values of the temperature/margin/variance. The results provide insights about margins and sharpness of solutions found by different modes of SGD.\\n\\nThe direction explored in the paper is important for the understanding of the connections between optimization, properties of the loss landscapes (such as sharpness), and generalization. The results reported in the paper are interesting. However, currently I am not convinced that the contributions are sufficient for publication at ICLR as the scope of the performed analysis is limited. In my view, the study is not comprehensive enough and the paper would benefit from incorporating additional results.\", \"detailed_comments\": \"1) My main concern is that currently there is very little explanation provided for the observed experimental findings. The paper would strongly benefit from additional results focused on identification and verification of the mechanisms behind the observed behavior of the optimizer. \\n\\n2) Many connections mentioned in the paper are left unexplored. It would help to investigate the mentioned connections between the implicit minimization of the considered losses and sharpness, curvature, and generalization. 
A similar design of the experiment can be used in which the alternative loss values can be tracked alongside the validation loss (or multiple losses) as well as the measures of sharpness and the characteristics of the Hessian.\\n\\n3) Another direction for improvement is the extension of the set of analyzed settings (as it was mentioned in the discussion section). This includes performing the analysis for a broader set of architectures (potentially with different normalization schemes), optimizers, and choices of the hyperparameters (momentum, weight decay). These experiments would help to better understand the observed phenomenon and analyze the effect of different settings.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper adds to a large body of research on implicit regularization:\\nthe observation that, in the context of deep networks, when SGD with a certain step\\nsize or batch size is used, from among parameter vectors that fit\\nthe data equally well, some are much more likely to be chosen than\\nothers. This paper attacks this question through the lens of\", \"loss_functions\": \"when SGD is applied to the usual softmax loss\\nwith a given learning rate, how does the choice of learning\\nrate affect how rapidly other loss functions are reduced? They\\npay special attention to a loss function that they call Gcdf,\\nwhich is motivated by \\\"wide minima\\\" considerations. In particular,\\nfor the Gcdf to be small, not only must training examples be\\nclassified correctly, but randomly perturbing the weights in\\nthe last layer should not change this correct prediction.\\n\\nI am convinced that Figures 6 and 12 of this paper show\\nthat optimization of the softmax with a larger step size\\nimplicitly optimizes a loss function that rewards robustness\\nto a greater extent than when a small step size is used.\\nI find this interesting.\\n\\nThe authors do a nice job of summarizing a lot of related work.\\n\\nThe Gcdf loss is similar to the ramp loss used in [1]\\n(see Section 3.1) and elsewhere, including to analyze\\ngeneralization in deep learning. It is also like the\\npotential function optimized by RobustBoost\\n(see (4) of [2]) -- the RobustBoost loss function does\\nnot scale by the norm of x, but since it is an ensemble\\nmethod, the role of x is played by the predictions of\\nmembers of the ensemble, which have a fixed scale.\\n\\nI assume that, when they evaluate the Gcdf loss for a deep\\nnetwork, they normalize by the norm of the last hidden\\nlayer. If this is true, it only captures \\\"wide minima\\\"\\nin the sense of being robust with respect to perturbations\\nof the output layer. (They seem to acknowledge this point\\nin their paper.)\\n\\nThe results in the paper are not described in enough\\ndetail to be reproduced. For example, I don't see\\nwhere they specify the architecture of the network\\nthat they used in Section 4.\\n\\nThe experiments are limited and narrow in scope.\\n\\nIn Figures 2 and 3 I don't see that they have adequately controlled\\nfor the effect of the learning rate on how fast the explicitly\\nminimized loss is reduced. Part of the effect observed is simply\\nthat, when a small learning rate is used, after a given number of\\nepochs, the weights are just not changed much, so that no loss is\\nreduced much. 
In Figure 2, I find it strange that they did not plot\\nthe values for T=1.\\n\\nFigure 6 is the most interesting to me. It seems to show that\\ntraining the same loss function with a larger learning rate\\neffectively optimizes the gcdf loss that rewards sacrificing\\ntraining error to achieve stronger robustness.\\n\\nSmall point: the first time I read Appendix A, I thought that the\\nresults were not there. It was only later that I saw the figures with\\nthe MNIST results. It would be helpful if the authors wrote\\n\\\"The results are in Figures 8-12\\\".\\n\\nWhile, as I wrote above, I did find Figure 6 interesting, I feel that\\nthe increment of this research over the large body of previous work\\non this topic is not enough to justify publication in ICLR.\\n\\n\\n\\n\\n[1] Bartlett, Peter L., Dylan J. Foster, and Matus\\nJ. Telgarsky. \\\"Spectrally-normalized margin bounds for neural\\nnetworks.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n[2] https://arxiv.org/pdf/0905.2138.pdf\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper wants to show that minimizing the cross-entropy loss will simultaneously minimize the Hinge loss with different margins, the cross-entropy loss with different temperatures and a newly introduced Gcdf loss with different standard deviations. The main contribution is a new Gcdf loss based on Gaussian-perturbed parameters. However, this loss can only be used with linear models. For deep models, the authors suggest measuring this loss only on the top layer of the model.\\n\\nThe motivation is weak. It seems most of these loss functions only depend on s_i - s_j, the difference between logits. And the optimization with cross-entropy loss wants to maximize this difference between logits corresponding to true labels and false labels, which obviously minimizes the different loss functions. So I am not surprised that optimizing the neural network with cross-entropy loss will minimize other kinds of losses.\\n\\nThe format is poor. The figures occupy most of the space, which makes the content of this paper feel somewhat thin.\", \"detailed_comments\": \"1. In Sec 3.1, it would be better to mention that y belongs to +1 and -1. Also, in the last sentence in Sec 3.1, the meaning of 'x is normalized' is ambiguous. I guess the authors mean that x has unit norm?\\n2. Will adding the regularization of the feature map norm degrade the performance?\\n3. What do the authors want to say in Sec 4.3? Does the relation of learning rate and convergence rate of the Gcdf loss indicate some non-trivial results?\\n4. I cannot understand the meaning of Figure 7. Obviously, different learning rates may lead to different training processes and thus different solutions and different s_i - s_j. But I think this is not related to the different losses. With a simple calculation, I think we can find that all of these losses have some relation with s_i - s_j, and thus we can directly say that different learning rates lead to different s_i - s_j and there is no need to relate s_i - s_j to these losses.\\n\\nOverall I find the claims of this paper somewhat weak. The Gcdf loss seems related to some kind of adversarial robustness, which could be of independent interest, but the current paper is still far from the standard of publication. 
Some more interesting and valuable directions could be the theoretical analysis of the equivalence between objectives, e.g. showing that minimizing the cross-entropy loss is equivalent to minimizing the Gcdf loss, as well as the empirical comparison between the different losses, e.g. showing that selecting some losses as the objective can boost the performance in some aspects.\"}" ] }
B1eYlgBYPH
A Deep Recurrent Neural Network via Unfolding Reweighted l1-l1 Minimization
[ "Huynh Van Luong", "Duy Hung Le", "Nikos Deligiannis" ]
Deep unfolding methods design deep neural networks as learned variations of optimization methods. These networks have been shown to achieve faster convergence and higher accuracy than the original optimization methods. In this line of research, this paper develops a novel deep recurrent neural network (coined reweighted-RNN) by unfolding a reweighted l1-l1 minimization algorithm and applies it to the task of sequential signal reconstruction. To the best of our knowledge, this is the first deep unfolding method that explores reweighted minimization. Due to the underlying reweighted minimization model, our RNN has a different soft-thresholding function (alias, different activation function) for each hidden unit in each layer. Furthermore, it has higher network expressivity than existing deep unfolding RNN models due to the over-parameterizing weights. Moreover, we establish theoretical generalization error bounds for the proposed reweighted-RNN model by means of Rademacher complexity. The bounds reveal that the parameterization of the proposed reweighted-RNN ensures good generalization. We apply the proposed reweighted-RNN to the problem of video-frame reconstruction from low-dimensional measurements, that is, sequential frame reconstruction. The experimental results on the moving MNIST dataset demonstrate that the proposed deep reweighted-RNN significantly outperforms existing RNN models.
[ "minimization", "learned variations", "optimization methods", "networks", "faster convergence", "higher accuracy", "original optimization methods", "line" ]
Reject
https://openreview.net/pdf?id=B1eYlgBYPH
https://openreview.net/forum?id=B1eYlgBYPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qn_a7pjqMj", "B1eJSH0sir", "SJl-lXCsor", "B1ezsbCiiB", "B1e9hGLn5r", "r1e_OXit9S", "ByebhFEatH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740715, 1573803319331, 1573802728795, 1573802394167, 1572786866446, 1572610927821, 1571797416692 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2107/Authors" ], [ "ICLR.cc/2020/Conference/Paper2107/Authors" ], [ "ICLR.cc/2020/Conference/Paper2107/Authors" ], [ "ICLR.cc/2020/Conference/Paper2107/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2107/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper2107/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper presents a novel RNN algorithm based on unfolding a reweighted L1-L1 minimization problem. Authors derive the generalization error bound which is tighter than existing methods.\\nAll reviewers appreciate the theoretical contributions of the paper, particularly the derivation of generalization error bounds. However, at a higher-level, the overall idea is incremental because RNN by unfolding L1-L1 minimization problem (Le+,2019) and reweighted L1 minimization (Candes+,2008) are both known techniques. The proposed method is essentially a simple combination of them and therefore the result seems somewhat obvious. Also, I agree with reviewers that some experiments are not deep enough to support the theory. For example, for over-parameterization (large model parameters) issue, one can compare the models with the same number of parameters and observe how they generalize. \\nOverall, this is the very borderline paper that provides a good theoretical contribution with limited conceptual novelty and empirical evidences. As a conclusion, I decided to recommend rejection but could be accepted if there is a room.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": [\"Thank you very much for your review and for recognizing the strengths of our work. Below, we would like to address your concerns:\", \"Regarding why reweighted l1 regularization is better than l1 regularization, we refer to the explanations by Cand\\u00e8s et al. (2008) and Luong et al. (2018). The l1 minimization is a relaxation of the L0 minimization problem for recovering a sparse signal. The l1 norm depends on the magnitude of the nonzero signal coefficients, while the L0 norm only counts the number of nonzero coefficients in the signal. Hence, as proposed by Cand\\u00e8s et al. (2008), the weights in the reweighted version of l1 minimization are designed to reduce the impact of the magnitude of the nonzero elements, thereby leading to a solution that approximates better the one obtained with L0 minimization.\", \"It is not clear to us what design differences exist between the l1-l1 RNN [Le et al., 2019] and a deep-unfolded l1-l1 network. We believe that if we omit the additional reweighted terms Z and g in our minimization problem (3), the problem boils down to l1-l1 minimization and the model resulting by applying deep unfolding will be the same as the l1-l1 RNN [Le et al., 2019]. We mention this in Section 2, in the text after Algorithm 1.\", \"We agree with your statement that our model size will increase when increasing the depth. 
However, the performance gain of the proposed model over the l1-l1-RNN model is not because of just adding extra model parameters in the latter. We start from the fundamental idea that by applying reweighting (Cande\\u0300s et al., 2008) the solution of Problem (3) is a more accurate sparse representation compared to the solution of the l1-l1 minimization problem in Le et al. (2019). The extra parameters in the Reweighted-RNN model are the result of performing deep unfolding of the reweighted minimization algorithm. The Reweighted-RNN model introduces the following innovations compared to the l1-l1-RNN model. Firstly, the Reweighted-RNN uses a different Z_l matrix per iteration/layer (l=1,...,d) to re-parameterize the update terms (see Step 6 in Algorithm 1). Due to Z_l (l=1,...,d), Reweighted-RNN has a different weight coupling per layer l compared to the l1-l1-RNN. Secondly, because of the learned set of vectors g_l (l=1,...,d), Reweighted-RNN applies a different proximal operator to each element u per iteration l. This translates to that the Reweighted-RNN applies a different nonlinearity to each activation in each layer, the form of which is learned from data. This is fundamentally different from the l1-l1-RNN model which applies the same nonlinear function to all activations in all layers. This is clarified in Section 2, in the text after Algorithm 1. Last but not least, the over-parameterization of Reweighted-RNN is supported by theory. The derived generalization error bounds (the first such bounds for deep RNN models and for deep RNNs designed by unfolding) show that the over-parameterization of the Reweighted-RNN helps to improve performance compared to other RNNs including deep unfolding ones.\"]}", "{\"title\": \"Response to Reviewer #5\", \"comment\": \"Thank you for the comments and suggestions on the manuscript. Please find below our responses to your points:\", \"major_points\": \"\", \"section_1\": \"Following this comment, we have revised the introduction and now mention the RNN model in the second paragraph. In addition, we have explicitly added Eq. (3) characterizing the considered RNN.\", \"section_2\": \"The vector g in Eq. (3) is our proposed reweighted parameter. The motivation is that by applying reweighting (Cande\\u0300s et al., 2008), the solution of Problem (3) is a more accurate sparse representation compared to the solution of the l1-l1 minimization problem in Le et al. (2019). After unfolding the reweighted minimization algorithms, g is also a trainable parameter in our RNN.\", \"section_3\": \"If we understand your comment correctly, you would like to know how we determined the network depth d for our architecture. d corresponds to the number of iterations in Algorithm 1, and we did not set a value for d but rather experimentally assessed our network with different network depths d (we refer to the experiments reported in Table 3). In case your comment refers to a better illustration of the proposed architecture, we have updated Figure 2 accordingly so as to show how the depth d of our network is developed.\", \"section_4\": [\"We agree that language understanding tasks are important applications of RNNs and should be considered when benchmarking a new RNN architecture. 
However, our minimization algorithm (which solves the reweighted l1-l1 minimization problem and which yields the proposed RNN model by deep unfolding) is formalized based on leveraging the specific structures present in video data (namely, the first l1 term for the sparsity in frame representation and the second term for the correlation of consecutive frame representations). Therefore, our model is better suited to applications in video. This does not mean that unfolded RNNs are not applicable to other types of data, but in such applications one would need to revise the minimization objective so as to accurately express the data structure. Motivated by this comment (and Question 3 of Reviewer 1), we report additional experiments of our RNN model in popular RNN tasks, namely, the pixel MNIST classification task, the adding task, and the copy task. We refer to Appendix E for further details.\", \"Indeed, in Table 3 we consider a stacked LSTM. Specifically, we stack all models (including LSTM) except unfolding-based ones in the same way as a stacked-RNN (i.e. replacing the vanilla RNN cell with the corresponding cell). Regarding the deep unfolding-based models, the underlying minimization algorithms determine the connections between the layers.\", \"Thank you for this comment. Adding bi-directional connections or attention mechanisms would indeed increase the effectiveness of an LSTM. However, we would argue that these additions could be applied to any RNN architecture (not only LSTM) to obtain better performance. In our experiments, we want to limit additional components in the benchmarked models (except those that stabilize training) so that we allow for a fair comparison between different RNN architectures.\"], \"appendix\": \"We thank the reviewer for this suggestion. We will plan to visualize these features in our subsequent work and also to focus further on the explainability aspects of the architecture.\", \"minor_points\": [\"In our experiments, the training time for Reweighted RNN with the default settings is 3,521 seconds, an increase in terms of time complexity compared to the baseline of l1-l1-RNN (Le et al. 2019) with 2,985 seconds. We wish to also refer to our answer to the 1st comment of Reviewer 1.\", \"Following this comment, we have clarified this aspect after Theorem 3.1 in Section 3, where the text reads as follows \\u201cThe generalization error in Eq. (14) is bounded by the Rademacher complexity, which depends on the training set S. If the Rademacher complexity is small, the network can be learned with a small generalization error.\\u201d\"]}", "{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for the comments and suggestions on the manuscript. Please find below our responses to your corresponding questions:\\n\\n1. The training time for the proposed Reweighted RNN model (with our default settings, namely, the compressed sensing rate of 0.2, d=3 hidden layers and h= 2^10 hidden units per layer) is 3,521 seconds, which is higher than that of the l1-l1-RNN (Le et al. 2019); for the same settings, the training time of l1-l1-RNN is 2,985 seconds. We note however that comparing computational times in our experiments is not accurately indicating complexity as execution times depend heavily on the implementation of the model. Specifically, we used the Tensorflow implementations provided by the authors of the Independent-RNN, Fast-RNN and Spectral RNN models. 
The rest of the models were implemented in Pytorch; among these models, the vanilla RNN, LSTM, and GRU cells are written in CuDNN (default Pytorch implementations), so they are significantly faster in training (294s, 799s, and 655s, respectively, with the default settings) than the other models. It is, however, worth noting that our reweighted RNN model uses significantly fewer trainable parameters (4.47M in the default settings) compared to the popular variants of RNN (the vanilla RNN: 5.58M, stacked LSTM: 21.48M, and stacked GRU: 16.18M).\\n\\n2. Thank you for this comment. Regarding the width of the network, the gain in performance is already quite minimal going from 2^11 to 2^12 neurons. Regarding the depth, we have trained a version of our network with 7 layers (with all other settings kept intact) and achieved a reconstruction performance of 38.5 dB in terms of PSNR. We can deduce that using a 6-layer version of the proposed Reweighted RNN model yields the best performance on Moving MNIST (with 8000 training samples).\\n\\n3. We agree with the reviewer that experiments on other datasets except for Moving MNIST would be needed to demonstrate the full potential of the proposed network. Due to time limitations, we are unfortunately not able to report further extensive experiments in this paper. Nevertheless, in the Appendix, we have added further experimental evaluations of our model in popular RNN tests, namely, the pixel MNIST classification task, the adding task, and the copy task (see Appendix E).\\n\\n4. At the moment, we are aware of the following limitations: (i) The extra trainable parameters present in Reweighted-RNN lead to an increase in the training time compared to the baseline l1-l1-RNN model (Le et al. 2019). (ii) Despite our goal of offering explainability in design, there are still several \\u201cblack-box\\u201d components in our network, including the optimal choices for the number of layers and number of hidden units per layer. So far, these are still determined by experiments. (iii) Regarding the theoretical aspects of the paper, the derived generalization bounds - while being the first for deep RNN models - still depend on the network depth and width. In effect, the current bound (see Eq. (15)) is in the order of the square root of the network depth d multiplied by the number of time steps T, and also depends on the logarithm of the number of hidden units h. When increasing the number of parameters and/or the number of time steps T, the bound would be increased following the increase of depth/width/the number of time steps.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a novel method to solve the sequential signal reconstruction problem. The method is based on the deep unfolding methods and incorporates the reweighting mechanism. Additionally, they derive the generalization error bound and show how their over-parameterized reweighting RNNs ensure good generalization. Lastly, the experiments on the task of video sequence reconstruction suggest the superior performance of the proposed method.\\n\\nI recommend the paper to be accepted for mainly two reasons. 
First, they derive a tighter generalization bound for deep RNNs; Second, the experiment results align with the theory and show the continuous improvements when increasing the depth of RNNs.\", \"questions\": \"1. How is the computation complexity of the proposed method when compared with other methods? Will the reweighting l1-l1 norm significantly increase the computation time?\\n2. The experiments show that increasing the depth and/or width of the networks yields better performance, however, is there a boundary for such performance gain? For example, if the depth continues increasing, will the proposed method suffer the similar problem as other methods (performance does not improve or even degrade)?\\n3. As the MOVING MNIST dataset is from a relatively simple and special domain, is it possible to reproduce the similar performance gain on other more realistic datasets?\\n4. Are there any known limitations of the proposed method?\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"Authors proposed a deep RNN via unfolding reweighted l1-l1 minimization, where reweighted l1-l1 minimization algorithms are applied to a video task.\\nOverall, the paper is well explained in a theoretical part and exhibits a good result compared with other conventional RNN methods in the experiment. In Section 3, authors formulate Rademacher complexities for both conventional and proposed method, which shows the generalization performance of the proposed method when d increases. And this is empirically highlighted in Table 3 in Section 4.\", \"major_points\": \"\", \"section_1\": [\"First part of the introduction can be confusing because Eq. (1) sounds like representing dictionary learning framework (plus DNN is immediately described after Eq. (1) instead of RNN) and RNN is not explicitly written. It should be clearly written and flow should be considered.\"], \"section_2\": [\"It is hard to get how parameter g in Eq. (3) derives.\"], \"section_3\": [\"How to build network depth d for the network? A figure should be required.\"], \"section_4\": [\"Even though previous papers (e.g., Wisdom et al. and Le et al.) just focus on single dataset like moving MNIST, I believe testing on language data is also quite important (this is a full paper and exhaustive experiments should be mandatory). For example, it may be good to use Penn TreeBank dataset to make a comparison.\", \"In Table 3, how did you set LSTM deeper? Is it a stacked LSTM?\", \"Existing RNN methods should include other variations of LSTM (in particular, SOTA methods are welcomed) such as bidirectional LSTM and LSTM with attention mechanism. It should be better to compare with these methods.\"], \"appendix\": [\"It would be helpful for readers to show interpretabilities of the model additionally. 
For example, visualizing features from each RNN model would be beneficial.\"], \"minor_points\": [\"After introduction of unfolding reweighted l1-l1 minimization, how did the computational cost increase compared to previous l1-l1 minimization?\", \"In Section3, for easiness to readers, it may be good to briefly summarize how does the predictor\\u2019s generalizability and Rademacher complexities relate.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"\", \"strength\": \"This paper proposes a new reweighted-RNN by unfolding a reweighted L1-L1 minimization problem. It develops an iterative algorithm to solve the reweighted L1-L1 minimization problem, where the soft-thresholding functions can be adaptively learned. This paper provides the generalization error bound for deep RNNs and shows that the proposed reweighted-RNN has a lower generalization error bound. In addition, the paper shows that the proposed algorithm can be applied to video-frame reconstruction and achieves favorable results against state-of-the-art methods. The paper is well organized, and the motivation is clear.\", \"weakness\": \"The effectiveness of the reweighted L1-L1 minimization method should be better explained and evaluated. It is not clear why the reweighted L1-L1 regularization is better than the L1-L1 regularization. In addition, the experimental evaluation does not support this claim well. The authors should compare the baseline method which uses the L1-L1 regularization in their framework instead of directly comparing the proposed algorithm with [Le et al., 2019] as there exist differences in the algorithm design. This is an important baseline. \\n\\nAs claimed by the authors, the proposed reweighted-RNN has different sets of {W_l;U_l} for each hidden layer. This will definitively increase the model size when the depth increases. The authors should clarify whether the performance gains due to the only use of large model parameters. \\n\\nOverall, this paper proposes an effective reweighted-RNN model based on the solver of a reweighted L1-L1 minimization. Theoretical analysis and experimental results are provided. I would be willing to increase the score if these problems are solved in the authors\\u2019 response.\"}" ] }
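The update described in the authors' responses above — a layer-specific matrix Z_l reparameterizing the update term, and a learned vector g_l inducing a different soft-threshold per element and per layer — can be sketched as a single unfolded iteration. The following is a minimal numpy sketch under assumed names and shapes (A, Z_l, g_l, and lam are hypothetical placeholders), not the authors' implementation:

```python
import numpy as np

def soft_threshold(u, theta):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def unfolded_reweighted_layer(h, x_t, A, Z_l, g_l, lam):
    # One unfolded layer l of a reweighted-ISTA-style update. A (hypothetical
    # measurement matrix) and Z_l (layer-specific reparameterization) play the
    # roles discussed above; lam * g_l gives a different threshold -- i.e., a
    # different learned nonlinearity -- per element and per layer.
    u = h - Z_l @ (A @ h - x_t)
    return soft_threshold(u, lam * g_l)
```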
HygFxxrFvB
Differentially Private Mixed-Type Data Generation For Unsupervised Learning
[ "Uthaipon Tantipongpipat", "Chris Waites", "Digvijay Boob", "Amaresh Siva", "Rachel Cummings" ]
In this work we introduce the DP-auto-GAN framework for synthetic data generation, which combines the low dimensional representation of autoencoders with the flexibility of GANs. This framework can be used to take in raw sensitive data, and privately train a model for generating synthetic data that should satisfy the same statistical properties as the original data. This learned model can be used to generate arbitrary amounts of publicly available synthetic data, which can then be freely shared due to the post-processing guarantees of differential privacy. Our framework is applicable to unlabeled \emph{mixed-type data}, which may include binary, categorical, and real-valued data. We implement this framework on both unlabeled binary data (MIMIC-III) and unlabeled mixed-type data (ADULT). We also introduce new metrics for evaluating the quality of synthetic mixed-type data, particularly in unsupervised settings.
[ "Differential privacy", "synthetic data", "private data generation", "mixed-type", "unsupervised learning", "autoencoder", "GAN", "private deep learning" ]
Reject
https://openreview.net/pdf?id=HygFxxrFvB
https://openreview.net/forum?id=HygFxxrFvB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "-is9Sw_9Qr", "Hyxu1aV2iH", "B1l4NEaecS", "S1gIouZsYr", "SJl8-OvmKS" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "comment" ], "note_created": [ 1576798740685, 1573829856144, 1572029483642, 1571653790245, 1571153917885 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2106/Authors" ], [ "ICLR.cc/2020/Conference/Paper2106/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2106/AnonReviewer3" ], [ "~Lei_Xu4" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This provides a new method, called DPAutoGAN, for the problem of differentially private synthetic generation. The method uses private auto-encoder to reduce the dimension of the data, and apply private GAN on the latent space. The reviewers think that there is not sufficient justification for why this is a good approach for synthetic generation. They also think that the presentation is not ready for publication.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviews\", \"comment\": \"We thank the reviewers for their time and comments. We have made a careful editing pass on the paper to make the following improvements at the reviewers' suggestion:\\n1. Grammatical editing -- we caught many typos including those pointed out the the reviewers\\n2. Comparison to existing work -- we added a more explicit comparison to other works in this area, namely Xie et al. 2018, (above Figure 3) and Frigerio et al. 2019 (Table 2). We believe this will help highlight the performance improvements achieved by our methods.\\n3. Fixed the typo in Section 3 where we had swapped encoder/decoder when discussing the noise addition procedure.\\n4. Added a discussion of Figure 3, which was previously missing\\n5. Added additional details in the Appendix about the implementation of our results.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a new algorithm for synthetic data generation under differential privacy. The algorithmic architecture combines autoencoder and GAN in a way that it only needs to add the DP-SGD noise to the decoder of the autoencoder and the discriminator of the GAN. This seems to be a good idea to be explored further.\\n\\nThe authors claimed that the proposed new evaluation metrics are novel contributions of the paper but there is no discussion on why they are good metrics for evaluating the quality of synthetic datasets nor which metric should be used in what scenarios. \\n\\nIt is unclear how the experimental results (Figure 2, 3, 4 and Table 2, 3) are interpreted. The authors mentioned comparison with DP-GAN but it is not marked in the figures the performance of DP-GAN and how its results compared with DP-auto-GAN. Please clearly state what each figure means and why the results are significant.\\n\\nI wonder if it is possible to have a version of GAN that also predicts the labels of data so you can use classification task as evaluation metrics, which might be easier and more interpretable. \\n\\nIn Section 3.1, \\u201cnot adding noise to the decoder\\u201d should be encoder.\\nIn Section 3.3, \\u201cdo not add noise to decoder\\u201d should be encoder.\\n\\nThe presentation of the paper needs to be improved. 
There are too many typos and grammar mistakes. The labels of figures in the experiment section are too small. The paper does not have a conclusion section.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper considers synthetic data generation using deep GANs and autoencoders that can be shared for model training.\\nThe authors can generate mixed-type data and consider some additional metrics that allow the evaluation of the quality of synthetic data.\\n\\nWhile the problem raised in the paper is interesting, and there some insights on what kind of metrics one should use, the article now lacks conclusions and discussion of obtained results. \\nIn particular, there is a significant number of misprints and inconsistencies here and there (see more on this below).\\nMoreover, the experiments are irreproducible e.g. I could not found information about the value of reduced dimension q in the description of the experiments.\\nAlso, there is no comparison with previous approaches (e.g. [1, 2]), only results about the proposed one are presented.\", \"the_paper_will_also_benefit_from_additional_rounds_of_proofreading\": \"1. The algorithm $\\\\mathcal{M}$ is not defined. The range $Range(\\\\mathcal{M})$ is not defined.\\n1. a mixture of Gaussian distribution -> a\\nmixture of Gaussian distributions\\n2. comepare -> compare, matrics -> metrics, deceases -> decreases\\n4. should to minimize -> should minimize\\n5. In the formula \\\"(true) loss function\\\" subscript \\\"i\\\" should be dropped, as we talk about $x \\\\sim Z$, not $x_i$ here\\n6. Articles in many places can be improved (finding good autoencoder -> finding a good autoencoder)\\n7. It is possible, that in the paragraph after the formula (2) \\\"encoder\\\" should be replaced by \\\"decoder\\\".\\n8. the total number of samples the real\\ndata - > the total number of available real data samples \\n9. The axis labels are too small for Figure 2\\n10. No reference to Figure 3 in the text of the paper. For Figure 3 the most left plot has for some reason a smaller number of points. Why?\\n11. The selection of classifiers is not discussed. I.e. why in some cases authors use random forests (5.2), but in other logistic regression (5.1)? Also in my opinion mixing of R2 and F1 scores in one plot can be confusing.\\n12. No conclusion in the end\\n\\n\\n[1.] Xu et al. Modeling tabular data using conditional GAN. NeurIPS 2019\\n[2.] S.K.Lim et al. DOPING: Generative Data Augmentation for Unsupervised Anomaly Detection with GAN. IEEE ICDM 2019.\"}", "{\"comment\": \"The DP-auto-GAN model is a combination of MedGAN (Choi et al., 2017) and DP-SGD (Abadi et al. 2016). The method is neat and works well on MIMIC and ADULT datasets.\\n\\nBut I think the authors should compare DP-auto-GAN with multiple baselines, especially statistical methods. For example, on ADULT dataset, PrivBayes (Zhang et al., 2017) can achieve 80% accuracy with $\\\\eps = 1.6$. Please clarify if DP-auto-GAN can outperform PrivBayes, or there are some differences in settings.\", \"please_consider_citing_the_following_papers\": \"Park et al. Data synthesis based on generative adversarial networks. VLDB 2018\\nXu et al. Modeling tabular data using conditional GAN. NeurIPS 2019\\n\\n* In section 3.1, \\\"by not adding noise to the decoder\\\". 
I think it should be \\\"encoder\\\".\", \"title\": \"Comparing with existing methods\"}" ] }
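The DP-SGD primitive (Abadi et al., 2016) referenced throughout the record above — applied in this framework only to the autoencoder's decoder and the GAN's discriminator — amounts to clipping each per-example gradient and adding Gaussian noise before the parameter update. Below is a minimal numpy sketch; all argument names are assumptions for illustration, not the paper's API:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    # Clip each per-example gradient to l2 norm at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    batch = len(clipped)
    # Average the clipped gradients and add Gaussian noise calibrated to the
    # clipping norm: equivalent to summing, adding N(0, (sigma * C)^2) noise,
    # then dividing by the batch size.
    noisy_grad = (np.mean(clipped, axis=0)
                  + rng.normal(0.0, noise_multiplier * clip_norm / batch,
                               size=params.shape))
    return params - lr * noisy_grad
```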
SkeuexBtDr
Learning from Rules Generalizing Labeled Exemplars
[ "Abhijeet Awasthi", "Sabyasachi Ghosh", "Rasna Goyal", "Sunita Sarawagi" ]
In many applications labeled data is not readily available, and needs to be collected via painstaking human supervision. We propose a rule-exemplar method for collecting human supervision to combine the efficiency of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning. We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables. The denoised rules and trained model are used jointly for inference. Empirical evaluation on five different tasks shows that (1) our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and (2) the coupled rule-exemplar supervision is effective in denoising rules.
[ "Learning from Rules", "Learning from limited labeled data", "Weakly Supervised Learning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=SkeuexBtDr
https://openreview.net/forum?id=SkeuexBtDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "WTOjR_bWd", "BkxHJx_3ir", "BygrBzoKsB", "SygQVgiYoB", "HklrTqcYir", "ByxNcIqtsB", "SyebtskccS", "Byg7vz-pFr", "Bkxefw05KS", "SkleMEdwtB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1576798740656, 1573842908931, 1573659197136, 1573658666850, 1573657277491, 1573656204167, 1572629368763, 1571783258830, 1571641095870, 1571419143878 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2105/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2105/Authors" ], [ "ICLR.cc/2020/Conference/Paper2105/Authors" ], [ "ICLR.cc/2020/Conference/Paper2105/Authors" ], [ "ICLR.cc/2020/Conference/Paper2105/Authors" ], [ "ICLR.cc/2020/Conference/Paper2105/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2105/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2105/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2105/Authors" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper addresses the problem of costly human supervision for training supervised learning methods.\\nThe authors propose a joint approach for more effectively collecting supervision data from humans, by extracting rules and their exemplars, and a model for training on this data.\\nThey demonstrate the effectiveness of their approach on multiple datasets by comparing to a range of baselines.\\n\\nBased on the reviews and my own reading I recommend to accept this paper.\\nThe approach makes intuitively a lot of sense and is well explained.\\nThe experimental results are convincing.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for answering the questions.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for constructive and insightful feedback on our work. We have addressed each reviewer\\u2019s comments/questions individually.\\n\\nBased on the reviewers\\u2019 feedback, we have made the following changes in the revised version:\\n1. Added missing captions to figures and tables\\n2. Modified section numbering in supplementary\\n3. Fixed notational and typing errors\\n4. Combined the tables of hyperparameters in supplementary for better appearance\"}", "{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for providing valuable feedback on our work. We have addressed your comments/questions below:\\n\\n> Some remarks on how the paper could become stronger: The type of problem that can be assessed with the proposed method seems to be fairly specific: most tasks studied are classification of natural language utterances. That is a natural class of tasks, since it is easy to imagine how labellers can formulate rules. However, it would have been very interesting if the authors had found ways to allow for more diversity here. In general, I have the impression that there are more interesting ideas and results to be found in the direction explored by this paper - what about, e.g., allowing the classifier to add rules of its own?\\n\\nExtending this work to more diverse tasks is our goal in future research effort. 
Allowing the classifier to add rules of its own seems an interesting direction.\\n\\n> The paper would benefit from some general editing with regards to appearance; for example, the supplementary material sections continue the regular section numbering, instead of having their own; the images are missing captions, and sometimes have somewhat unorthodox axis tick labelling.\\n\\nWe have fixed these in the revision.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for providing valuable feedback on our work. We have addressed your comments/questions below:\\n\\n> Minor Problems\\n\\nWe have fixed the notations and other typing errors in the revised version.\\n\\n> Since each rule can be regarded as experts or weak learners, how is this work related to learning strong learners from weak learners (boosting/ensemble)?\\n\\nOne big difference is that most rules cover only a small number of examples, and for a given example only a small number of rules cover them (Table 1). This is unlike the setting of typical \\u201cweak to strong learners\\u201d framework where all weak learners predict labels for all examples. \\n\\n> Is it possible that the algorithm can incorporate more information of the rules, for example, the structure of the logical formulas?\\n\\nIn this work, we wanted to treat rules as black-box functions. Exploiting the logical formulas should help but will entail more application-specific encodings. \\n\\n> Is it possible to generalize the idea to RL?\\n\\nYes, our method can be extended to RL in the imitation learning setting in order to reduce the amount of human-generated data used in imitation learning. A rule in this scenario will be a partial policy, i.e., it will map some states to an action but may not cover other states. During training, the state space could either be sampled completely randomly or by following the policy formed by applying the current rule weights to each rule, or a combination.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for providing valuable feedback on our work. We have addressed your comments/questions below:\\n\\n> First, although the intuition of this model makes a lot of sense to me, the construction of the loss function is quite heuristic, with a lot of terms simply summing together, making it hard to judge which components are most important for the final results. A more principled and integrated framework like EM could be more convincing to me.\\n \\nOur first attempt was indeed a principled EM framework which we called the Posterior regularization (PR) method that we derive in detail in the Supplementary. Unfortunately, we found that this EM formulation performed worse than the only-L baseline in two of the five datasets (Table 2). That is what led us to our current non-EM formulation which provided higher accuracy while being (frustratingly :) simpler. Also, it was more robust to hyper-parameter selection and initializations than the EM formulation. Finally, if you compare our ImplyLoss objective (Eq 5) with EM\\u2019s objective (Eq 14), the two labeled loss terms are identical. The only difference is in the term involving unlabeled data. In EM, the KL term biases the model to match the estimated distribution by the teacher which is changing and could be incorrect. In contrast, the ImplyLoss does not introduce any such bias as we discuss in Section 2.1 (below Table 1). 
\\n\\n> Second, it seems the unlabelled data is only used in the causal constraint term (the last term in Eqn 5) and it is controlled by a coefficient \\\\gamma. It is a bit unclear to me whether the unlabelled data is fully utilized while it only constraints the causal relation, as one can also use labeled data for constraining the causal relation. \\n\\nYes, unlabelled data is only used in the last term of Eqn 5. This term allows full utilization of the correctly generalized unlabelled data as follows: When P(r_j =1|x_i) is close to one, the causal term reduces to log-likelihood of labeling x_i as l_j, allowing us to treat x_i as a labeled instance. The same causal term allows us to ignore wrongly generalized instances as follows: when P(r_j =1|x_i) is close to zero, the gradient on the label parameters (\\\\theta) is vanishingly small.\\n \\n> Also, why not include labeled data for this constraint regularization?\\n\\nEmpirically, we found no difference between including labeled data in the causal term or not.\\nThe causal term when fitted with the clean labels for \\u2018y_i\\u2019 and r_{ji} (when available) reduce to the likelihood term LL(\\\\theta). We did not want to double count labeled instances and distort the training distribution. \\n\\n> Another minor question is after the two networks are trained, will you only use the learned classifier for test data, or, do you also use the conditional distribution in the testing phase and compute an expectation of the predicted class? and why?\\n\\nWe only use the classification network (parametrized by \\\\theta) for test data. The conditional distribution is only applicable when a rule covers the instance. However, an example in the test set may not be covered by any of the rules (For one of our datasets coverage of rules is only 14% ). We also tried to predict the 'y' that minimizes a joint score over y and r_ji conditionals. But in practice we did not see much difference.\\n\\n> Also, what's the purpose of section 6 in the appendix?\\n\\nSection 6 describes our attempt of an EM-based framework (Algorithm 1, Pg. 13) for this task. We compare implication loss with this alternative approach of jointly learning the classifier and rule network by imposing the same causal constraints in a different way.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper addresses the problem that labelled data is often unavailable in the quantities required to train effective models. It deals with classification problems, and proposes a method to obtain more (but weaker) labels data with minimal involvement from human labellers, by asking them to generalize their labelling decisions into rules and then learning restrictions on those rules to avoid learning incorrectly generalized labels. The motivating observation is that human labellers are often able to make such generalizations in much less time than it would take them to apply that rule to a large dataset themselves. This is an interesting idea, especially for cases where labelling capacity is limited. 
The point being made about the labelling noise not being random in this situation is an interesting one - it might be worth exploring this notion further on its own also in contexts where the source of the noise is unknown.\\n\\nThe presentation of the implementation the authors choose for their proposed approach is clear, and the implementation is sensible. The experimental section includes comparisons to a number of alternative methods, and the authors find that their method outperforms all others, including recent methods for combining (noisy) rule-based labels and (clean) human-sourced labels.\\n\\nI would argue for accepting this paper. It studies an interesting question, which if answered has the potential to make access to machine learning solutions to certain types of problem significantly cheaper and therefore more widespread. The experiments are well-chosen and show that, depending on the data available and the task, significant gains can be made using the proposed method.\", \"some_remarks_on_how_the_paper_could_become_stronger\": \"The type of problem that can be assessed with the proposed method seems to be fairly specific: most tasks studied are classification of natural language utterances. That is a natural class of tasks, since it is easy to imagine how labellers can formulate rules. However, it would have been very interesting if the authors had found ways to allow for more diversity here. In general, I have the impression that there are more interesting ideas and results to be found in the direction explored by this paper - what about, e.g., allowing the classifier to add rules of its own?\\n\\nThe paper would benefit from some general editing with regards to appearance; for example, the supplementary material sections continue the regular section numbering, instead of having their own; the images are missing captions, and sometimes have somewhat unorthodox axis tick labelling.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel semi-supervised learning paradigm where the algorithm learns from both clean instance-level labels and noisy rule-level labels, and also a simple but effective algorithm as solution. The proposed algorithm employs a set of latent coverage variables to bridge two kinds of supervisions and uses a soft causal constraint on the coverage variables to denoise the noisy labels. Empirically the paper demonstrates the effectiveness of the proposed algorithm with consistent improvements over several baselines on a wide range of classification tasks.\\n\\nThe idea of using macro-level noisy labels as part of the supervision is novel, and it could potentially trigger a paradigm shift on many research areas in machine learning. The proposed methodology is clean but effective, with extensive experimental support. Therefore I vote for accepting this submission.\\n\\nMinor problems\\n\\n(1) Abuse of notation \\\\phi in section 2.\\n(2) \\\"... 
from traing the classifier ...\\\" in page 4.\\n\\n\\nMore (further) questions\\n\\n(1) Since each rule can be regarded as experts or weak learners, how is this work related to learning strong learners from weak learners (boosting/ensemble)?\\n(2) Is it possible that the algorithm can incorporate more information of the rules, for example, the structure of the logical formulas?\\n(3) Is it possible to generalize the idea to RL?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In case of a lack of labeled data, human-designed rules can be used to label the unlabelled data. This paper proposes a better rule-based labeling method by restricting the coverage of the rule, which is based on the assumption that the rules can be applied to a local region but can not be 'over-generalized' to the whole sample space. The coverage of the rule is represented by a conditional distribution, which is parameterized as a neural network and jointly learned with the classifier network.\\n\\nI think this paper is tackling an important problem in machine learning, and the proposed idea is novel and interesting. I vote for weak acceptance because there are still some technical points that are not well-addressed enough:\\n\\nFirst, although the intuition of this model makes a lot of sense to me, the construction of the loss function is quite heuristic, with a lot of terms simply summing together, making it hard to judge which components are most important for the final results. A more principled and integrated framework like EM could be more convincing to me.\\n\\nSecond, it seems the unlabelled data is only used in the causal constraint term (the last term in Eqn 5) and it is controlled by a coefficient \\\\gamma. It is a bit unclear to me whether the unlabelled data is fully utilized while it only constraints the causal relation, as one can also use labeled data for constraining the causal relation. Also, why not include labeled data for this constraint regularization?\\n\\nAnother minor question is after the two networks are trained, will you only use the learned classifier for test data, or, do you also use the conditional distribution in the testing phase and compute an expectation of the predicted class? and why?\\n\\nAlso, what's the purpose of section 6 in the appendix?\\n\\nI general I think the idea of learning a conditional distribution to constrain the use of rules is an interesting and novel idea. The paper can be further improved if the algorithm can be more principled.\\n\\n\\n---- after reading the response ---\\n\\nThanks for answering the questions. I believe some of these explanations can be added to the final version to improve clarity. My score does not change, but overall I advocate to accept this paper.\"}", "{\"comment\": \"https://github.com/iclrLFRGLE/iclrLFRGLE\", \"title\": \"Link to anonymized code\"}" ] }
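One loss with exactly the two limiting behaviors described in the authors' response above — reducing to the log-likelihood of label l_j when the rule network is confident the rule applies, and contributing a vanishing classifier gradient when it is not — is the soft implication below. This is a hedged numpy sketch consistent with those limits, not necessarily the paper's exact ImplyLoss:

```python
import numpy as np

def implication_loss(p_rule, p_label, eps=1e-12):
    # Soft form of the implication (r_ji = 1) -> (y_i = l_j):
    #   p_rule -> 1 : loss -> -log(p_label), so x_i acts like a labeled example
    #   p_rule -> 0 : loss -> 0, so gradients on the classifier vanish
    # p_rule  = P(r_ji = 1 | x_i) from the rule (coverage) network
    # p_label = P(y_i = l_j | x_i) from the classification network
    return -np.log(1.0 - p_rule * (1.0 - p_label) + eps)
```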
rkxdexBYPB
Group-Transformer: Towards A Lightweight Character-level Language Model
[ "Sungrae Park", "Geewook Kim", "Junyeop Lee", "Junbum Cha", "Ji-Hoon Kim Hwalsuk Lee" ]
Character-level language modeling is an essential but challenging task in Natural Language Processing. Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance. However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources. In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements for a Transformer, a promising method for modeling sequences with long-term dependencies. Specifically, the proposed method partitions linear operations to reduce the number of parameters and computational cost. As a result, Group-Transformer only uses 18.2\% of parameters compared to the best performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8. When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance. The implementation code will be available.
[ "Transformer", "Lightweight model", "Language Modeling", "Character-level language modeling" ]
Reject
https://openreview.net/pdf?id=rkxdexBYPB
https://openreview.net/forum?id=rkxdexBYPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "5C4cMWL6ve", "rJxrktS3oS", "BJlvg1B2ir", "SygU4SVhsB", "B1l51SNniS", "ByeW0XEhoH", "ByxM6fN2or", "ryloNM9diB", "B1g4xYYOoH", "Hyx8TztusS", "Sylk5ztuiH", "r1xkV0ATtB", "Syx8D1tKFr", "B1e_weqbKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740626, 1573832924877, 1573830383287, 1573827886359, 1573827810150, 1573827528702, 1573827257602, 1573589554568, 1573587179697, 1573585598389, 1573585542799, 1571839527394, 1571553117976, 1571033184111 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/Authors" ], [ "ICLR.cc/2020/Conference/Paper2104/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2104/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2104/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes using a lightweight alternative to Transformer self-attention called Group-Transformer. This is proposed in order to overcome difficulties in modelling long-distance dependencies in character level language modelling. They take inspiration from work on group convolutions. They experiment on two large-scale char-level LM datasets which show positive results, but experiments on word level tasks fail to show benefits. I think that this work, though promising, is still somewhat incremental and has not shown to be widely applicable, and therefore I recommend that it is not accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Answers to the quick questions\", \"comment\": \"Thank you for your response!\\n\\n1. What do you mean by \\\"a common method used to compare model efficiencies (Bai et al., NIPS-19)\\\". Can you briefly describe it?\\n\\n- As you know, the point of this paper is to see if our methodology is efficient at making the lightweight character-level language model. Since there is no lightweight model in the field of the character-level language model, it was comparable to a large size model (Transformer XL). In order to know the efficiency of converting to a lightweight model in the process, it is reasonable to reduce the size of the model in various ways without changing the architecture of the large size model and compare it with our current methodology. The experimental model of the referred paper proceeds similarly to ours. If we scale-up our proposed model to the same parameter size of the original model, we need several ablation tests for various scale-up models as well as consider some risks about overfitting due to increased hidden dimension, and these results will be out of points the context of our paper. I apologize if you don't like the expression \\\"a common method\\\".\\n\\n2. In this new figure 2, the best baseline under 3M parameters has a performance around 1.36bpc, but table 2 has a baseline of 1.336 bpc with 2.9M parameters. 
What I'm missing here?\\n\\n- Table 2 shows the baseline with the same number of parameters, and Figure 2 compares three scale-down approaches. The model that you point out in Table 2 differs from the best baseline under 3M parameters in the new figure. Specifically, the two models have different hidden dimensions, 152 for the former and 144 for the later.\\n\\n3. You said \\\"multi-head attention creates multi-attention scores within the same input information\\\", but so does your approach too. It creates multiple version of the input information, which essentially multiplies the number of attention scores. This is why I asked for comparisons with a baseline with more heads, which I don't find in the updated paper.\\n\\n- In our model, the fixed number of multi-heads, $H_{model}=8$. For example, if the number of groups is four, $G=4$, each split input state will have two isolated heads, $H=H_{model}/G=2$. This ensures that the entire connection between input and multi-head will always match the total number of heads, regardless of the number of groups, and the same number of multi-head will not have the effect of more increasing the number of multi-head due to multiple version of the input.\"}", "{\"title\": \"few quick questions\", \"comment\": [\"What do you mean by \\\"a common method used to compare model efficiencies (Bai et al., NIPS-19)\\\". Can you briefly describe it?\", \"In this new figure 2, the best baseline under 3M parameters has a performance around 1.36bpc, but table 2 has a baseline of 1.336 bpc with 2.9M parameters. What I'm missing here?\", \"You said \\\"multi-head attention creates multi-attention scores within the same input information\\\", but so does your approach too. It creates multiple version of the input information, which essentially multiplies the number of attention scores. This is why I asked for comparisons with a baseline with more heads, which I don't find in the updated paper.\"]}", "{\"title\": \"Supplementary for 1. and 3.\", \"comment\": \"1. We conducted an additional experiment supporting the response to m1. In the attention module, we applied group operations on query, key, and value individually and evaluated all combinations (See Appendix F). As we mentioned earlier, more than one component identified with group operations only slightly reduce the number of parameters, but with a significant drop in performance. This experiment supports the previous response to m1.\\n\\n3. As we mentioned, we conducted additional experiments on the number of groups (See Appendix E). Model variations are categorized into [6M-7M], [4M-5M], and [2M-4M], according to the number of parameters. In these experiments, Group-Transformer outperforms the baselines in all parameter size categories. Interestingly, the 4 Group-Transformer performs better than others when the number of parameters is over 4M. However, the best performer is changed when the number of parameters becomes below 3M.\"}", "{\"title\": \"Supplementary for the response to Q1.\", \"comment\": \"The experiment comparing the number of groups is finished (See Appendix E). Model variations are categorized into [6M-7M], [4M-5M], and [2M-4M], according to the number of parameters. In these experiments, Group-Transformer outperforms the baselines in all parameter size categories. Interestingly, the 4 Group-Transformer performs better than others when the number of parameters is over 4M. 
However, the best performer is changed when the number of parameters becomes below 3M.\"}", "{\"title\": \"Major changes of the paper\", \"comment\": \"We are very excited to have been given the opportunity to revise our manuscript in ICLR2020 OpenReview. We carefully considered those offered by the three reviewers. Herein, we explain how we revised the paper based on those comments and recommendations.\\n\\n1. Comparison with the baseline models (Improved Table 2 and Figure 2)\\nWe provide additional baseline models to be compared to our models under the same number of parameters. In the previous version, the baseline was set by reducing the number of layers or the hidden dimension size into certain values. In order to provide clearer comparisons, we identified the baseline models holding the same number of parameters. Additionally, we combined the previous results (prev.Table 2 and prev.Figure 3) into a single Figure that compares our grouping method with other scale-down methods (adjusting the model hyperparameters). The updated Table 2 and Figure 2 show the effectiveness of our approach more clearly. \\n\\n2. The 8 Group-Transformer (Table 1, Table 2, Table 3)\\nThrough this paper, we added the performances of ``8 Group-Transformer``. Since our baseline holds 8 heads at the multi-head attention module, each group in the ``8 Group-Transformer`` has a single head at each group attention module. The model has a much lower number of parameters, but shows a large margin of performance degradation, even though the model still shows better performance than baselines. \\n\\n3. Description of the resource requirement (Revised Section 3.4)\\nWe appreciate the valuable comments about Section 3.4. We recognized an issue in the section and revised the whole section. \\n\\n4. Three additional experiments (Appendix D, Appendix E, Appendix F)\\nTo respond to the individual comments, we conducted additional experiments and found that the results were worth to be shared with all readers. While appreciating the reviewers\\u2019 comments, we added the extra results in Appendix. \\n(Appendix D) Experiments on word-level language models\\n(Appendix E) Ablation study: the number of groups\\n(Appendix F) Ablation study: group operations on query, key, and value.\"}", "{\"title\": \"Supplementary for m1\", \"comment\": \"m1. (Ablation study: group operations on the components of the attention module)\\nWe conducted an additional experiment supporting the response to m1. In the attention module, we applied group operations on query, key, and value individually and evaluated all combinations (See Appendix F). As we mentioned earlier, more than one component identified with group operations only slightly reduce the number of parameters, but with a significant drop in performance. This experiment supports the previous response to m1.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"First of all, we thank you for your valuable review. One of our main motivations was to provide what happens if the group strategy (popularly used in vision domain) is applied to the Transformer. The character-level language modeling task requires a lightweight model without any additional consideration on the embedding parameters, so we tested the group strategy on the task. The followings are the responses to your comments.\\n\\n==== Responses to your major comments ====\\n\\n1. 
(About applying group strategy on the key and value) \\nInvestigating all the grouping methods for each possible case revealed that only the query was grouped for the best performance on the toy task, and there was a significant performance reduction if additional grouping was applied to the key and value. We also observed similar performance when applying a single grouping method to key or value. Nevertheless, we took into account the scalability of language models for models with encoder and decoder, such as sequence-to-sequence models, so we chose query as the application of group methods. That is, from the decoder point of view of the S2S model, key and value can be defined in a different source domain (encoder), whereas query always has the same modality (only decoder path). \\n\\n2. (Problems on section 3.4) \\nBecause the group strategy was not applied on the key and value, the big O operation should be changed to (2+2/G)*O(D_{model}^2) for group attention and (3/G)*O(D_{model}*\\\\bar{D}_{model}), where \\\\bar{D}_{model} is the bottleneck dimension. We will more clearly specify the resource requirement in Section 3.4. \\n\\n3. (About comparison on the number of groups under the same parameters) \\nThank you for your experimental suggestion. In our internal experiment under the same numbers of parameters, the 4 group model was better than 2 and 8 group models when the hidden dimension is relatively bigger, but the 2 group was the best when the hidden dimension is small. We agree that the analysis of the number of groups under the same parameters will improve our paper. We made a plan for the additional experiment and the results will be added in the paper. \\n\\n4. (FLOPs comparison from the LSTM model) \\nWe chose LSTM 1800 units by Mujika et al., 2017 that provides 1.40 bpc on enwik8 only with 14M parameters, which is the lowest number among the LSTM based models. The LSTM architecture used three stacked LSTM with 1800 hidden dimension and its FLOPs is 79.7B when generating a 512-length sequence. The FLOPs is about 8 times larger than Transformer-XL 9L in Table 2. This analysis reveals that LSTM uses many parameters due to its large hidden dimension and the number of FLOPs is also high for the same reason.\\n\\nMujika et al., Fast-slow recurrent neural networks, NIPS-17.\\n\\n5. (On a word-level language model task - wt103) \\nThank you for your recommendation to compare our models under the experimental settings of Bai et al. As you suggested, we conducted additional experiments on WT103 by restricting the number of model parameters to be close to 4.5M. Since we did not find the experimental settings of the paper you mentioned, we performed experiments with various settings to make the scale as similar as possible. The results show that Group transformer provides promising improvements, and we updated the results in Appendix D. \\n\\nBai et al., Deep Equilibrium Models, NIPS-19.\\n\\n==== Responses to your minor comments ====\\n\\nm6-10. (typos and clearer description) We really thank you for your comments. We fixed the typo and fixed the description. \\n\\nm6. (Large size group transformer) As you know well, training a large size character-level language transformer takes quite a while. We are currently training the model in your proposed size, and we will try to inform you as soon as the training is over.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your valuable review. Your considerate suggestions improve our paper. 
The following are our responses to your comments. To address your concerns, we had to run a few more experiments, and we are sorry that the reply is late, given that character-level experiments take a long time.\n\n1. (About applications to other benchmarks) \nWe agree that the Group Transformer can be applied to other benchmark domains. However, a lightweight character-level LM is more likely to be applied in a real-time environment, and our proposed method focuses on observing the impact on the model structure rather than on the representation efficiency of the embedding layer. For this reason, we set character-level LM as the main task of this paper. As you noted, we reported a simple application result on word-level LM in Appendix D, but we recognize that it was difficult to compare the results against the baselines. To show a clearer comparison, we conducted various experiments on WT103 by restricting the number of model parameters to be close to 4.5M (please check the updated Appendix D). Our grouping approach shows promising improvements over multiple baselines.\n\n2. (About adding the 1-Group Transformer to Table 1) \nThank you for your suggestion. The comparison with the \u201c1-Group Transformer without inter-group interaction\u201d was already in Table 2, but we agree that the model should be added to Table 1. We fixed Table 1 and the corresponding descriptions in Section 4.2. \n\n3. (About Appendix C) \nAppendix C conveys important comparisons between simple parameter reduction methods, but we found that the figure was a little confusing, as you mentioned. We conducted additional experiments on the baseline methods, improved the figure with the exact numbers, and added it to the main paper (see Section 4.3). We thank you for your valuable contribution to improving the paper. \n\n==== Response to your additional questions ====\n\nQ1. (About the number of groups) \nAs the number of groups increases, performance degradation increases, but the number of parameters and the computational cost decrease. The degree of performance degradation is highly related to the hidden dimension. In our experiments, applying the group strategy to a wider Transformer (a high-dimensional hidden space) shows only minor performance degradation even as the number of groups increases. However, applying it to a thin Transformer (a low-dimensional hidden space) causes major performance degradation as the number of groups increases. We hope to provide the results of this ablation study, but some sub-experiments are not finished yet, so the results will be added to the paper later.\n\nQ2. (Comparison with the Quaternion Transformer) \nThe Quaternion Transformer (QT) seems to factorize the transformer network in a similar way to the Group Transformer (GT). However, GT handles the connections of the factorized embedding components independently, while QT uses a combination of factorized embedding component connections.\"}", "{\"title\": \"Response to Reviewer 3's concerns (part 1)\", \"comment\": \"First of all, thank you for your contribution to the conference. We are happy that you agree with the importance of a lightweight transformer for real-world applications. To address your concerns, we had to run a few more experiments, and we are sorry that the reply is late, given that character-level experiments take a long time.\n\n==== To major concerns ====\n\n1-1. (about the baseline transformers) \nWe are sorry that we could not fully understand your point in the first question. 
First of all, the Transformer XL we compared with is not a vanilla Transformer. The point of our paper is to present an efficient methodology for converting the Transformer XL into a lightweight model from a structural point of view, so we do not understand why it should be compared with the current SOTA. In fact, our methodology can be applied to any model of the Transformer family. We chose Transformer XL because it reports the best performance on character-level language modeling tasks among peer-reviewed papers.\n\nRegarding the fairness of the model comparison, to the best of our knowledge, there is no reported baseline transformer that can be compared at a small scale under 10M parameters. In this situation, our comparison method is not a simple performance comparison, but a common method used to compare model efficiencies (Bai et al., NIPS-19). Also, as shown in Figure 2 of the updated pdf file (old pdf version: Appendix C figure), our group strategy is clearly effective, so we can hardly agree with the term \u201cmarginally better.\u201d\n\nWe set baselines for each purpose in each section. The original version of the Transformer XL is shown in Table 1 and fully compared with our proposed models. Our purpose in Table 2 is to show the effectiveness of the group strategy, so we reduce the parameter size of the original Transformer model to a comparable size and then demonstrate the effectiveness of our method through comparisons with various variants. From this point of view, we think our baseline, which is marked as \u201cBase\u201d in Table 2, is fair. \n\nBai et al., Deep Equilibrium Models, NIPS-19.\n\n1-2. (about a fair comparison in Table 2) \nAll experiments in Table 2 were conducted under the same settings, except for the hyper-parameters directly affecting the parameter size, following Dai et al. (https://github.com/kimiyoung/transformer-xl). As we mentioned in the previous answer, we decided that a direct comparison with the Transformer XL, which can be seen as the baseline model, was not fair, for example, because of the number of parameters. So, in order to demonstrate the effectiveness of our method at a similar number of parameters, we compared against original Transformer XL models whose parameters were similarly reduced in various ways without changing the architectural concept (please check the updated Table 2). Additionally, the results of the various parameter-reduced Transformer XL models had already been reported in the Appendix C results we submitted. To clarify the paper's argument, we will move this figure to Section 4.3 and show the results compared to the group strategy. This will make it clear that our methodology is effective (see Figure 2 in the current version).\n\nDai et al., Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context, ACL-19 \n \n1-3. (advanced methods for compressing a model) We agree that there are advanced methods such as weight sharing or knowledge distillation, but these methodologies are beyond the thesis of this paper. Our primary focus lies in scaling down the network parameters from the perspective of network design. \n\n2-1. (about the relationship between the original attention module and the group attention) \nThe main difference is that multi-head attention creates multiple attention scores over the same input information, whereas the group strategy splits the capacity of the input information given to the multi-head attention without interfering with the role of the heads. 
In other words, the role of the grouped embedding components is different: the group strategy improves parameter efficiency while maintaining the advantages of multi-head attention.\n\n2-2. (about the total number of heads) \nFor fairness, we set the total number of heads to be the same as in the original model throughout all experiments. For example, if the total number of heads is set to 8, the 2- and 4-group transformers use 4 and 2 heads in each group, respectively, to keep the total head count the same. By keeping the total number of heads fixed, our group strategy can be used to control the group-wise behaviors of the multiple attention maps (see Figure 3 on page 8). We will more clearly specify the relationship between the group numbers and the head numbers in Table 2.\"}", "{\"title\": \"Response to Reviewer 3's concerns (part 2)\", \"comment\": \"3-1. (about section 3.4 - summary of the required resources)\nWe found some problems in Section 3.4. Because the group strategy was not applied to the key and value, the big-O complexity should be changed to (2+2/G)*O(D_{model}^2) for group attention and (3/G)*O(D_{model}*\\bar{D}_{model}), where \\bar{D}_{model} is the bottleneck dimension. \n\n3-2. (about the size of the bottleneck dimension) \nThroughout all experiments, M was set to D_{model}/G, so the number of parameters in the first layer of the feedforward submodule depends on G. The setting was already described in Appendix B.2 but was missing from the main paper. We fixed the description in Section 3.4 and clarified what M is (please check Section 3.3 on page 5). \n\n==== To minor concerns ====\n\nm1. (About group operations on key and value) \nInvestigating all possible grouping configurations revealed that grouping only the query gave the best performance on the toy task, and that there was a significant performance reduction when additional grouping was applied to the key and value. We also observed similar performance when applying a single grouping operation to the key or value alone. Nevertheless, we took into account the scalability of language models to encoder-decoder architectures, such as sequence-to-sequence models, so we chose the query as the target of the group operations. That is, from the point of view of the decoder in an S2S model, the key and value can come from a different source domain (the encoder), whereas the query always has the same modality (the decoder path only). \n\nm2. (About typos and grammar errors) We will fix the typos and grammar errors.\n\nm3. (About the batch size of 22) We followed most of the training settings of Dai et al., who used a batch size of 22 for training Transformer-XL (base). \n\nDai et al., Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context, ACL-19 \n\nm4. (About Figure 1c) Thank you for your suggestion. We fixed Figure 1c. \n\nm5. (About the mention of a bottleneck layer in the introduction) In the introduction, we mentioned, \u201cWe added two inter-group operations that share a common feature over groups for the group attention layer and linking features in different groups for the group feed-forward layer.\u201d We improved the introduction; please check it.\n \nm6. (About the mention in the introduction about large-size transformers) We did not argue that transformers only work at large sizes. In the introduction, we mentioned that small transformers have not yet been explored well in the char-level LM domain.\n\nm7. 
(About group embedding) Yes, the group embedding can be implemented by splitting a word embedding. We wanted to emphasize that the feature space is split and that each split embedding is processed group-wise.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a way of reducing the number of parameters in Transformer by splitting some of the linear layers into multiple separate groups. Additional mixing linear layers are added so information can pass from one group to another. In the feedforward submodule, a bottleneck layer is also added to reduce the number of parameters. Small-scale experiments on language modeling compared the proposed model to vanilla transformers.\n\nI think the motivation of the paper is good and important for real-world applications of Transformers. However, there are several major problems with the paper.\n\nFirst, the proposed method is only marginally better than a vanilla transformer of similar size, and both are much worse than the current sota. Also, the baseline transformer experiment is done by the authors themselves, and it\u2019s not clear how well they tuned it. From Table 2, it seems they simply reduced the hidden size, but that\u2019s not the only way of reducing parameters. \n\nThe second problem is that the authors completely ignore the fact that the multi-head attention is already doing grouped attention. It splits the hidden states into multiple parts, and each head performs attention separately. Thinking of it this way, the proposed \u201cgroup attention\u201d feels more like multiplying the number of heads. This also means the authors should compare their model to a vanilla transformer with 2x more heads.\n\nAnother problem is section 3.4. Here the authors claim their model has O(D_m^2/G) parameters, but it\u2019s not true because key and value projections are not grouped and their size is O(D_m^2). Also, the number of parameters in the first layer of the feedforward submodule depends on M rather than G (if I understand it correctly). Despite this, I can\u2019t find the exact value of M in the paper.\n\nOther minor comments are:\n- I don\u2019t understand the reasoning behind not grouping key and value projections because \u201cthey can come from other source domain\u201d. What does it mean and why does it prevent grouping? In any case, the experiments only use characters, so why not group them as well?\n- The paper has many typos and weird english such as \u201cnatural language process model ...\u201d, \u201cincreased training complexity issues ...\u201d, \u201cwhere also requires \u2026\u201d, \u201cgets incredible achievements\u201d, \u201c... performance compare to Transformers\u2019. \u201d, \u201c... rational behinds.\u201d, \u201cheavy weights\u201d, \u201c... how high similarity ...\u201d, \u201c... since Transformer raises.\u201d\n- Why a batch size of 22? It\u2019s much smaller than Dai et al., so shouldn\u2019t the learning rate need to be adjusted accordingly?\n- Figure 1c is not exactly consistent with the text. According to Eq. 3, there should be a summation before ReLU. 
I know a summation of two linear layers can be written as concat + linear, but it would be easier to understand if the figure was consistent with the text.\", \"Maybe make it clear in the introduction that a bottleneck layer is also used.\", \"The introduction suggests that Transformers only works with large sizes, which is bit misleading. Yes, one needs huge models for reaching SoTA, but there is nothing in the Transformer architecture that requires large sizes. In fact, the experiments in the paper show that a small vanilla transformer outperforms much larger recurrent networks.\", \"The group embedding in section 3.1 doesn\\u2019t make sense. Embedding simply assigns a vector to each token, so grouping dimensions here doesn\\u2019t change anything. It\\u2019s the same as having a simple embedding, then splitting it into multiple parts.\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a lightweight Transformer model (Grouped Transformer) for character level LM tasks. The key idea to reduce model complexity (in terms of the number of parameters) is the idea of grouped computation, i.e., splitting the embedding into groups, applying functions group-wise and then learning some inter-group aggregation. The end result is a model that reduces parameter cost by the number of groups.\\n\\nOverall, the idea is an incremental one, although interesting largely based on the fact that this works. It mainly involves the application of group-wise paradigm to Transformers which enables parameter savings in the attention and feed-forward layers. I like the direction that this work is pushing for and I feel that the development of efficient Transformers is indeed a compelling direction. I am voting for weak accept.\\n\\nThe perhaps most limiting factor in this work lies in the execution. Personally, I find the experiments a little lacking and it is particularly puzzling to me why the authors restricted the scope of this work to only character level LM tasks. It would be interesting to know how the proposed method works on the standard MT benchmarks or other tasks where Transformers are state-of-the-art. (I note that there are some negative results on word-level LM in the appendix section)\\n\\nAnother particularly peculiar point in comparison with the standard Transformer model. Are the experiments (Table 1) really fair? Why do the authors not compare with the Transformer-XL with the same setting, i.e., number of layers (9 in theirs)? The authors should provide a direct comparison (some form of \\\"1-Group Transformer\\\" without inter-group interactions). \\n\\nThe charts in section C of the appendix are highly confusing, it would be better to just observe the effect of certain direct hyperparameters (number of groups, layers etc), instead of being hidden behind the number of parameters. I would be happy to see a distinct table or chart for every hyperparameter. This is the appendix so I don\\u2019t think space is an issue. \\n\\nI have some additional follow up questions\\n\\n1)\\tWhat happens beyond 2 and 4 groups? What is the maximum number of groups before performance degradation becomes too much?\\n2)\\tMy understanding is that each linear layer gets parameter saving relative to the number of groups. Is this correct? 
The overall parameter cost is divided by the number of groups? If so, the extent of parameter savings is very similar to the Quaternion Transformer paper already cited in this paper? The grouped Transformer also splits embeddings into multiple groups, which draws parallels with the component-wise splitting in Quaternion Transformers. Should this be discussed or compared with given the striking similarities?\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: This paper proposes a lightweight alternative to the design of self-attention based Transformers on character-level language modeling (LM). The approach was motivated by the similar technique that has been applied on group convolutions, but with a few notable differences too, such as inter-group mixing and low-rank approximation (which also appeared in ConvNets before, but this still strkes me as a difference in the Transformer context). Via experiments on two large-scale char-level LM datasets as well as a relatively extensive set of ablative experiments, the authors demonstrated the effectiveness of their approach.\", \"pros\": \"+ A very well-written paper. Most of the math symbols in the paper come with clear dimensionalities, which make it very easy to follow. The description for the methodology is also pretty clear. \\n+ Well-designed experiments. Enwik-8 and text8, while widely used to benchmark Transformers these days, are still very challenging large-scale tasks. The authors also provide a series of ablative studies comparing the group-transformer with the original transformer in Table 3.\\n+ Table 2 and Figure 3 (in the Appendix) are pretty strong proof of the effectiveness of the approach (at least on character-level language modeling). \\n\\n================================\\n\\nA few questions/issues/comments:\\n\\n1. For the key/value computation, why did you still keep the \\\"more complex/expensive\\\" $D_\\\\text{model}^2$ design? You explained in the paper that they could \\\"come from other source domain\\\", but in the specific case of character-level language modeling (in which you are just using a decoder Transformer without encoder-decoder attention), I don't think this is a problem. Why not make $\\\\mathbf{k}_{gh}$ and $\\\\mathbf{v}_{gh}$ something similar to how you compute the query? Or alternatively, why don't you make them low-rank too, as in the feed-forward layer? This difference in design seems strange to me.\\n\\n2. In Section 3.4, you mentioned that the Group-Transformer (I'll call it GT for simplicity below) has resource complexity $O(D_\\\\text{model}^2/G)$ whereas the original Transformer has complexity $O(D_\\\\text{model}^2)$. However, this is not true by your design of the key/value module, and by your own analysis in Appendix B.1, where you still have a $2 D_\\\\text{model}^2$ term. Therefore, I suggest reworking on Section 3.4, as the big-O complexity of the parameter space should be the same. (This again makes me curious about question (1) above...)\\n\\n3. Section 4.1 says that you only explored group size from {2, 4}. How did you pick this number? Why not 8 groups or more? As the 2-group option only saves about 10%-15% of the parameters (according to your analysis in Appendix B), it's actually not a large difference. 
Meanwhile, it seems 2-group is always better than 4-group. While I guess the 8-group option would certainly make the model size very small, I'm very curious to see how good/bad it is when you match the # of parameters of an 8-group GT with a {2,4}-group GT.\n\n4. As the \\\"lightweight\\\" property of GT is what you are focusing on, could you also show/approximate the number of FLOPs used by LSTMs in Table 1? While LSTMs use more parameters, they don't use as much computation as do the Transformers (which need to form an $O(L^2)$ matrix in the self-attention module, where $L$ is the sequence length). Also, I think it's important to show the actual (wall-clock) runtime comparison of GT with Transformer-XL and the best LSTM model(s).\n\n5. I find it a bit strange (and slightly disappointing) that this method does not generalize that well to word-level language modeling, as none of the designs introduced in the paper are specific to \\\"character\\\"-level modeling alone. How's the performance of GT if you forget about the word embedding compression for a while (e.g., use a large embedding size, such as 500 like in prior works)? Some recent work [1] seems to suggest that a very small Transformer-XL (only 4M parameters + a normal embedding) can achieve a perplexity around 35, too.\n\n------------------------------------\nSome issues that did not really affect the score:\n\n6. In Section 3.2 (currently at the bottom of page 3), maybe add the dimensionality of $\\mathbf{x}$ (which should be $D_\\text{model}$) just for clarity, as you are omitting the \\\"time\\\" dimension (of a sequence) and only considering a single time step.\n\n7. Right after Eq. (2), the first $\\mathbf{W}_{gh}^\\text{m-intra}$ should be $\\mathbf{W}_{gh}^\\text{o-intra}$.\n\n8. In Eq. (4) (and the sentence following it), $W_{hg}^\\text{f2}$ shouldn't have a reference to $h$, as the reference to heads should only be in the self-attention.\n\n9. Eq. (7) intra -> inter.\n\n10. Some descriptions in Appendix A are confusing. For instance, you didn't really define the function $\\text{Shuffle}(\\cdot)$, and it took me a while to realize you mean transposing the 0th and 2nd dimension of a $G \\times M \\times G$ matrix. Similarly, the $\\text{Concat}(\\cdot)$ function in Eq. (7) is \\\"undefined\\\", in the sense that its input is already a $G \\times M$ matrix (each row is a $1 \\times M$ vector). I think what you want is to vectorize it to shape $1 \\times (M * G)$, and $\\mathbf{W}_g^\\text{intra[2]}$ should have shape $(M * G) \\times \\bar{D}_\\text{group}$. I suggest you revise and clarify this part.\n\n\n11. I'm curious (and wonder if you've tried this): What if you increase the model size of the Group-Transformer to be as large as the original Transformer on enwik-8 and text8 (e.g., 40M)? How does the GT perform? While Table 3 is indeed convincing, the result obtained by GT is still far from the actual SOTA (e.g., obtained by Child et al. [2] with a much larger model). Would be interesting to compare how a model \\\"as large\\\" would do.\n\n------------------------------------\n\nOverall, I think this is a promising strategy that seems to work very well on character-level language modeling. 
My only major concerns are some of the specifics of the design of the methodology (e.g., the key/value part) and the failure of the approach to generalize to a very relevant domain such as word-level LM.\n\n[1] https://arxiv.org/abs/1909.01377\n[2] https://arxiv.org/abs/1904.10509\"}" ] }
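A recurring technical point in the thread above is the parameter arithmetic of grouped projections: splitting a linear layer into G independent groups divides its weight count by roughly G, while any ungrouped projection (such as the key/value maps) keeps the full O(D_model^2) term. The sketch below makes that arithmetic concrete. It is a minimal illustration only, not the paper's implementation; the class name GroupedLinear and all sizes are invented for the example.

```python
import torch
import torch.nn as nn

class GroupedLinear(nn.Module):
    """Applies G independent linear maps to G equal slices of the input.

    A dense nn.Linear(d, d) holds d*d weights; G groups hold G blocks of
    (d/G)*(d/G) weights each, i.e. d*d/G weights in total (plus d biases).
    """
    def __init__(self, d_model: int, groups: int):
        super().__init__()
        assert d_model % groups == 0
        self.groups = groups
        self.proj = nn.ModuleList(
            [nn.Linear(d_model // groups, d_model // groups) for _ in range(groups)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the feature axis into G chunks and transform each independently.
        chunks = x.chunk(self.groups, dim=-1)
        return torch.cat([p(c) for p, c in zip(self.proj, chunks)], dim=-1)

d = 512
dense = nn.Linear(d, d, bias=False)
grouped = GroupedLinear(d, groups=4)
print(sum(p.numel() for p in dense.parameters()))    # 262144 weights
print(sum(p.numel() for p in grouped.parameters()))  # 66048 = 512*512/4 + 512 biases
```

With G = 4, the grouped weight count is one quarter of the dense layer's, matching the roughly 1/G saving described in the responses; an ungrouped key or value projection alongside it would still contribute the full d^2 weights, which is exactly the reviewer's objection to the original O(D_m^2/G) claim.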
HylvleBtPB
Language-independent Cross-lingual Contextual Representations
[ "Xiao Zhang", "Song Wang", "Dejing Dou", "Xien Liu", "Thien Huu Nguyen", "Ji Wu" ]
Contextual representation models like BERT have achieved state-of-the-art performance on a diverse range of NLP tasks. We propose a cross-lingual contextual representation model that generates language-independent contextual representations. This helps to enable zero-shot cross-lingual transfer of a wide range of NLP models, on top of contextual representation models like BERT. We provide a formulation of language-independent cross-lingual contextual representation based on mono-lingual representations. Our formulation takes three steps to align sequences of vectors: transform, extract, and reorder. We present a detailed discussion of the process of learning cross-lingual contextual representations, as well as of the performance in cross-lingual transfer learning and its implications.
[ "contextual representation", "cross-lingual", "transfer learning" ]
Reject
https://openreview.net/pdf?id=HylvleBtPB
https://openreview.net/forum?id=HylvleBtPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "0vWK7djX8k", "HkgCAIXniB", "SyxGx3e6YH", "ryexDZ32KB", "BklxL4DDYr" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740593, 1573824214306, 1571781609782, 1571762519575, 1571415111824 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2103/Authors" ], [ "ICLR.cc/2020/Conference/Paper2103/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2103/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2103/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a method to learn cross-lingual representations by aligning monolingual models with the help of a parallel corpus using a three-step process: transform, extract, and reorder. Experiments on XNLI show that the proposed method is able to perform zero-shot cross-lingual transfer, although its overall performance is still below state-of-the-art jointly trained method XLM.\\n\\nAll three reviewers suggested that the proposed method needs to be evaluated more thoroughly (more datasets and languages). R2 and R4 raise some concerns around the complexity of the proposed method (possibly could be simplified further). R3 suggests a more thorough investigation on why the model saturates at 250,000 parallel sentences, among others.\\n\\nThe authors acknowledged reviewers' concerns in their response and will incorporate them in future work.\\n\\nI recommend rejecting this paper for ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author response to reviews\", \"comment\": \"Thanks for the comments from all reviewers!\\n\\nWe acknowledge that one of the main weaknesses of this paper is in its evaluation, which is only performed on a single pair of language and on a single task. We are working on more experiments to strengthen the evaluation:\\n\\n- Evaluation on more languages, such as German\\n\\n- Evaluation on more tasks, especially more complex tasks such as reading comprehension. Although NLI is the most commonly used task (recently) to evaluate cross-lingual representations, it does not fully take advantage of the language-independent aspect of our proposed method. While the successful zero-shot transfer of ESIM model partially demonstrated the use of language-independent cross-lingual representations, evaluation on more complex NLP tasks could reveal situations where the language-independency of representation is more critical to zero-shot transfer performance.\\n\\nUnfortunately, we are not able to update on new results here yet. We really appreciate the suggestions by the reviewers which we would take to further improve our work.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a method to learn language-independent cross-lingual contextual representations by mapping the representations of a monolingual model in one language to the representations of a monolingual model in another language. The proposed approach consists of three steps: 1. A transformation is learned that minimizes the distance between the contextual word representations of the two models of a sentence and its translation in a parallel corpus. 2. Each sentence is summarized as a sequence of key-point tokens based on phrase alignment. 3. 
The contextual word vectors are reordered based on an order prediction model.\\nThe authors perform experiments on intrinsic tasks and on XNLI, mapping between an English and a Chinese BERT model. They outperform multilingual BERT (mBERT) on the latter.\\n\\nOverall, the approach is novel, but the steps seem overly complicated. The extrinsic evaluation is the weakest point of the paper. Because of this, I tend to a Weak Reject. I would be willing to increase my score if additional languages are added to the evaluation and if the steps are better motivated and compared to simpler alternatives.\\n\\nThe high-level steps of the approach (transform, align, reorder) make sense. They seem to be inspired by classical phrase-based MT pipelines, but this connection is not made clear. In particular, some of the steps seem unnecessarily complicated and I am wondering whether the authors tried or compared to simpler alternatives. As a parallel corpus is used in the first step, word alignment could be automatically obtained by FastAlign without the use of attention matching or using a phrase table that is learned in an unsupervised way as in recent work in NMT (https://arxiv.org/pdf/1804.07755.pdf). For the second step, embeddings of aligned phrases could be averaged or the head of a phrase could be used instead of predicting key-points with a separate network.\\n\\nThe data requirements between the steps seem somewhat inconsistent. A parallel corpus is used in the first step, but explicitly not used in the second step. While I agree that language-independent cross-lingual representations should not use a parallel corpus, it would have been good if the first step could have also been performed without one (or with a smaller size).\\n\\nThe extrinsic evaluation of the paper could be improved. The accuracy on each intrinsic step is evaluated. Without any reference or comparison, I found it hard to tell how good these numbers are so this evaluation did not add much for me. The main extrinsic evaluation of the paper is on XNLI but only employs an English and a Chinese model. When training cross-lingual models, the aim is to train approaches that not only work for one or two languages but for many. In light of this, I find it hard to tell from the results on a single language pair how well the approach will generalize to other languages, particularly as mBERT's performance on zh XNLI is comparatively weak (https://arxiv.org/pdf/1901.07291.pdf). There are other publicly available BERT models such as for German (https://deepset.ai/german-bert) on which the approach could be tried. If compute is a concern, then the approach could be applied to ELMo representations. This would also enable a comparison to Schuster et al. (2019; https://arxiv.org/abs/1902.09492).\\n\\nFinally, the data requirement of 250k parallel sentences is prohibitive for many language pairs where this approach would be valuable. 
It would be good to see a chart on how model performance develops with the number of parallel sentences to see if a smaller number of parallel sentences would be viable in practice.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a new method to obtain cross-lingual contextual embeddings by aligning monolingual ones through 3 steps: transform each token individually, merge them as needed to obtain a uniform granularity across languages, and reorder them.\n\nWhile I think that the proposed method has some interest, and the extensive ablation experiments are useful to better understand its behavior, I do not think that it has enough merit to be accepted to the conference. I feel that the paper tends to overly complicate things, and it is often difficult to extract any clear idea from it. The proposed method is also much more complicated than previous approaches, yet it does not perform better than them (XLM has better absolute results and exactly the same cross-lingual transfer gap, while being substantially simpler). More concretely:\n\n- The paper tends to overly complicate things. For instance, Sections 2.1 and 2.2 try to mathematically formalize very basic intuitions. Unless the formalization is important for clarity (which it is not, as these are obvious ideas) or necessary later in the paper (which it is not either), there is no point in doing that. This only makes the paper more difficult to follow than it should be.\n\n- The only extrinsic evaluation is on XNLI, where the authors evaluate the zero-shot cross-lingual transfer performance from English into Chinese. However, the proposed method does not bring any improvement over the current state-of-the-art in this setup. The proposed method gets 80.2% and 71.7% accuracy in English and Chinese, respectively, leaving an absolute transfer gap of 8.5%, while the XLM model from Lample and Conneau (2019) obtains substantially better results (85.0% and 76.5%) with the exact same transfer gap of 8.5%. This could still be good enough if the proposed method had some other advantage over the previous SOTA, but I cannot find any and, in contrast, I do find some disadvantages (see below).\n\n- In addition to being more complicated than previous approaches, the proposed method also introduces new hyperparameters and seems more difficult to train. For instance, the authors need to incorporate annealing to train the transformation module, and the model seems quite sensitive to the corresponding hyperparameter (Table 1).\n\n- While both multilingual BERT and XLM simply fine-tune a pre-trained BERT model to perform some downstream task, the proposed method is used as a feature extractor, and the authors train an ESIM (LSTM) model on top. The reported experiments do not control for this factor (i.e., what would happen if one learns an ESIM model on top of XLM?).\n\n- The authors highlight that their system can be trained in \\\"less than 5 hours, on a single GPU\\\", while \\\"XLM uses much more data and training time\\\", but this is quite deceptive. Your approach also requires training a monolingual BERT model for each language, which is even more expensive than training a single joint model as XLM does. 
It is true that one could potentially use publicly available monolingual models, but most pre-trained models in languages other than English are already multilingual anyway, so I do not see a strong practical justification for this.\n\n- The proposed model and the ones it is compared to do not use the same training data. This can have some justification (it might be computationally prohibitive for the authors to pre-train their own models, which might be the reason why they use public models trained on different data), but they should be more upfront about this. In relation to this, it is unfair to remark that the proposed method uses less parallel data than XLM, while not even mentioning that it uses more monolingual data (if I am not wrong, XLM was only trained on Wikipedia, while BERT also used a book corpus, at least for English). To make things worse, the authors claim that \\\"XLM uses much more data and training time than our approach\\\", which seems wrong.\n\n- The authors criticize XLM because it \\\"does not produce the same representation for different languages, so there is no guarantee of the performance in transfer learning\\\". This might be true, but is it any different for your proposed method? Your method is not better empirically, and it does not have any theoretical guarantee either.\n\n- As the authors themselves acknowledge, the proposed method is similar in spirit to Schuster et al. (NAACL'2019) - which is also much simpler - but they do not compare to it in their experiments.\n\nAll in all, I think that the paper tends to overly complicate things, and ultimately fails to answer a simple central question: why should one prefer your approach over previous methods like XLM, or what is it that makes it otherwise interesting or relevant?\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new approach to learning contextualised cross-lingual representations based on a three-step procedure: transform-extract-reorder. The proposed model is tested on zero-shot cross-lingual transfer for one language pair, English-Chinese, where the gains are reported over the initial multilingual BERT model, which does not rely on any bilingual signal. I like the main modeling idea, and the actual model is described quite nicely and in an intuitive way. However, I believe that much more work is needed in terms of proper experiments, including: a) additional strong and insightful baselines; b) experiments with more language pairs; c) experiments across other transfer tasks. The paper also does not include nor compare against all relevant previous work. My main comments and remarks are as follows:\n\n*Related work and Novelty* The topic of learning contextualised cross-lingual representations has become quite popular since the rise of ELMo and BERT. However, the authors cite only one paper that proposed such contextualised xling representations (the work of Schuster et al.). Moreover, they never compare to that model, which is quite weird given the fact that the main goal of all xling representation models is to enable cross-lingual transfer in the first place. 
A comparison to the cited work of Schuster et al., plus additional comparisons to other (non-cited) models, are required to clearly understand the main empirical benefits of the proposed three-step framework. Other relevant papers which are neither cited nor compared against: Mulcaire et al. (NAACL 2019); Aldarmaki and Diab (NAACL 2019).\n\nThe authors should really stress the key novelty of their approach in comparison to the existing literature (which is mostly based on simpler projection-based methods).\n\n- Note that there are more papers on the subject getting published soon, such as Mulcaire et al. (CoNLL 2019) and Liu et al. (CoNLL 2019); while making comparisons to the models introduced in those papers is not required, it would be nice to also briefly mention that latest work in the related work section once it gets published.\n\n*Comparisons to XLM* Based on the limited set of results reported in the paper, it seems that the proposed model cannot match the performance of the XLM model (Lample and Conneau, NeurIPS 2019). The authors try to justify the usage of their method over XLM by emphasising the fact that their method produces language-independent xling representations, which XLM cannot do. However, there are multiple issues here: 1) it is not clear why exactly this requirement of language-independence is needed for a task such as zero-shot XNLI; 2) the paper does not demonstrate the usefulness of that language-independence property empirically at all. To me, it just seems like an unsuccessful attempt to make a conceptual distinction between the proposed model and XLM, given the fact that XLM seems to significantly outperform the proposed model.\n\nPage 8 (\\\"Training cost and beyond\\\"). The authors claim that 250k pairs of parallel sentences are enough to learn a near-optimal cross-lingual model and that the model saturates with more parallel sentences. I don't see this as a positive aspect of the model, as claimed by the authors. This means that the model cannot optimally encode knowledge about a variety of cross-lingual contexts during its training, i.e., it stops learning earlier than expected. Why does that happen? A plot showing transfer learning results of XLM versus the proposed model versus some static word embedding model that also exploits parallel data would be very useful. The authors also do not even speculate why their model saturates already with 250k sentence pairs. Given the limited set of results (only one task and one language pair), it also remains unknown how general this phenomenon is and whether the same point is hit with some other language pairs and with other tasks. This definitely requires further and more thorough investigation.\n\n*Other Comparisons and Experiments*\nAs mentioned, the paper is very limited when it comes to proper thorough evaluation. Most experiments focus on intrinsic/internal evaluations of different steps of the proposed framework, or some probing tests, which can also be seen simply as diagnostic experiments. The actual downstream experiments are conducted on only one language pair and for only one (zero-shot) transfer task. This is definitely not sufficient to draw any generalisable claims, and it prevents us from digging deeper into other interesting aspects of the model. 
I would suggest that the authors run the model across the same range of tasks as done in the XLM work of Lample and Conneau, and definitely for more languages (ideally diverse target languages).\n\nAlso, besides XLM and the translation-based baselines reported on multilingual BERT's github page, there are actually no other baselines, which really makes it hard to put this work in context and understand its usefulness. For instance, static cross-lingual word embeddings could be added to ESIM and used to evaluate on XNLI as well (see, e.g., the work of Glavas et al. (ACL 2019)). It would be interesting to report the benefits of replacing such static vectors with truly contextualised representations. Also, as mentioned before, comparisons to other models that learn contextualised cross-lingual representations are definitely something that should be included in the paper.\n\n*Other Comments*\n- Based on the results from Table 4, it seems that, in order to enable fully language-independent representations, one must sacrifice some of the monolingual performance, as the numbers drop from the T over the T+E to the T+E+R variant. Is the same pattern visible for other languages monolingually? Why does this happen? \n- As mentioned before, the whole emphasis on language-independence is somewhat oversold throughout the paper without providing sufficient empirical evidence that it is crucial for transfer performance.\"}" ] }
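The reviews above repeatedly probe the first, "transform" step: learning a map that aligns the contextual vectors of translated sentence pairs drawn from a parallel corpus. A standard closed-form baseline for this kind of alignment is the orthogonal Procrustes solution sketched below. This is an illustrative baseline only, under the assumption that word-aligned token pairs are available (e.g., via the FastAlign route a reviewer suggests); the paper's actual transformation module may be non-linear and trained with annealing, as Review #2 notes, and every name in the snippet is hypothetical.

```python
import numpy as np

def procrustes_align(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Orthogonal W minimizing ||src @ W - tgt||_F over rotations.

    src, tgt: (n, d) contextual vectors of word-aligned token pairs.
    Classic closed form: W = U V^T, where U S V^T is the SVD of src^T tgt.
    """
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# Toy usage with random stand-ins for aligned token embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 64))
true_rot, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # a random rotation
tgt = src @ true_rot + 0.01 * rng.normal(size=(1000, 64))

w = procrustes_align(src, tgt)
err = np.linalg.norm(src @ w - tgt) / np.linalg.norm(tgt)
print(f"relative alignment error: {err:.3f}")  # small, near the noise level
```

Word-alignment tools such as FastAlign would supply the (src, tgt) pairs that this closed form consumes, which is part of why the reviewers ask whether the paper's learned attention-matching step is needed at all.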
r1lPleBFvH
Understanding the Limitations of Conditional Generative Models
[ "Ethan Fetaya", "Joern-Henrik Jacobsen", "Will Grathwohl", "Richard Zemel" ]
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to determine whether they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs; and (3) Detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may be surprisingly ineffective for robust classification.
[ "Conditional Generative Models", "Generative Classifiers", "Robustness", "Adversarial Examples" ]
Accept (Poster)
https://openreview.net/pdf?id=r1lPleBFvH
https://openreview.net/forum?id=r1lPleBFvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "M6gEmVoYAJ", "SyxZWEY3oH", "HJlNVQYnsB", "SkxwWQt2sr", "S1eyhzF2jH", "rJx8deVBcS", "Sklh8s_atB", "S1ljBraStH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740563, 1573848057174, 1573847852183, 1573847807395, 1573847719471, 1572319341593, 1571814227523, 1571308867115 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2102/Authors" ], [ "ICLR.cc/2020/Conference/Paper2102/Authors" ], [ "ICLR.cc/2020/Conference/Paper2102/Authors" ], [ "ICLR.cc/2020/Conference/Paper2102/Authors" ], [ "ICLR.cc/2020/Conference/Paper2102/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2102/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2102/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents theoretical results showing the conditional generative models cannot be robust. The paper also provide counter examples and some empirical evidence showing that the theory is reflected in practice. Some reviewers doubt how much of the theory holds in reality, but still they think that this paper could be a useful for the community. After the rebuttal period, R2 increased their score and it seems that with the current score the paper can be accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply Blind Review #2\", \"comment\": \"We thank the reviewer for the detailed comments, we have updated the paper accordingly and believe your comments were very helpful to improve the manuscript.\", \"regarding_specific_points\": \"\", \"q\": \"Minor comments\", \"a\": \"Thank you for the minor comments, we will fix the mistakes\\n\\n[1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, \\u201cTowards deep learning models resistant to adversarial attacks,\\u201d in International Conference on Learning Representations, 2018.\"}", "{\"title\": \"Reply Blind Review #3\", \"comment\": \"We thank the reviewer for the detailed comments. We have incorporated them into the manuscript.\", \"regarding_specific_points\": \"\", \"q\": \"Clarify the padded uniform noise\", \"a\": \"By padding the data x, with extra dimensions of uniform noise, our data is now (x, u), this defines a distribution p(x, u) = p(x)p(u) and since we are using uniform noise p(u)=1, so p(x, u) = p(x). Now, our generative model is modeling p(x, u), so if the marginal p(u) under our model is not perfectly uniform, then this equality no longer holds. However, this is not a problem for our evaluation since all models compared in our work use the same amount of noise and padding.\"}", "{\"title\": \"Reply Blind Review #1\", \"comment\": \"We thank the reviewer for the effort and remarks. We incorporated the suggested changes and believe that they have significantly improved the manuscript.\\n\\nRegarding the two \\u201ccons\\u201d:\", \"q\": \"Datasets sufficient to support our claims?\", \"a\": \"MNIST and CIFAR are standard datasets for adversarial attacks so experimenting on them is aligned with common practice. Furthermore, while the first robust models on MNIST have recently been suggested, CIFAR10 is still very far from being solved. Our paper aims to shed light on this discrepancy from a conditional generative modeling perspective. 
We believe this is important to understand, as this model class has shown great promise and the state-of-the-art for robust classification on MNIST is a conditional generative model as well. However, we also added our new BG-MNIST dataset, which combines MNIST and CIFAR10 and gives another point of view. Regarding text-based generative models, as text is discrete, the concept of an epsilon perturbation isn\u2019t appropriate, and it is very common in the literature to only study robustness on images. Therefore, we leave this type of data for future work.\n\nTo increase the value of the BG-MNIST experiments, we have added some more interpolation results and discussion of them in the manuscript. The results highlight that the difference between CIFAR10 and MNIST is quite subtle in parts, as the surprising interpolation behaviour on CIFAR10 cannot just be explained by excessive class-unrelated entropy or poor likelihoods of the model.\"}", "{\"title\": \"Revision Summary\", \"comment\": \"We thank the reviewers for their time and comments.\nFollowing their insights and suggestions, we have thoroughly revised the draft.\nWe have:\n=> Improved the experimental section to rely less on the appendix and to make it easier to follow.\n=> Added suggested references and discussion.\n=> Updated and added results to the BG-MNIST section and discussed them. They show how the difference between MNIST and CIFAR10 is difficult to attribute to a single property of these datasets. \n=> Discussed that these new results again highlight how likelihood fails to give us the full picture. A model trained on BG-MNIST, a density modeling task similarly challenging to CIFAR10, can perform considerably worse than a CIFAR10 model in terms of likelihood on in-distribution data, but still behave more favourably in terms of the interpolations through ambiguous inputs. This suggests a complex interplay between the difficulty of the discriminative and generative parts of the objective.\n=> Clarified other points raised by the reviewers.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"paper summary:\nThe authors claim that likelihood-based generative models are not as robust to noise as general consensus claims them to be. To prove this, the authors make use of adversarial, ambiguous, and incorrectly labeled in-distribution inputs. The authors address issues regarding robustness in near-perfect conditional generative models, as well as assess the robustness of the likelihood objective.\n\npros of the paper:\n1) The authors make well-motivated arguments about how a near-perfect generative model is also susceptible to attacks, by providing examples that are adversarial and have high likelihood, yet are incorrectly labeled.\n2) They also demonstrate how class-conditional generative models have poor discriminative power.\n\ncons:\n1) The experiments section is written very poorly. This section relies heavily on the supplement, making it hard to read due to the constant back and forth between the results and the details of the experiments.\n2) Experiments seem largely limited. Comparisons on image datasets such as MNIST and CIFAR10 alone are not convincing enough to establish the generalizability of the proposed theory. 
For example, the hypothesis could completely fail on text-based generative models.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Update: I thank the authors for their response, and have read the other reviews.\n\nThis paper demonstrates some theoretical and practical limitations on the use of likelihood-based generative models for detecting adversarial examples. They construct a simple counterexample showing that there are adversarial examples for an arbitrarily accurate model (as measured by KL) that are not detectable by diminished likelihood of the model (as the dimension increases). Extending the work of Gilmer et al., this proves that there can be no general robustness guarantee for conditional generative models (Bayes classifiers). They provide compelling empirical evidence that while conditional normalizing flows trained on MNIST can be effective in detecting and defending against adversarial attacks, these models trained on CIFAR10 are not. Surprisingly, it is shown that linear interpolations between images of different classes yield higher likelihoods for the CIFAR10 models and that class has little impact on model likelihoods. This goes some way in explaining why the detection is not effective on CIFAR, but questions still remain.\n\nThe paper makes fairly modest claims, but does a good job at demonstrating them and shedding some light on the issue. The experiments are thorough and fit into a growing body of evidence that the likelihoods of normalizing flows and other image-based likelihood models may not be that informative or well calibrated, where past work has focused on out-of-distribution detection. My only major complaint with the paper is that it is not clear to what extent the theoretical and practical problems are related. As mentioned in the paper, the counterexample construction depends on the geometry of the data rather than the learning model. It could be that for both the MNIST and CIFAR10 datasets, the geometry is such that robustness guarantees are possible, and that the discrepancy in detection and interpolation arises because the normalizing flow has modeled the MNIST distribution much better than the CIFAR10 distribution. In this case we might hope that using conditional likelihood models for adversarial detection can be made effective, but that effort needs to be placed into improving the modeling capability. It's not obvious how to probe this distinction, but it would be good if this was given some thought in the paper. Also it would be good to see the attack detection numbers on BG-MNIST.\n\nComments:\nDifficulty in training conditional generative models: I believe in the two papers you cite the models do not use the label as input, but rather there is a separate model for each class? The overfitting is likely why the models had slightly lower conditional likelihood. As an aside, there are a couple of other examples of conditional normalizing flow models on images that use a mixture of Gaussians in the latent space [1], [2].\n\neq. 4: In the paper it is said that the second term in eq 4 is at most log(C), because the uniform distribution would have this value and that therefore this is negligibly small in comparison to the other term. 
Why exactly is this the case; couldn't the data entropy term be smaller in principle even if it's larger in practice? Or is the argument that the data entropy term scales with the dimensionality, but the label term does not, leading to an imbalance? This could use some clarification.\n\nA3: What is meant by \u2018While the ground-truth likelihoods for the padded and un-padded datapoints are the same due to independence of the uniform noise and unit density of the noise\u2019 in the appendix section A3? Wouldn't the ground-truth negative log likelihoods increase by the entropy of the uniform noise? Also, in the bits-per-dimension calculations, is the dimension the number of unpadded dimensions or the padded dimensions?\n\n\n[1] Izmailov, Pavel, et al. \\\"Semi-Supervised Learning with Normalizing Flows.\\\" Workshop on Invertible Neural Nets and Normalizing Flows, International Conference on Machine Learning. 2019.\n[2] Atanov, Andrei, et al. \\\"Semi-Conditional Normalizing Flows for Semi-Supervised Learning.\\\" Workshop on Invertible Neural Nets and Normalizing Flows, International Conference on Machine Learning. 2019.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Post rebuttal:\n\nThank you for your response. I appreciate that the authors added an experiment on BG-MNIST, which shows a trend intermediate between MNIST and CIFAR-10.\n\nAs the authors mentioned, the reweighting scheme could be a simple yet effective way to address the problem of current likelihood-based models. While there is room for improvement to further develop the method, the current version of the paper would be a good contribution to the community.\n\nHence, I raise my score from 3 to 6.\n\n----------------------------------------\n\nSummary: This paper investigates some limitations of conditional generative models (or generative classifiers). First, the authors present a counter-example in which a good generative classifier fails to detect adversarial attacks. Second, the authors claim that the marginal and conditional terms of the likelihood objective are the source of the problem. Finally, the authors demonstrate some experiments on adversarial attacks, out-of-distribution (OOD) samples, and noisy labels.\n\nPros:\n- While generative classifiers are believed to be more robust than their discriminative counterparts [1], the authors present a counter-example showing that this may not be true.\n- The authors investigate the marginal and conditional terms of the likelihood objective and demonstrate empirical results that the model fails to capture the outliers.\n\nCons:\n1. The imbalance issue of the likelihood objective is not surprising.\n\nAs the data x is far more complex than the class y, it is to be expected that the penalty from modeling p(x) is larger than the penalty from classifying p(y|x). As mentioned in Table 1 and Appendix A.2, balancing the two terms indeed improves the classification performance. However, to meet the high standard of ICLR, the authors should propose an alternative or modification of the likelihood which resolves the existing limitations. For example, [2] decomposes the semantic and background parts to improve OOD detection using likelihood models.\n\n2. 
The experiments are not extensively studied.\", \"the_authors_conduct_experiments_on_two_datasets\": \"MNIST and CIFAR-10. The authors may present more results on other datasets (e.g., SVHN or CIFAR-100) to confirm that their findings are consistent. Also, some observations seem to be inherited from the datasets; e.g., Figure 4 is natural since MNIST has disjoint support and CIFAR-10 has a continuous one.\", \"minor_comments\": [\"On page 8, ',' should be moved after (Azulay & Weiss, 2018).\", \"On page 8, (Schott et al.) should be changed to '\\\\citet' format.\", \"PixelCNN++ is cited twice.\", \"[1] Li et al. Are Generative Classifiers More Robust to Adversarial Attacks? ICML 2019.\", \"[2] Ren et al. Likelihood Ratios for Out-of-Distribution Detection. NeurIPS 2019.\"]}" ] }
HJewxlHFwH
Skew-Explore: Learn faster in continuous spaces with sparse rewards
[ "Xi Chen", "Yuan Gao", "Ali Ghadirzadeh", "Marten Bjorkman", "Ginevra Castellano", "Patric Jensfelt" ]
In many reinforcement learning settings, rewards which are extrinsically available to the learning agent are too sparse to train a suitable policy. Beside reward shaping which requires human expertise, utilizing better exploration strategies helps to circumvent the problem of policy training with sparse rewards. In this work, we introduce an exploration approach based on maximizing the entropy of the visited states while learning a goal-conditioned policy. The main contribution of this work is to introduce a novel reward function which combined with a goal proposing scheme, increases the entropy of the visited states faster compared to the prior work. This improves the exploration capability of the agent, and therefore enhances the agent's chance to solve sparse reward problems more efficiently. Our empirical studies demonstrate the superiority of the proposed method to solve different sparse reward problems in comparison to the prior work.
[ "reinforcement learning", "exploration", "sparse reward" ]
Reject
https://openreview.net/pdf?id=HJewxlHFwH
https://openreview.net/forum?id=HJewxlHFwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "YGG3wayhq-", "Hylrs_c3jS", "rJgtIu9niH", "r1eBrOq3or", "SylErwc2jS", "r1gP38qnsr", "H1lXWHd45H", "SJl_0LyRtr", "H1gZ0XznYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740531, 1573853341298, 1573853264753, 1573853244660, 1573852988059, 1573852847512, 1572271355222, 1571841743552, 1571722185462 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2101/Authors" ], [ "ICLR.cc/2020/Conference/Paper2101/Authors" ], [ "ICLR.cc/2020/Conference/Paper2101/Authors" ], [ "ICLR.cc/2020/Conference/Paper2101/Authors" ], [ "ICLR.cc/2020/Conference/Paper2101/Authors" ], [ "ICLR.cc/2020/Conference/Paper2101/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2101/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2101/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"While the reviewers generally appreciated the ideas presented in the paper and found the overall aims and motivation of the paper to be compelling, there were too many questions raised about the experiments and the soundness of the technical formulation to accept the paper at this time, and the reviewers did not feel that the authors had adequately addressed these issues in their responses. The main concerns were (1) with the correctness and rigor of the technical derivation, which the reviewers generally found to be somewhat questionable -- while the main idea seems reasonable, the details have a few too many question marks; (2) the experimental results have a number of shortcomings that make it difficult to fully understand whether the method really works, and how well.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"Thanks for your comment on our paper. We will try to improve the quality of the clarity in the next version.\"}", "{\"title\": \"Re: Official Blind Review #1. Part 2\", \"comment\": \"--- 8. In Figure 5, why does there seem to be discrete jumps in the learning curves for \\\"DoorOpen Coverage\\\"? \\n\\nThe discrete jumps happen in the coverage curve of the door opening experiment using Skew-Fit. The coverage measures how many states the agent has discovered at different iteration. In the door opening environment, the state has five dimensions: x, y, z of the gripper, gripper opening distance, and the door opening angle. The agent could quickly explore the full range of the first four dimensions (in the curve, the coverage increases rapidly at the beginning of the training). Then the increment of the coverage will slow down until the agent learned how to grab the door handle and open the door, and started to explore the last dimension of the state space. \\nDue to the nature of the door environment that the door opening angle dimension needs to be explored through a \\\"grab the door handle\\\" motion, which serves as a narrow passage in the state space. In order to explore states with different door opening angles, the goal proposing module must propose goal states with door opening angle larger than 0. However, at the early stage of the training process, we may not have enough goals proposed in those areas. \\nIn our setup, every iteration, we collect trajectories with 25 goal states. 
Even though we skewed the goal distribution aggressively, most of the proposed goals are still located in areas that can be reached without touching the door handle, which contributes little to the coverage. When a promising goal is proposed, the agent has the chance to explore areas that have not been discovered yet. However, due to the limitation of the goal-conditioned policy used in Skew-Fit, the agent can only explore areas that are very close to the proposed goal, which contributes a small increment to the coverage curve. Then, the learning curve will be flat again until the next promising goal state is proposed. The whole process is represented as the \\\"discrete jumps\\\" in the coverage curve. \\nWe do not observe similar patterns in our method because we allow the agent to have a broader exploration around the proposed goal. Once the robot grasps the door handle and starts to open the door, it will not just stop at a given angle but will try to move back and forth to open the door at different angles. The agent could explore the entire range of the door-opening-angle dimension in a few trials. \\n\\n--- 9. Discuss more explicitly under what assumption they expect for this method (with a Gaussian KDE) to work well.\\n\\nThere are two places in our method that need density estimation: 1) estimating the density of visited states p(s) to update the novelty frontier, and 2) computing log p_z(s) - log p_\\tao(s) as the intrinsic reward. When we estimate the visited-state density, we could replace KDE with other density estimation methods (such as the VAE in Skew-Fit or flow-based methods), which scale well to high-dimensional inputs. When we compute the intrinsic reward, we do not need an accurate density estimation. The key point is to construct an intrinsic reward that encourages the agent to reach the goal state but not to stay at the goal state. \\nWe could apply KDE on the lower-dimensional latent space obtained while learning p(s). In this case, the learned distribution N on the raw space may not be a Gaussian distribution, but some other distribution centered at z. However, as long as the learned N has higher variance than the distribution learned with a goal-conditioned policy (a Dirac delta function), we always gain a positive h(S|Z) - h(Z|S), which gives additional power for exploration.\"}", "{\"title\": \"Re: Official Blind Review #1. Part 1\", \"comment\": \"Thank you for the detailed review and comments. Below, we address your questions.\\n\\n--- 1. Why studying the quantity H(S | Z) - H(Z | S) is particularly important.\\n\\nThe main difference between our work Skew-Explore and the prior work Skew-Fit is the behavior of the policy to be learned. In Skew-Fit, the policy is a goal-conditioned policy, meaning that p(s|z) is a Dirac delta function centered at z. The entropy of the visited states equals the entropy of the proposed goals: h(S) = h(Z). In our work, we model the behavior of the policy given a sampled goal z, p(s|z), as a Gaussian distribution centered at z. The entropy of the visited states is then: h(S) = h(Z) + h(S|Z) - h(Z|S). \\nCompared to the prior work, for the same goal-proposing distribution, we gain an extra amount of entropy, which equals h(S|Z) - h(Z|S) = h(S) - h(Z) = I(S;N) >= 0. This means that, in every iteration, our method could visit more states, which leads to higher h(Z) and higher h(S) in the next iteration. Studying the quantity h(S|Z) - h(Z|S) shows why our method could achieve better performance than the prior method. 
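For reference, both this identity and the sign of the gain follow from the entropy chain rule; a short derivation (our own sketch, under the stated assumption that S = Z + N with Z independent of N):

```latex
h(S) + h(Z \mid S) = h(S, Z) = h(Z) + h(S \mid Z)
\;\Longrightarrow\; h(S) - h(Z) = h(S \mid Z) - h(Z \mid S).
% With S = Z + N and Z \perp N: h(S \mid Z) = h(N), and given S
% we have Z = S - N, so h(Z \mid S) = h(N \mid S) \le h(N). Hence
h(S) - h(Z) = h(N) - h(N \mid S) = I(S; N) \ge 0.
```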
\\n\\n--- 2. Can Table 1 be replaced with the learning curves?\\n\\nThanks for your comment. We will try to improve the paper by representing this experiment with a learning curve.\\n\\n--- 3. Can the authors summarize the difference between coverage and entropy in the main paper?\\n\\nThanks for your comment on this point. There is not enough space in the main paper to clearly describe this point. We have now added a few sentences in the appendix to explain their difference and how we computed them. In the next version, we will try to refine the paper by finding a proper place in the main paper to discuss this issue.\\n\\n--- 4. How sensitive is the method to the hyperparameter alpha? How was it chosen? Is it the same alpha chosen for Skew-Fit?\\n\\nWe use an alpha value of -1.1 for both the Skew-Fit and our Skew-Explore experiments. Alpha equal to -1 means that the skewed goal distribution is close to a uniform distribution over the visited-state support range. Since the objective of this work is to maximize exploration, we choose a more aggressive strategy (alpha less than -1) for goal proposing.\\n\\n--- 5. How was N chosen for the door environment?\\n\\nIn the door environment, we model N as a Gaussian distribution with a standard deviation of 1.5.\\n\\n--- 6. Is Figure 7 (left) showing the performance on the simulated or real-world robot? \\n\\nFigure 7 shows the performance in simulation. \\n\\n--- 7. If it was done on the real-world robot, were there any important details in getting sim-to-real-world to work? \\n\\nWe selected the policy with the best performance in the simulation and replayed the learned trajectory on the real robot. In order to make sim-to-real transfer work, we need an accurate simulator of the Yumi robot and need to make the real scene as close to the simulated scene as we can.\"}", "{\"title\": \"Re: Official Blind Review #4. Part 2\", \"comment\": \"--- Fig 5 How are entropy and coverage computed? What are the maximum possible values for both of these quantities? What precisely does the X axis correspond to?\\n\\nWe added the computation of entropy and coverage to Appendix B. \\nWe estimate the entropy and the coverage via uniform discretization over the environment state space. We do not check the validity of the discretized states, and there will be states that are not reachable from the initial state (such as the obstacle area in the PointMaze environment). The maximum possible values for both entropy and coverage are unknown. \\nThe X axis represents the training iteration. One iteration contains 5,000 steps. 
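A rough sketch of this discretization-based estimate (bin counts, shapes, and names here are our own illustrative assumptions, not the paper's code):

```python
import numpy as np

def entropy_and_coverage(states, low, high, bins=20):
    # Uniformly discretize the state space and bin the visited states.
    hist, _ = np.histogramdd(states, bins=bins, range=list(zip(low, high)))
    p = hist / hist.sum()                      # empirical cell probabilities
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    coverage = np.count_nonzero(hist)          # cells visited at least once
    return entropy, coverage

# Example: the 5,000 states of one iteration in a toy 2D environment.
states = np.random.uniform(-1.0, 1.0, size=(5000, 2))
print(entropy_and_coverage(states, low=[-1.0, -1.0], high=[1.0, 1.0]))
```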
\\n\\n--- Table 1 -- How did the baseline algorithms perform on this task?\\n\\nThe results of the baseline methods are not reported because they could not even discover all 5 target states.\\n\\n--- Fig 7 -- How did the baseline algorithms perform on this task? If the reward were sparse, shouldn't the Y axis be in the interval [0, 1]?\\n\\nThe baseline methods also failed on this task. The Y axis is the extrinsic reward times w_ext, which is in the interval [0, 10].\\n\\n--- \\\"As a consequence, the entropy of the distribution of these points is also maximized\\\" -- I believe that a finite number of points in a discrete space have measure zero, so they have zero entropy, regardless of the position of the points.\\n\\nWe treat the points as samples from a distribution, and measure the entropy of the distribution that we draw samples from.\\n\\n--- \\\"What is the difference between *S* (in bold) and S_t?\\\"\\n\\nS_t is a finite collection of sampled history states in a continuous space, and S is a random variable whose distribution is estimated from S_t using techniques like weighted KDE. We can say that S_t contains samples from S.\\n\\n--- \\\"truncated Gaussian function with a narrow range\\\" -- Can you explain precisely what this is?\\n\\nIn probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both) [1]. In our case, the normal is bounded as \\mu-\\epsilon<x<\\mu+\\epsilon, where \\epsilon is an arbitrarily small number. https://en.wikipedia.org/wiki/Truncated_normal_distribution\\n\\n--- In equation 2, I think it'd be clearer to write p^(1+\\alpha).\\n\\nThanks for your comment. In equation 2, we would like to show that p^(\\alpha) acts as a weight on p.\"}", "{\"title\": \"Re: Official Blind Review #4. Part 1\", \"comment\": \"Thank you for the detailed review. According to your comments and suggestions, we have added references, fixed typos, and removed or revised sentences that were confusing or lacked citations/proofs. Below, we address your questions.\\n\\n--- It seems like, if S is a finite collection of states in a continuous space, then it has measure zero, so its entropy should be zero. Can you explain why this is not the case?\\n\\nIn our formulation, S_t is a finite collection of sampled history states in a continuous space, and S is a random variable whose distribution is estimated from S_t using techniques like weighted KDE.\\n\\n--- If I'm not mistaken, in equation 2, if we take (say) alpha = -0.5, then w_i is proportional to sqrt(p(s_i)), so w_i is an increasing function in p(s_i), not a decreasing function.\\n\\nThe weight w_i does not need to be a decreasing function of p(s_i). In practice, there is a trade-off between exploration efficiency and learning efficiency, which is controlled by w_i through the parameter alpha. Proposing states with lower density is important for exploration; however, lower density also indicates that the agent is less trained on those states and may not know how to explore around them. Depending on the difficulty of the task and the environment, we could adjust alpha to decide how much we would like to emphasize exploration.\\nIn our experiments, we set alpha to -1.1. \\n\\n--- Can you discuss how you might scale a KDE to high dimensions?\\n\\nThere are two places in our method that need density estimation: 1) estimating the density of visited states p(s) to update the novelty frontier, and 2) computing log p_z(s) - log p_\\tao(s) as the intrinsic reward. When we estimate the visited-state density, we could replace KDE with other density estimation methods (such as the VAE in Skew-Fit or flow-based methods), which scale well to high-dimensional inputs. When we compute the intrinsic reward, we do not need an accurate density estimation. The key point is to construct an intrinsic reward that encourages the agent to reach the goal state but not to stay at the goal state. 
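A minimal sketch of such a reward, assuming a Gaussian p_z centered at the proposed goal z and a plain Gaussian KDE for the visited-state density (all names and values below are illustrative assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

visited = np.random.randn(500, 2)     # visited states so far, shape (n, d)
p_visited = gaussian_kde(visited.T)   # KDE estimate of the state density p(s)

def intrinsic_reward(s, z, sigma=1.5):
    # log p_z(s) - log p(s): high near the proposed goal z,
    # low in regions the agent has already covered.
    log_p_z = multivariate_normal.logpdf(s, mean=z, cov=sigma**2 * np.eye(len(z)))
    return log_p_z - p_visited.logpdf(s)[0]

print(intrinsic_reward(np.array([2.0, 2.0]), z=np.array([2.0, 2.0])))
```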
\\nWe could apply KDE on the lower-dimensional latent space obtained while learning p(s). In this case, the learned distribution N on the raw space may not be a Gaussian distribution, but some other distribution centered at z. However, as long as the learned N has higher variance than the distribution learned with a goal-conditioned policy (a Dirac delta function), we always gain a positive h(S|Z) - h(Z|S), which gives additional power for exploration. \\n\\n--- \\\"If the distribution has a larger range, the entropy is larger as well.\\\" -- Technically, this is not correct. You can construct distributions with larger ranges but smaller entropies.\\n\\nThis sentence is related to the previous sentence: \\u201cThe entropy of a continuous uniform distribution $U(p,q)$ is $\\ln(q-p)$, and if the distribution has a larger range, the entropy is larger as well.\\u201d As a consequence, the distribution we meant here is a continuous uniform distribution.\\n\\n--- I think that Equation 3 should be the KL divergence between state marginal distributions, not between trajectories. If it were the KL between trajectories, it would include actions and policy terms.\\n\\nEquation 3 indicates the KL divergence between the distribution formed by an actual trajectory and the desired distribution modeled by N. Since we choose N to be a Gaussian distribution centered at a given reference state, we penalize states which are too far from the reference state, or states where the agent has stayed for too long.\\n\\n--- How are w_int and w_ext chosen? It seems like the method depends critically on the balance between these hyperparameters. Is w_int decayed over time? If not, why does the policy stop exploring once it has found the goal?\\n\\nIn our experiments, we manually selected w_int and w_ext to adjust the average intrinsic return to be around 1 and the extrinsic return to be 10. The w_int and w_ext are fixed.\\nThe proposed z only affects the intrinsic reward; however, the overall objective is to optimize the return of the combined reward. Since the extrinsic return is much larger than the intrinsic return, once the agent has found the state with large extrinsic reward, the policy will eventually ignore the given goal and always go to the state with extrinsic reward.\\n\\n--- What policy is used at convergence? It seems like the policy is conditioned on Z_t, so how is the Z_t chosen for evaluation?\\n\\nIn the evaluation, we randomly sample a z from Z_t and pass it to the policy. Since the policy has converged to a solution that always goes to the state with extrinsic reward, it does not matter which z we send to the policy.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper studies the problem of exploration in reinforcement learning. The key idea is to learn a goal-conditioned agent and do exploration by selecting goals at the frontier of previously visited states. This frontier is estimated using an extension of prior work (Pong 2019). 
The method is evaluated on two continuous control environments (2D navigation, manipulation), where it seems to outperform baselines.\", \"Overall, I like that the proposed method integrates some notion of novelty with the language of mutual information and density estimation. While this idea has been explored in prior work (as noted in the related work section), the proposed idea seems like a useful contribution to the literature. The use of convolutions to obtain a lower bound on mutual information seems neat. The experimental results are quite strong.\", \"My main concern with the paper is a lack of clarity. I currently have enough reservations and questions (listed below) about the experimental protocol that I am leaning towards rejecting this paper. However, if the paper clarified the concerns below, I'd be willing to increase my review.\", \"Questions / Concerns:\", \"\\\"[Prior work on mutual-information cannot] guarantee that the entire state space can be covered\\\" -- In theory, I think these prior methods should cover the entire state space. Take the DIAYN objective, I(s, z) = H[s] - H[s | z]. This objective is maximized when p(s) is uniform over the entire state space and p(s | z) is a Dirac.\", \"\\\"It is time consuming to collect enough samples to estimate an accurate state entropy\\\" -- Can you provide a citation/proof for this? Why should we expect the proposed method to require fewer samples?\", \"\\\"...entropy itself does not provide efficient information to adjust the action at each step.\\\" -- Can you provide a citation / proof for this? Also, what does \\\"efficient information\\\" mean?\", \"It seems like, if S is a finite collection of states in a continuous space, then it has measure zero, so its entropy should be zero. Can you explain why this is not the case?\", \"If I'm not mistaken, in equation 2, if we take (say) alpha = -0.5, then w_i is proportional to sqrt(p(s_i)), so w_i is an increasing function in p(s_i), not a decreasing function.\", \"Can you discuss how you might scale a KDE to high dimensions?\", \"\\\"If the distribution has a larger range, the entropy is larger as well.\\\" -- Technically, this is not correct. You can construct distributions with larger ranges but smaller entropies.\", \"I think that Equation 3 should be the KL divergence between state marginal distributions, not between trajectories. If it were the KL between trajectories, it would include actions and policy terms.\", \"How are w_int and w_ext chosen? It seems like the method depends critically on the balance between these hyperparameters. Is w_int decayed over time? If not, why does the policy stop exploring once it has found the goal?\", \"What policy is used at convergence? It seems like the policy is conditioned on Z_t, so how is the Z_t chosen for evaluation?\", \"Fig 5 -- How are entropy and coverage computed? What are the maximum possible values for both of these quantities? What precisely does the X axis correspond to?\", \"Table 1 -- How did the baseline algorithms perform on this task?\", \"Fig 7 -- How did the baseline algorithms perform on this task? 
If the reward were sparse, shouldn't the Y axis be in the interval [0, 1]?\", \"\\\"using coverage only during training is not suitable\\\" -- Can you provide a citation/proof for this?\", \"\\\"As a consequence, the entropy of the distribution of these points is also maximized\\\" -- I believe that a finite number of points in a discrete space have measure zero, so they have zero entropy, regardless of the position of the points.\", \"Other comments\", \"\\\"What is the difference between *S* (in bold) and S_t?\\\"\", \"I would recommend using some notation other than p(s) to denote the smoothed/convolved density.\", \"\\\"history states\\\" -- I was confused about what this meant until it was introduced two sections later.\", \"\\\"assimilate the definition of curiosity in psychology\\\" -- I think that others (e.g., Oudeyer 2007, Pathak 2017) have noted the similarities between curiosity in humans and RL agents.\", \"Check for backwards quotes in the related work section.\", \"\\\"Self-Goal Proposing\\\" -- Some more related works are [Florensa 2017, Savinov 2018]\", \"\\\"space associated environment\\\" -- I don't know what this means.\", \"\\\"disc rewards\\\" -- I'd recommend spelling out discounted\", \"\\\"truncated Gaussian function with a narrow range\\\" -- Can you explain precisely what this is?\", \"In equation 2, I think it'd be clearer to write p^(1+\\alpha).\", \"For the experiment on the effect of variance, I'd recommend making a plot instead of just listing the values.\", \"In Section 4.3, it's unclear whether the physical robot was successful at solving the task.\", \"\\\"We rewrite the equation\\u2026\\\" -- This paragraph is repeated.\", \"Double check that \\citet and \\citep are used properly\", \"--------------UPDATE AFTER AUTHOR RESPONSE------------------\", \"Thanks for answering many of my questions. This was helpful for clarifying my understanding. However, since a large fraction of my concerns were not addressed, I am inclined to stick with my original vote to reject the paper. Nonetheless, I should emphasize that I think this paper is on the right track and the empirical results seem strong. With a bit more work on writing, I think it would be a fantastic paper at the next conference.\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new exploration algorithm by proposing a new way of generating intrinsic rewards. Specifically, the authors propose to maintain a \\\"novelty frontier\\\" which consists of states that have low likelihood under some likelihood model trained on their replay buffer. The authors propose to sample from the novelty frontier using a scheme similar to a prior method called Skew-Fit, but replace the VAE with a kernel-based density model. To construct an exploration reward, the authors estimate the KL divergence between the resulting policy state distribution and the desired state distribution, where the desired state distribution is a Gaussian centered around a point sampled from the novelty frontier.\\n\\nOverall, the paper tackles the important question of exploration, and while the concept of a frontier set is not novel, the authors propose a concrete instantiation that has promising results on continuous state spaces. 
I'm skeptical that this exact algorithm would work on domains with complex state spaces (e.g. images), where adding Gaussian noise to your state won't produce reasonable nearby states. That said, the general idea of fitting a new model to the latest trajectory and using KL as reward seems like a promising principle that could on its own scale. However, the theory seems a bit off and there are a few experimental details that make me hesitant to increase my score.\", \"in_details\": \"\", \"theory\": \"I found the proof surprisingly long given that it amounts to saying that if (1) S = Z + N and (2) Z and N are independent, then\\n H(S) >= H(N)\\nand so\\n H(S | Z) - H(Z | S) = H(S) - H(Z) >= H(N) - H(Z)\\nPerhaps more worrisome is the statement, \\\"we consider to maximize h(S|Z) - h(Z|S)\\\". Unless I misread the paper, the authors do not maximize this quantity. Instead, they *fix* this quantity by choosing a fixed entropy of N. Worse yet, this quantity is actually minimized since, while h(N) is fixed for the duration of the experiment, h(Z) is maximized (\\\"To increase h(Z), we need to add...\\\"). It would be good for the authors to address this concern, given that the claim of the paper is that they are \\\"maximizing the entropy of the visited states.\\\" It seems like a simple answer is the following: given that S = Z + N, if N is fixed to some Gaussian distribution, then the authors simply need to maximize H(Z), which they are already doing. I'm not sure why the authors need to reason about H(S | Z) - H(Z | S).\", \"experiments\": \"Can Table 1 be replaced with the learning curves? The numbers 90% success and standard deviation of 3% seem like arbitrary numbers. It doesn't preclude the possibility that (e.g.) Skew-Fit or RND receives a 99% success rate with a standard deviation of 3.1%. Figures 11 and 12 of the Appendix don't convince me that the threshold at 90% and 3% is a particularly good choice.\\n\\nCan the authors summarize the difference between coverage and entropy in the main paper? It seems like an important distinction. Given that the authors did not use all 8 pages, it would be good to explain it there rather than in the Appendix.\\nHow sensitive is the method to the hyperparameter alpha? How was it chosen? Is it the same alpha chosen for Skew-Fit?\\nHow was N chosen for the door environment?\\nIs Figure 7 (left) showing the performance on the simulated or real-world robot?\\nIf it was done on the real-world robot, were there any important details in getting sim-to-real-world to work?\\nIn Figure 5, why do there seem to be discrete jumps in the learning curves for \\\"DoorOpen Coverage\\\"?\", \"i_would_be_inclined_to_raise_my_score_if\": \"1. The authors clarify why studying the quantity H(S | Z) - H(Z | S) is particularly important.\\n2. Address the concerns raised over the experiments.\\n3. Discuss more explicitly under what assumption they expect for this method (with a Gaussian KDE) to work well\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose an exploration objective for solving long-horizon tasks with sparse rewards. 
While the paper is heavily influenced by the recent Skewfit work (Pong et al, 2019), it aims to solve a different problem (that of exploration for sparse reward RL), since Skewfit is interested in exploration and goal sampling for self-supervised (i.e. no external rewards) RL. The central idea in this paper is to encourage exploration by maintaining a distribution over the \\\"current frontier\\\", i.e. the set of points in the state space that are close to the \\\"edge\\\" of what is explored by the policy, sampling points around this frontier, and encouraging the policy to reach these sampled points. The paper compares to RND (a method that is representative of state-of-the-art in exploration bonuses, I believe) and also to Skewfit, and outperforms these methods in two non-standard environments: door opening and point-mass navigation.\\n\\nWhile I think the empirical contributions are solid, and the authors provide code to reproduce the results, I found the paper a bit hard to follow and understand, and different subsections in the technical section (Section 3) did not seem to have a unified narrative running through them. I will wait to see if other reviewers agree or disagree with me on this front, and if they do agree, then I think the paper will need substantial edits to improve its clarity before it is ready for publication.\"}" ] }
rkgIllBtwB
Exploring the Correlation between Likelihood of Flow-based Generative Models and Image Semantics
[ "Xin WANG", "SiuMing Yiu" ]
Among deep generative models, flow-based models, simply referred as \emph{flow}s in this paper, differ from other models in that they provide tractable likelihood. Besides being an evaluation metric of synthesized data, flows are supposed to be robust against out-of-distribution~(OoD) inputs since they do not discard any information of the inputs. However, it has been observed that flows trained on FashionMNIST assign higher likelihoods to OoD samples from MNIST. This counter-intuitive observation raises the concern about the robustness of flows' likelihood. In this paper, we explore the correlation between flows' likelihood and image semantics. We choose two typical flows as the target models: Glow, based on coupling transformations, and pixelCNN, based on autoregressive transformations. Our experiments reveal surprisingly weak correlation between flows' likelihoods and image semantics: the predictive likelihoods of flows can be heavily affected by trivial transformations that keep the image semantics unchanged, which we call semantic-invariant transformations~(SITs). We explore three SITs~(all small pixel-level modifications): image pixel translation, random noise perturbation, latent factors zeroing~(limited to flows using multi-scale architecture, e.g. Glow). These findings, though counter-intuitive, resonate with the fact that the predictive likelihood of a flow is the joint probability of all the image pixels. So flows' likelihoods, modeling on pixel-level intensities, is not able to indicate the existence likelihood of the high-level image semantics. We call for attention that it may be \emph{abuse} if we use the predictive likelihoods of flows for OoD samples detection.
[ "flow-based generative models", "out-of-distribution samples detection", "likelihood robustness" ]
Reject
https://openreview.net/pdf?id=rkgIllBtwB
https://openreview.net/forum?id=rkgIllBtwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "1PhjIJC0KT", "SkgdyLFhjr", "rJee5Bdnor", "BkgVYpD3or", "rylqlowniH", "H1g_p5VriH", "HkekeBfNsB", "Skl8OrlNjr", "B1eJNHl4iS", "BygO8NNRFH", "Syx5mzW0YS", "r1goAJZSKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740503, 1573848544121, 1573844359644, 1573842300113, 1573841650182, 1573370559536, 1573295335494, 1573287278455, 1573287207413, 1571861583976, 1571848737991, 1571258322839 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2099/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2099/Authors" ], [ "ICLR.cc/2020/Conference/Paper2099/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2099/Authors" ], [ "ICLR.cc/2020/Conference/Paper2099/Authors" ], [ "ICLR.cc/2020/Conference/Paper2099/Authors" ], [ "ICLR.cc/2020/Conference/Paper2099/Authors" ], [ "ICLR.cc/2020/Conference/Paper2099/Authors" ], [ "ICLR.cc/2020/Conference/Paper2099/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2099/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2099/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper discusses the (lack of) correlation between the image semantics and the likelihood assigned by flow-based models, and implications for out-of-distribution (OOD) detection.\", \"the_reviewers_raised_several_important_questions\": \"1) precise definition of OOD: definition of semantics vs typicality (cf. definition in Nalisnick et al. 2019 pointed by R1)\\nThere was a nice discussion between authors and the reviewers. At a high level, there was some agreement in the end, but lack of precise definition may cause confusion. I think adding a precise definition will add more clarity and improve the paper.\\n\\n2) novelty: similar observations have been made in earlier papers cf. Nalisnick et al. 2018. R3 also pointed a recent paper by Ren et al. 2019 which showed that likelihood can be dominated by background pixels. Older work has shown that the likelihood and sample quality are not necessarily correlated. The reviewers appreciate that this paper provides additional evidence, but weren't convinced that the new observations in this paper qualified for a full paper.\\n\\n3) experiments on more datasets\\n\\nOverall, while this paper explores an interesting direction, it's not ready for publication as is. I encourage the authors to revise the paper based on the feedback and submit to a different venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response.\\n\\nI'm glad we agree. I think this was a fruitful conversation overall, and I hope it will improve the paper going forward.\\n\\nTo me it seems that by out-of-distribution you mean something like \\\"the set of all images x for which P(y=dog | x) < epsilon\\\". But in any case, it's good to be precise, mathematically or otherwise.\\n\\nI don't have any other comments. Thank you again for the discussion.\"}", "{\"title\": \"Responses\", \"comment\": \"Thank you for your comments.\\n\\nThough you will maintain your score, I am still glad that all our discussions get much clear. I also basically agree with all the points in your responses. With only one exception about the precise mathematical definition of \\\"out-of-distribution\\\". 
\\n\\nBy \\\"out-of-distribution\\\", I insist on that it is rooted in humans' perception about the high-level information e.g. semantics, which also fits the requirements of downstream applications. Also the typical classification task in ML-community aims also for high-level semantics recognition. So this almost make it impossible to give a precise mathematical definition. That is also why I think out-of-distribution $\\\\neq$ typicality you introduced.\\n\\nI agree mathematical rigor is preferable. But I think it's hard for this. Do you have any further comments?\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response.\\n\\nIf I understand correctly, your main point is that the joint probability of pixels is not an appropriate quantity to use for certain downstream tasks. The reasons being: (a) it is not invariant to semantic-preserving transformations, and (b) it is not suitable for out-of-distribution detection.\\n\\nI completely agree with the above point. In fact, my review makes exactly the same point: issue #1 explains why (a) is the case, and issue #2 explains why (b) is the case.\\n\\nIn addition, my review explains that (a) and (b) above are in fact properties of the true data distribution, and not properties of a particular model (flow or otherwise). A model that approximates the data distribution will naturally inherit properties (a) and (b). I think this is an important point that I don't think the paper makes clear. Instead, the paper focuses on flow-based models, and gives the impression that the problem lies with the model, whereas in reality the problem is that the data distribution itself doesn't have the properties that you desire (what you refer to as \\\"type II\\\").\\n\\nIn the following, I'd like to clarify a few more points:\\n\\nThe term \\\"likelihood\\\" has a very specific meaning in machine learning and statistics: it is the probability of the observed data as a function of the model parameters. See for example:\\n- The Wikipedia article: https://en.wikipedia.org/wiki/Likelihood_function\\n- Page 29 of MacKay's book: https://www.inference.org.uk/itprnn/book.pdf\\nTherefore, it makes sense to talk about \\\"the likelihood of a model\\\" but it doesn't make sense to talk about \\\"the likelihood of an image\\\". Moreover, defining the likelihood to have potentially different meanings (like you did in your response above) may confuse some readers and make it hard for them to understand your point. If you'd like to discuss models that assign various types of \\\"scores\\\" to an image other than the joint probability of its pixels, then I suggest that you define precisely what you mean, and be careful with the terminology used.\\n\\nIn \\\"issue #2\\\", I explained how atypical examples can have higher probability than typical examples. This is an established mathematical fact, and is not up for debate. 
The concept of a \\\"typical set\\\" has a precise mathematical definition, see for example:\\n- Section 4.4 of MacKay's book: https://www.inference.org.uk/itprnn/book.pdf\\n- Section 2 of Nalisnick et al.'s paper: https://arxiv.org/pdf/1906.02994.pdf\\nIf you'd like to see in full mathematical detail why the mean of a high-dimensional Gaussian is not in the typical set even though it has the highest density, please work through Exercise 6.14 in MacKay's book.\\n\\nThe reason I brought up typicality in the \\\"out-of-distribution\\\" discussion is that because I believe that typicality is a suitable formalization of the concept of \\\"out-of-distribution\\\". I think there is a growing realization in the machine-learning community that the two are closely related, see for example Nalisnick et al.'s paper https://arxiv.org/pdf/1906.02994.pdf on this exact subject.\\n\\nIf you believe that out-of-distribution and typicality are not related, then that's fine. Unlike typicality, \\\"out-of-distribution\\\" doesn't have a precise meaning, and is therefore up for interpretation. However, if you want to make precise statements about the notion of \\\"out-of-distribution\\\", I strongly suggest that you give it a precise mathematical meaning first. Talking about dogs and cats can be helpful and intuitive, but it doesn't formalize the concept.\\n\\nAt the end of your response, you suggest modelling images in an alternative space, e.g. a higher-level representation instead of pixels. I think that's a good idea: I encourage you to pursue this direction, and I welcome more papers on this topic. As I said earlier, if the conclusion of the paper is that for many downstream tasks the joint probability of pixels isn't appropriate, then I agree.\\n\\nIn conclusion, I think that the paper as is currently written is not ready for publication. The conclusion that the joint probability of pixels is not invariant to semantic-preserving transformations should be fairly obvious to people familiar with probabilistic modelling, and as reviewer 3 also pointed out, is not enough for a full paper. Moreover, there is serious misuse of terminology that can easily mislead and confuse readers (as evidenced by this entire discussion). Finally, the paper makes statements about imprecise notions (such as the concept of out-of-distribution) that don't hold up to scrutiny. In my opinion, a paper that warns about the misuse of likelihood-based models should be careful and precise when dealing with nuanced notions of probability theory, otherwise it may do more harm than good, especially for inexperienced readers. For these reasons, I will maintain my score, but I will also encourage the authors to reflect carefully on our discussion and take it into account when revising the paper in the future.\"}", "{\"title\": \"Revision uploaded based on all reviews, please check\", \"comment\": \"Thank you all reviewers for your time. I really appreciate your efforts.\", \"to_reviewer_1\": \"Though some statements in your review looks a little bit hash, I find it is very helpful for me get a different picture of this topic, and make a clarification. Please do check my responses.\\n\\nThank you for pointing out some inaccurate statements in this paper. 
\\n\\nTo reviewer 2&3:\\n\\nDiscussions with related works you proposed are added in Section 5 Discussions and Conclusions.\", \"to_all\": \"Some minor issues are updated accordingly, please check the revision.\"}", "{\"title\": \"Responses\", \"comment\": \"We are grateful for your comments.\", \"i_think_what_you_point_out_boils_down_to_the_fundamental_problem\": \"how to evaluate generative models (GAN, VAE, and Flows, etc), which is still an open problem.\\n\\nHere, we focus on Flow-based generative models since they provide exact likelihood evaluation of samples. Let's do some quick Q&A:\\n\\n(1) Is likelihood a good measure for evaluation of generative models? \\n\\nThe answer is NO.\\n\\nThe first thing we care about generative models is the quality of the generated samples. We may implicitly think that good likelihood implies good generation quality, which theoretically is simply not True. In practice maximizing the likelihood is always a way, but hardly the best way, to improve the samples' quality. (Maximizing the data likelihood is equivalent to minimizing the KL-divergence between real data distribution $p_{data}$ and our model distribution $p_{model}$. KL is not a good measure, but is computationally possible or affordable. [1] also discusses the differences of optimizing different measures, e.g. MMD, JSD.)\\n\\n[1] has made a very clear argument about this by theoretical analyses, (we also mentioned and discussed in this paper):\\n\\n\\\"Good likelihood is neither necessary nor sufficient to good generation quality (i.e. plausibility of samples)\\\"\\n\\nThe observations in this paper provide more experimental evidences for this simple but important argument.\\n\\nThen get back to Glow. There is no doubt that simple interpolations in the latent spaces of Glow still give impressive (semanticaly meaningful) high-resolution images. But it is not clear how the quatitative likelihood values of thses interpolations vary, up or down? Following the procedures proposed in this paper, we may still get high-quality images with much lower likelihoods (pixel-shifts or adding small noises) or much higher likelihoods (zeroing preceding latents).\\n\\n(2) Is there an universal measure for evaluation of generative models? (Or should we aim to find the universal one?). \\n\\nThe answer is probably not. \\n\\nEvaluation of generative models depends heavily on different uses of them. This is also pointed out by [1] and also mentioned in Ian Goodfellow's Deep Learning Book [2] (section 20.14, page 717-719). \\n\\nA typical example is exactly Glow. If we only care about quality of generated images, it seems there is nothing wrong about Glow. However when it comes to deploy Glow for OOD detection, we see counter-intuitive behaviours. If we only focus on the quantitative likelihood values of flows without asking or exploring the messages behind, we may get in trouble when they are deployed on downstream tasks.\\n\\n\\nWe will include discussion of Nalisnick\\u2019s paper in our revisions.\\n\\n[1] [A note on the evaluation of generative models](https://arxiv.org/pdf/1511.01844.pdf) by Theis, Lucas and Oord, Aaron van den and Bethge, Matthias.\\n\\n[2] Deep Learning, Ian Goodfellow and Yoshua Bengio and Aaron Courville.\"}", "{\"title\": \"Responses\", \"comment\": \"We are grateful for your comments.\\n\\n\\nThis paper is not to reintroduce the observations already made in [1] [2]. 
"{\"title\": \"Responses\", \"comment\": \"We are grateful for your comments.\n\n\nThis paper does not aim to reintroduce the observations already made in [1] [2]. Our work is based on their observations, and I agree that we should include more discussion of this paper in relation to [1] [2]. \n\nNor is this paper meant to show some \"surprising\" observations we made about flows. You point out: (1) that the observation that flows' likelihoods can be influenced by added noise or pixel shifts is unsurprising, with which I completely agree, and I also think that trivial observations like these should not be accepted by ICLR; (2) that the observations made about the multi-scale architecture of Glow are interesting but not enough for a full paper, with which I also agree. \n\nHowever, this paper is: (1) to demonstrate that what are normal and unsurprising observations for flows can be problematic when applied to downstream tasks like OOD detection; (2) to call attention to the gap between what the likelihoods of flows actually are and what we expect likelihood-based models to be. Our experiments and analyses suggest that we should not restrict the likelihood of an image to the pixel-level definition we have now, i.e. the joint probability of all pixels. We should explore more robust likelihood-based models, i.e. ones that fit humans' intuitions and, e.g., naturally assign low likelihoods to out-of-distribution samples. For example, we may model the likelihood of an image on its high-level representations, so that the likelihoods could have higher correlations with the high-level information of the image, e.g. image semantics. (See also our responses to reviewer 1 for more details.) \n\nResearch on flow-based models is a rapidly evolving field. But most works focus on the design of the bijective transformation layers to achieve lower bits-per-dim (BPDs) or, equivalently, higher likelihoods on standard datasets. Much less attention has been paid to exploring the behaviours and properties of the reported likelihoods of flows, as well as their applicability to downstream tasks like OOD detection. So you may have underestimated the messages we want to send to the community.\", \"other_questions_and_concerns\": \"1. We introduce \"semantic-invariant transformation\" deliberately to distinguish the transformations used in this paper from the concept of \"data augmentation\". \"Data augmentation\", from our perspective, aims to improve the generality of classifiers at inference, which is quite different from what we do here. We don't do this to earn inappropriate credit. \n\n2, 3, 4: We will revise.\n\n5. We will add details of the discriminative classifiers, and improve.\n\n6. We'd like to do that if we have enough time. \n\n7. We will add more discussions. \n\n\n\n[1] Nalisnick, Eric, et al. \"Do deep generative models know what they don't know?.\" arXiv preprint arXiv:1810.09136 (2018).\n\n[2] Ren, Jie, et al. \"Likelihood Ratios for Out-of-Distribution Detection.\" arXiv preprint arXiv:1906.02845 (2019).\"}", "{\"title\": \"Responses (part 2/2)\", \"comment\": \"Further issues:\n\n\"Flows can roughly be divided into two categories [coupling flows and autoregressive flows]\"\n\n(1) This paper only focuses on flow-based generative models (their likelihood behaviours), not general flow models. (2) We divide the flows based on the granularity, i.e. coupling (2 parts) or autoregressive (# pixel parts). We will update and clarify.\n\n\"We find that the semantic object of a test image depends heavily on the last factored latent zL, rather than the preceding factors\"\n\nThis is exactly supported by the examples provided. 
The image semantics remain unchanged, even if we zero the preceding factors, as long as we keep the last factored latent unchanged.\n\n\"PixelCNN is more sensitive to the noises, because its pixel-wise modeling quickly augments and propagates the influences of the added noise\"\nThis is not speculation. PixelCNN is pixel-wise autoregressive (fine-grained), while Glow is based on coupling layers (coarse-grained).\n\n\"In terms of image generation, we expect that every single generated pixel in an image is the most likely one\"\n\nWe mean here that every single likely generated pixel hinges on the contextual pixels, i.e. the joint distribution over pixels. \n\nWe will revise for clarification on these issues.\", \"final\": \"You said \"(a) It begins with flawed assumptions about how a likelihood-based model is expected to behave.\", which is not true. Do you think it is a flawed assumption that a generative model $g_{dog}$ should assign higher likelihood to a dog image $x_{dog}$ than to a cat image $x_{cat}$?\nFlows define likelihood as the joint probability of all image pixels. However, is this the only way that the \"likelihood\" of an image should be defined, considering flows' counter-intuitive behaviours? A simple alternative way is to model the image likelihood on its high-level representations. For example, [3] models class conditionals on the logits of a discriminative classifier, i.e. conditional likelihood, and performs very well on OOD detection. Another similar example is [4].\n\nAll in all, this paper aims to show the gap between flows and the likelihood-based models we expect for OoD detection. We call attention to the need for the community to rethink what exactly the \\emph{likelihood} of an image is, beyond the joint probability of pixel-level intensities.\n\n[3] Lee, Kimin, et al. \"A simple unified framework for detecting out-of-distribution samples and adversarial attacks\", NIPS 2018\n\n[4] Nilesh A. Ahuja, et al. \"Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection\", https://arxiv.org/abs/1909.11786\"}", "{\"title\": \"Responses (part 1/2) [You made a factual mistake about out-of-distribution, which is critical]\", \"comment\": \"We are grateful for your detailed and instructive reviews.\n\nBefore we respond to the issues you raised, I have to (informally) define two types of likelihood-based models:\", \"type_i\": \"Exactly the current flow-based models (e.g. PixelCNN, Glow or many other variants), modeling precisely the likelihood of an image $x$ as the joint probability of all the pixels of $x$.\", \"type_ii\": \"Models whose likelihoods are robust and fit humans' intuitions, e.g. naturally assigning lower likelihoods to out-of-distribution samples (this is critical for potential likelihood-based models to be applicable to downstream tasks like OOD detection; this is also something publicly recognised, see the abstracts of [1] [2]).\n\n Your statement that our paper is \"full of flaws, incorrect statements, ... \" is probably (we think) because you limit the likelihood-based models to Type I models, and implicitly rule out any possibility of the existence of Type II models (examples of Type II are provided at the bottom).\", \"issue_1\": \"You say that the fact that our transformations (pixel shifts, added Gaussian noise) lower the likelihoods of samples is actually expected. I completely agree. 
But note that this applies only to Type I models.\n\nThis paper is NOT meant to show some surprising (actually expected) observations about Type I models, BUT to show the gap between Type I (what we got) and Type II models (what we expect) in terms of OOD detection.\", \"issue_2\": \"The typicality you introduced and OoD are \\emph{unrelated} concepts. Thus the two examples you proposed are irrelevant and misleading. \n\nA clear fact about in-distribution and OoD data is that they are perceptually two \\emph{different} distributions. Typical examples, proposed in [1] [2], are in-out distribution pairs: FashionMNIST (in)-MNIST (out), CIFAR10 (in)-SVHN (out). \n\nIn our examples, typical or atypical outcomes are all discussed within the SAME proposed distribution. So there are no OoD samples at all. The atypical samples have nothing to do with OoD samples.\n\nAs a simple example, for a generative likelihood model $g_{dog}$ trained only on dog images, a cat image $x_{cat}$ is an OoD sample (thus there is no way $g_{dog}$ is supposed to assign high likelihood to $x_{cat}$).\n\nA mistake in your \"Gaussian variables\" example: the all-zero point is the highest point of the pdf of the resulting Gaussian, so there is no way it is OoD. If you are talking about the probability, all-zero is a single point of a continuous random variable, whose probability is exactly zero, as is that of any other point. This explains the atypicality you raised.\", \"please_reconsider_this_statement\": \"\"Fundamentally, I think the issue is that the paper incorrectly assumes that all images with the same semantics (e.g. all images of the digit 3) must be in-distribution. However this is not necessarily true.\" This is very true in terms of OOD detection.\n\nAnd we show that semantic-invariant transformations (SITs) are inherently different from adversarial examples (AEs) in that: (1) AEs are specially crafted, while SITs are not; (2) AEs can reduce classifiers' accuracies to ~0%, while SITs hardly do so.\n\n\n[1] Nalisnick, Eric, et al. \"Do deep generative models know what they don't know?.\" arXiv preprint arXiv:1810.09136 (2018).\n\n[2] Ren, Jie, et al. \"Likelihood Ratios for Out-of-Distribution Detection.\" arXiv preprint arXiv:1906.02845 (2019).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies the correlation between the likelihood of flow-based generative models and image semantic information, and shows that even small perturbations, like a few pixel translations or noise applied to the background, significantly affect models' likelihoods, which signals that these likelihood models cannot be used for out-of-distribution data detection. However, very similar observations were made in prior works [1] and [2]. In particular, the paper [2] showed that the likelihood of PixelCNN is dominated by background pixels, which makes the observations in section 4.2 (applying noise to background) unsurprising. The sensitivity of the Glow model to even 1-2 pixel translations (section 4.1) and exploiting the multi-scale structure of Glow (zeroing latent variables in section 4.3) are interesting, but, I believe, not enough for a full paper. Thus, due to the limited novelty, I recommend a weak reject.\", \"other_questions_and_concerns\": \"1. The authors claim to introduce \"semantic-invariant transformation\". 
I believe this can be called \"data augmentation\", so why introduce a new term?\n2. The last bullet point in the introduction is not clearly written.\n3. Equation 1: the variable u wasn't introduced. Paragraph after equation 4: please fix the comma.\n4. The clarity of figure / table captions can be improved, as well as their references in the main text.\n5. Section 4.4 is confusing. Which discriminative classifiers are considered? How are they trained? Table 1 is not referenced in the main text, and the results are not explained or discussed. \n6. The experiments are only performed on the MNIST / FashionMNIST datasets. It would help to see experiments on other datasets, e.g. CIFAR-10, SVHN.\n7. The related work section can be elaborated: please discuss how the observations made in the paper are different from / consistent with [1] and [2].\n\n\n[1] Nalisnick, Eric, et al. \"Do deep generative models know what they don't know?.\" arXiv preprint arXiv:1810.09136 (2018).\n[2] Ren, Jie, et al. \"Likelihood Ratios for Out-of-Distribution Detection.\" arXiv preprint arXiv:1906.02845 (2019).\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper raises a problem with the robustness of the (log) likelihood computed by invertible flows. The authors show that the changes in the likelihood of an image computed by flow-based image generative models have surprisingly weak correlations with semantic changes of the image. The flow likelihoods are sensitive to very small changes of pixels that do not affect the semantics of an image. And the likelihoods are less robust against out-of-distribution inputs, where we expect strong robustness compared to discriminative models.\nThese claims are validated and supported by several simple experimental results.\", \"this_is_an_interesting_paper\": \"it warns against the abuse of likelihoods computed by flow models with several numerical experiments, which are simple but clearly designed to support the claims.\nThese experimental results convince me that the likelihoods of flow-based image generative models are joint distributions of pixel intensities, and it is natural that such likelihoods are apt to be sensitive to pixel-intensity changes, even if they are semantically meaningless. \n\nDesigns of experiments are apparently similar to those of (Nalisnick+, 2018) at first glance. I think there is room to improve the manuscript to clarify the difference from Nalisnick+\u2019s work. My understanding is that (Nalisnick+, 2018) is interested in the OOD likelihood behaviors when datasets are swapped, and explains the behaviors of OOD likelihood based on the variance and the curvature of the dataset. This paper directly manipulates the pixel intensities by amounts so small that the statistics of the images would not change. \nBTW, Nalisnick\u2019s paper was accepted and published at ICLR 2019. \n\nIt is a well-known fact that the flow (Glow) models can generate natural and high-resolution images by interpolating ``latent hidden vectors\u2019\u2019. This indicates the latent representations are robust against perturbations while pixel intensities are not. So, when does this transition of robustness occur? This seems like an interesting problem to me. I\u2019m happy to hear the authors\u2019 opinions about this issue. 
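As a toy probe of this question, one can track the exact log-likelihood along a latent interpolation path; a sketch with a linear "flow" x = Az (purely illustrative assumptions, not Glow):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))          # toy invertible flow: x = A z
_, logdet = np.linalg.slogdet(A)         # log|det A| for change of variables

def log_px(x):
    # log p(x) = log N(A^{-1} x; 0, I) - log|det A|
    z = np.linalg.solve(A, x)
    return -0.5 * z @ z - 0.5 * d * np.log(2 * np.pi) - logdet

z0, z1 = rng.standard_normal(d), rng.standard_normal(d)
for t in np.linspace(0.0, 1.0, 5):
    x_t = A @ ((1 - t) * z0 + t * z1)    # linear interpolation in latent space
    print(f"t={t:.2f}  log p(x_t) = {log_px(x_t):.3f}")
```

For typical endpoints, the midpoint has a smaller latent norm and therefore higher density, which is one simple reason latent interpolations tend to look plausible; where exactly the robustness transition happens for deep flows remains the open question raised above.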
\\n \\nSummary\\n+ Good research question concerning the robustness of flow models\\n+ Simple and understandable claims supported by simple experiments\\n+ Easy to read\\n- Could be improved to clarify the difference from the previous work that studies flow models\\u2019 likelihoods.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThe paper studies likelihood-based models of images, such as Glow and PixelCNN. The paper shows empirically that image transformations that preserve semantics (e.g. translations by a few pixels) produce images that have lower probability (density) under such models.\", \"decision\": \"The paper is studying an important topic, which is how to use likelihood-based models correctly, and it warns against the misuse of such models. I agree with the paper's conclusion that we should be careful when using likelihood-based models.\\n\\nNonetheless, in my opinion the paper is a clear reject. The paper is full of flaws, incorrect statements, poorly constructed arguments, speculative explanations, and superficial descriptions of previous work.\\n\\nBroadly, the main issue with the paper is the following:\\n(a) It begins with flawed assumptions about how a likelihood-based model is expected to behave.\\n(b) It tests two likelihood-based models experimentally and finds that they don't behave according to the assumptions.\\n(c) It concludes that we need to be cautious when using likelihood-based models.\\n\\nAs I said, I agree that we should be careful when using likelihood-based models, but I worry that the way the paper reaches this conclusion can mislead and misinform readers. In what follows, I elaborate on specific issues with the paper in more detail.\\n\\nIssue #1:\\n\\nThe main flaw of the paper is the assumption that semantic-preserving transformations shouldn't reduce the likelihood of the model (beginning of section 4). This is incorrect. To see why, consider a semantic-preserving transformation x' = T(x). As defined in the paper, a semantic-preserving transformation is one that doesn't change the label y of an image x. This can be formalized as:\\n\\np(y | x') = p(y | x)\\n\\nBy Bayes' rule, from the above it follows that:\\n\\np(x' | y) p(y) / p(x') = p(x | y) p(y) / p(x) \\n=> p(x' | y) / p(x') = p(x | y) / p(x)\\n\\nClearly p(x') can be different from p(x), as long as p(x' | y) is different from p(x | y) by the same factor. Hence, it doesn't follow that if p(y | x') = p(y | x) then p(x') = p(x), which is what the paper incorrectly assumes. To be clear, in the above expressions, p() refers to the true data distribution and not to a model that approximates it.\\n\\nFor example, consider images of digits, where the digit is generally in the centre of the image. Moving a digit to the corner will result in a less likely image, because it's unlikely that digits appear in corners. However, it won't change the classification of that digit, since all digits are less likely to appear in corners in exactly the same way. In fact, this is exactly why we see in section 4.1 that the likelihood of the model decreases as the image is translated to the left; the model is behaving exactly as it is supposed to.\\n\\nSimilarly, in section 4.2 where noise is added to the image, the model is again behaving exactly as it is supposed to. 
Adding Gaussian noise to the image results in a distribution p(x') that is equal to convolving the original distribution p(x) with the noise distribution p(noise) which is an isotropic Gaussian. As a result, p(x') will be a more diffuse version of p(x), hence samples x' will have on average low probability (density) under p(x), exactly as expected, and exactly as the experiment observes.\\n\\nIssue #2:\\n\\nThe paper incorrectly assumes that out-of-distribution examples should have low probability (density). This is incorrect, and a common misconception that results from confusing high probability with typicality. In fact, out-of-distribution examples can have high probability (density). Here are two examples that illustrate that:\\n\\nSuppose you flip a bent coin a million times, with 10% probability of the coin coming up heads. The in-distribution samples (the typical set) are those sequences of coin tosses that have roughly 100 thousand heads. However, the most likely outcome is the all-tails sequence. This outcome is clearly atypical, and many people would agree that it's out-of-distribution, but it has the highest probability.\\n\\nConsider one million independent Gaussian variables, each with mean 0 and variance 1. Due to the law of large numbers, a typical draw of these variables will have average squared value very close to 1, hence the outcome of all variables being zero is very atypical and many people would agree it's out-of-distribution. However, the all-zero outcome is in fact the one with the highest probability density.\\n\\nGiven the above, the following two statements copied from the paper are flawed, and potentially misleading:\\n\\n\\\"The foundation of using likelihood-based models for OoD detection is that they are supposed to assign much lower likelihoods for OoD samples than in-distribution samples\\\"\\n\\\"In OoD detection, we assume that a sample with a higher likelihood indicates that it is more likely to be an in-distribution sample\\\"\\n\\nFundamentally, I think the issue is that the paper incorrectly assumes that all images with the same semantics (e.g. all images of the digit 3) must be in-distribution. However this is not necessarily true. For example, the true data-generating process of MNIST images (i.e. asking people to write down a digit, scanning it, denoising it, cropping it and centring it) is unlikely to produce images where the digit is not in the centre or the background is noisy. Hence, the images considered in sections 4.1 and 4.2 are indeed out-of-distribution with respect to the true data distribution of MNIST, and are not adversarial examples of the models as the paper suggests.\", \"further_issues\": \"The paper is ostensibly about flow models, but in fact very little is specific to flow models, and most of the discussion, where correct, applies to likelihood-based models in general. In fact, PixelCNN is not a flow model, even though the paper misleadingly describes it as such. PixelCNN can be used to model discrete random variables, whereas flow models are used for continuous random variables (flows for discrete random variables exist, but they are different from PixelCNN). That said, if PixelCNN is used to model continuous random variables then by reparameterization it can be viewed as a flow model with one layer, but that would be an unusual way to present it. Same for WaveNet.\\n\\n\\\"It is also believed that flows can be used to detect out-of-distribution(OoD) samples by assigning low likelihoods on them.\\\"\\nBelieved by whom? 
A citation is needed here.\\n\\n\\\"Flows can roughly be divided into two categories [coupling flows and autoregressive flows]\\\"\\nThere are several flows that fall in neither of these categories, such as linear flows, residual flows, planar flows, radial flows, Sylvester flows, neural ODEs, FFJORD, and many others.\\n\\n\\\"The autoregressive property of an autoregressive layer is enforced by masking.\\\"\\nThere are other ways of enforcing the autoregressive property (e.g. RNNs); masking is just one of them.\\n\\nEq. (5) is not correct in general. The bits per dimension should be approximated by:\\n\\nBPD = (NLL - log|B|) / ((h x w x c) * log2)\\n\\nwhere |B| is the quantization volume, provided |B| is small. That is, if each pixel with range [0, 1] is quantized into 10 bins, then |B| = 0.1 ^ (h x w x c).\\n\\n\\\"This surprising difference can be attributed to the difference of their architectures\\\" (beginning of page 5)\\nThe explanation that follows is speculative, but is not presented as such, which can be misleading.\\n\\n\\\"We may reasonably suspect that flows\\u2019 counter-intuitive likelihood assignment is dominated by the inherent differences of pixel-level statistics associated to the image semantics\\\"\\nThis is also speculation.\\n\\n\\\"PixelCNN is more sensitive to the noises, because its pixel-wise modeling quickly augment and propagate the influences of the added noise\\\"\\nAlso speculation.\\n\\n\\\"We find that the semantic object of a test image depends heavily on the last factored latent zL, rather than the preceding factors\\\"\\nAs far as I can see, there isn't evidence in support of that statement in the paper.\\n\\n\\\"Considering the weak correlation between flows\\u2019 likelihoods and image semantics, it is inappropriate to use them for OoD samples detection\\\"\\nGiven the flawed assumption about the role of image semantics, I don't think there is evidence for that.\\n\\n\\\"In terms of image generation, we expect that every single generated pixel in a image is the most likely one\\\"\\nThis is inaccurate; when generating images from a model, we don't get the most likely pixels, but samples from the joint distribution over pixels.\"}" ] }
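A minimal numerical sketch of two technical points from Official Blind Review #1 above: the gap between the highest-density point and a typical draw of many iid Gaussians, and the corrected bits-per-dimension formula with an explicit quantization volume. The dimension, random seed, bin count, and all names below are illustrative assumptions, not anything taken from the submission under review.

```python
import numpy as np

# Part 1: highest density vs. typicality for d iid N(0,1) variables.
d = 1_000_000
rng = np.random.default_rng(0)

def log_density(x, d):
    # Log density of d iid standard normals evaluated at the point x.
    return -0.5 * np.sum(x ** 2) - 0.5 * d * np.log(2.0 * np.pi)

typical_draw = rng.standard_normal(d)   # what sampling actually produces
mode = np.zeros(d)                      # the single most likely point

print(log_density(mode, d))             # about -9.19e5: the maximum possible value
print(log_density(typical_draw, d))     # about -1.42e6: lower by roughly d/2
print(np.mean(typical_draw ** 2))       # about 1.0: typical draws lie near ||x||^2 = d

# Part 2: bits per dimension with an explicit quantization volume |B|,
# following the reviewer's formula BPD = (NLL - log|B|) / ((h*w*c) * log 2).
def bits_per_dim(nll_nats, h, w, c, bins_per_pixel=10):
    dims = h * w * c
    log_vol = dims * np.log(1.0 / bins_per_pixel)   # log|B| for pixels in [0, 1]
    return (nll_nats - log_vol) / (dims * np.log(2.0))
```

The roughly d/2-nat gap between the mode and a typical draw is why a point can carry the highest density yet never be produced by sampling: the distinction at the heart of the typicality exchange above.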
r1xHxgrKwr
Anomaly Detection Based on Unsupervised Disentangled Representation Learning in Combination with Manifold Learning
[ "Xiaoyan Li", "Iluju Kiringa", "Tet Yeap", "Xiaodan Zhu", "Yifeng Li" ]
Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems. In this paper, we present a novel deep anomaly detection framework named AnoDM (standing for Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning). The disentanglement learning is currently implemented by beta-VAE for automatically discovering interpretable factorized latent representations in a completely unsupervised manner. The manifold learning is realized by t-SNE for projecting the latent representations to a 2D map. We define a new anomaly score function by combining beta-VAE's reconstruction error in the raw feature space and local density estimation in the t-SNE space. AnoDM was evaluated on both image and time-series data and achieved better results than models that use just one of the two measures, as well as other deep learning methods.
[ "anomaly detection", "disentangled representation learning", "manifold learning" ]
Reject
https://openreview.net/pdf?id=r1xHxgrKwr
https://openreview.net/forum?id=r1xHxgrKwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "LTYVxGqw2O", "Hke85R3jsB", "rJgerS3ooH", "ryxGQXnoiB", "Skxsb0gioB", "r1xgoEtysH", "rJenPMF1jB", "Hkxee0dysS", "r1xQXHqitr", "HkeKsEc7tr", "BygqC2QXKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740474, 1573797518094, 1573795128332, 1573794585900, 1573748226884, 1572996248509, 1572995684290, 1572994536430, 1571689754692, 1571165344965, 1571138770157 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2098/Authors" ], [ "ICLR.cc/2020/Conference/Paper2098/Authors" ], [ "ICLR.cc/2020/Conference/Paper2098/Authors" ], [ "ICLR.cc/2020/Conference/Paper2098/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2098/Authors" ], [ "ICLR.cc/2020/Conference/Paper2098/Authors" ], [ "ICLR.cc/2020/Conference/Paper2098/Authors" ], [ "ICLR.cc/2020/Conference/Paper2098/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2098/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2098/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper presents AnoDM (Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning) that combine beta-VAE and t-SNE for anomaly detection. Experiment results on both image and time series data are shown to demonstrate the effectiveness of the proposed solution.\\n\\nThe paper aims to attack a challenging problem. The proposed solution is reasonable. The authors did a job at addressing some of the concerns raised in the reviews. However, two major concerns remain: (1) the novelty in the proposed model (a combination of two existing models) is not clear; (2) the experiment results are not fully convincing. While theoretical analysis is not a must for all models, it would be useful to conduct thorough experiments to fully understand how the model works, which is missing in the current version. \\n\\nGiven the two reasons above, the paper did not attract enough enthusiasm from the reviewers during the discussion. We hope the reviews can help improve the paper for a better publication in the future.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Revision submitted\", \"comment\": \"We have uploaded a better version to address all reviews' comments.\"}", "{\"title\": \"Regarding value of alpha\", \"comment\": \"We have uploaded a revised paper with major clarifications and changes highlighted in red colour. Since large beta value would increase the overlap of clusters in the latent space, our experimental results show that optimal value of beta is small, in interval (0,1], which leads to good separations of clusters in latent space and stable model learning, while not strong enough to push the inference distribution too close to N(0,I).\\n\\nRegarding value of alpha, we added Appendix F in the new version. We replaced the distance score in the combined anomaly score function with a distance score normalised by average distances among training samples in t-SNE map, and found that the optimal alpha value became much smaller. It implies that t-SNE is essentially needed in AnoDM. Please see Appendix F and Figure 8 for details.\"}", "{\"title\": \"Heatmaps in Figure 2; Analysis for small beta values\", \"comment\": \"Thank you for considering our feedback. Now we just uploaded a revision where major changes are highlighted by red colour. 
We now use 2D heatmaps in Figure 2 (thanks for this suggestion!), which better show that small beta values in (0,1] give optimal results (note the upper right area of each heatmap). This can be explained by the fact that, according to (Mathieu et al. 2019), a larger beta value increases the overlap of clusters in the latent space. Thus small beta values encourage separation of clusters, but beta=0 could make model learning highly unstable because it loses control over the variance of the latent variables.\\n\\nWe also replaced the distance score in the combined anomaly score function with a distance score normalised by the average distance among training samples in the t-SNE map, and found that the optimal alpha value became much smaller. This implies that t-SNE is essential in AnoDM. Please see Appendix F and Figure 8 for details.\\n\\nWe hope the revised paper convinces you further.\"}", "{\"title\": \"Post-Reviews Update\", \"comment\": [\"Thank you for clarifying my concerns.\", \"I understand that the problem setting is unsupervised.\", \"A simple combination is fine if its effectiveness is convincing and carefully evaluated, but currently this is not the case due to the lack of theoretical analysis.\", \"I also understand that \\\\alpha = 1 is not optimal. I strongly recommend improving the visualization in Figure 2. I think 2D plots are easier to read.\", \"I see that \\\\beta = 0 is not optimal, while it is still true that quite small \\\\beta values give the optimal results. More careful evaluation is required.\", \"I increase my score to weak reject, though I still think the paper is below the acceptance threshold.\"]}", "{\"title\": \"Right extent of disentanglement is essential; ablation studies show t-SNE is critical\", \"comment\": \"Thanks for the questions regarding disentanglement and t-SNE. We address them below; hopefully this will convince you to raise your rating.\\n\\nThe right extent of disentanglement is essential in beta-VAE for anomaly detection. If the value of beta is not inappropriately large, the divergence between the inference distribution and the prior N(0,I) is always greater than 0. That\\u2019s why we searched over the value of beta and, interestingly, found that the best performance was surprisingly (because existing work focused on beta>1) obtained when beta<1; that is, the push of the inference distribution towards N(0,I) is actually weaker than in the vanilla VAE. A possible future work is to follow Mathieu et al. (2019), which redefined disentanglement as decomposition rather than independence, through a structured prior. This may be practically challenging, but very interesting.\\n\\nAs in the discussion with Reviewer #3, in Figure 2 we can see the best results are reached before alpha=1. First, we clarify that this does not mean that the reconstruction of beta-VAE plays a dominant role and t-SNE is useless; the main reason is that the normalised reconstruction error in input space and the distance in the t-SNE map have very different magnitudes (as mentioned in Section 3.5). This can be evidenced from the scales of all t-SNE plots in the paper. Second, from Table 1, we can see that the performances of AnoDM without t-SNE (alpha=1) are worse than those with t-SNE (alpha<1). Third, it is very important to mention that, from Table 2 (appendix), 37 out of 40 optimal results were obtained when alpha<1. The three cases where alpha=1 only occurred on the Small-Norb data, where convolutional generative models could not learn good latent representations. 
In summary, t-SNE does play an essential role in AnoDM. Having an identical number of dimensions in the latent space and the t-SNE map is an interesting idea. Unfortunately, t-SNE can only map to 2 or 3 dimensions, and if we reduced the latent dimensionality to 2 or 3, it would certainly hurt reconstruction quality in input space.\"}", "{\"title\": \"AnoDM is unsupervised; the two components mutually complement; best results were obtained with alpha<1 and beta>0\", \"comment\": \"Thank you for taking the time to review our paper and raising thoughtful questions for discussion. In the following, we address the three major concerns. We will add them to the revised paper. Better score please :)\\n\\nOur AnoDM framework is unsupervised. This means that beta-VAE is trained to model the distribution of the \\u201cnormal\\u201d data points and is then used as a reference to identify out-of-distribution data points through reconstruction error and distance. The spirit is similar to traditional statistical methods, but we apply deep models and representation learning for complex data. It is not supervised, because normal and anomalous samples are not treated equally as in two-class classification. In the whole training process, we didn\\u2019t use any data labels, except when plotting the t-SNE graphs. \\n\\nYes, it is relatively simple to combine two existing complementary methods to obtain promising results for anomaly detection. Isn\\u2019t that exactly what is needed in real applications? Theoretically, we highlight that the two methods complement each other: the reconstruction error provides useful information in input space, while the distance in the t-SNE map contributes an anomaly score from the latent space. Furthermore, this paper contributes a comprehensive empirical study of the impact of the extent of disentanglement on anomaly detection. Although the AnoDM framework is technically straightforward, it is practically valuable. It paves the way for more sophisticated studies and improvements on AnoDM. The two components could be replaced with new disentanglement learning and manifold learning algorithms. As future work, the two components can be combined in a single objective to learn a single model, rather than a two-phase framework. \\n\\nThanks for opening this interesting discussion. In Figure 2, we can see the best results are reached before alpha=1. First, we clarify that this does not mean that the reconstruction of beta-VAE plays a dominant role and t-SNE is useless; the main reason is that the normalised reconstruction error in input space and the distance in the t-SNE map have very different magnitudes (as mentioned in Section 3.5). This can be evidenced from the scales of all t-SNE plots in the paper. Second, from Table 1, we can see that the performances of AnoDM without t-SNE (alpha=1) are worse than those with t-SNE (alpha<1). Third, it is very important to mention that, from Table 2 (appendix), 37 out of 40 optimal results were obtained when alpha<1. The three cases where alpha=1 only occurred on the Small-Norb data, where convolutional generative models could not learn good latent representations (CapsNet does). In summary, t-SNE does play an essential role in AnoDM. It is also important to clarify that the best results were achieved with 0<beta<1. We will improve the visualisation in Figure 2.\"}", "{\"title\": \"Thanks for your support\", \"comment\": \"Thanks for your support. We will do the following to improve this work. 
(1) We will clarify that existing work uses CNN/LSTM-VAEs (e.g. Park et al. 2017), which are special cases of beta-VAE; thus, as long as the best performance is achieved when beta is not 1 and alpha is not 1, the AnoDM framework outperforms SOT methods on the time-series data. (2) A pseudo-code algorithm is actually provided in the appendix. (3) We will carefully go through the paper to correct typos and grammatical errors. (4) We actually shortened the paper from 16 to 10 pages before submitting the first version. It is difficult to shrink it again because a lot of key information has to stay. We will try again though.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a novel deep anomaly detection model. It combines two existing models: B-VAE and t-SNE. The B-VAE is trained unsupervised and learns an encoder and decoder which provide both an embedding and a reconstruction. Using t-SNE to reduce its dimensionality, the embedding is projected into a 2-dimensional space. An anomaly score function is defined that combines the reconstruction error and the distance in t-SNE space to the K nearest neighbor(s). Experiments are conducted with several image datasets (MNIST, FMNIST, CIFAR10, SmallNORB) and one timeseries dataset (Arrhythmia). For the image sets, the B-VAE model is implemented with a CNN, while for timeseries, a TCN is used. Comparisons are conducted showing the approach to beat other SOT unsupervised methods, AnoGAN and ADGAN, by 63% and 22% respectively for MNIST and 8% and 2% for FMNIST (in terms of error reduction). For CIFAR-10 and FMNIST it is even demonstrated to beat a supervised SOT method, CapsNet. Another experiment shows that t-SNE dramatically improves the performance over B-VAE alone. For the timeseries, the approach is not compared to other SOT approaches, as the authors only provide an experiment showing that TCN beats CNN and LSTM for the implementation of the B-VAE. In addition, the authors study the effect of the various parameters of the system, in particular the effect of the B in B-VAE and of alpha, the mixing factor between reconstruction error and kNN distance in t-SNE. 3D plots give a good idea of how to select optimal values for the various datasets. The impact of B is also shown on the t-SNE map for MNIST. Finally, an ablation study compares on MNIST the performance of the approach with t-SNE alone, reconstruction alone, and latent distance. On average over 4 digits taken as the anomaly, the proposed approach dramatically outperforms the others.\", \"pros\": [\"The proposed approach improves over competitive recent SOT methods for anomaly detection on four image datasets.\", \"The authors make an effort to abstract the approach into a framework where other deep learning models and dimensionality reduction techniques can be used. 
They illustrate this by using a TCN instead of a CNN for the timeseries example.\", \"The parameter studies and ablation studies are informative and answer many of the questions I had as I read the paper.\", \"The paper is relatively clearly written (at least sufficiently to easily understand the technical details).\"], \"cons\": [\"The novelty of the paper is limited, as it is mostly a combination of 2 existing methods.\", \"The timeseries dataset is not compared to SOT methods (although the authors claim SOT in the conclusion).\", \"A pseudo-code algorithm is not provided, making it unlikely someone can reproduce the method.\", \"There are many typos and grammatical errors.\", \"The paper could have been shortened. 10 pages is too long.\", \"Overall, because of the good performance and thoughtful ablation studies, and despite the limited novelty, I think the paper makes a good contribution to anomaly detection.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to combine beta-VAE and t-SNE for anomaly detection.\\nAlthough the problem and the proposed approach are relevant, I have the following concerns.\\n\\n- The problem setting is not well explained.\\n In particular, it is not clear whether the setting is unsupervised or not.\\n It seems that the proposed method is for unsupervised anomaly detection.\\n However, the authors mention that beta-VAE is trained on normal data, which means that it is not unsupervised.\\n Please clarify this point.\\n- The originality and the technical quality of the proposed method are not high, as it is a straightforward combination of two existing methods.\\n If the proposed combination had some theoretical advantage for anomaly detection, the paper would become more interesting.\\n However, there is no theoretical analysis of the proposed method; hence the significance of the contribution is not high in its current state.\\n- Experimental results are not convincing.\\n * The authors argue that the t-SNE step is important, as the proposed method is better than the naive beta-VAE.\\n However, Figure 2 shows that in most cases the score becomes better as \\\\alpha gets larger, and the best score is achieved if alpha = 1.\\n From Equation (2), this means that the t-SNE step does not contribute to the anomaly detection performance.\\n This inconsistency should be carefully discussed.\\n * Also, Figure 2 shows that beta should be small, and the best score is achieved when beta = 0 in most cases.\\n This means that representation learning is not meaningful and the raw representation (feature vectors) may already be effective for anomaly detection.\\n Hence the significance of the proposed method is not convincing.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, a new way to compute an anomaly score (for a test point) is suggested. The paper is purely experimental, based on existing techniques for dimension reduction (beta-VAE and t-SNE). Given a trained beta-VAE, the latent vectors obtained for the training set are fed into the t-SNE algorithm. 
The overall anomaly score for a test point combines 1-NN distances on the t-SNE plot and the reconstruction error of the beta-VAE.\\n\\nA substantive question naturally arises from the application of t-SNE to the obtained latent vectors. By construction, beta-VAE tries to make the latent vectors distributed according to N(0,I). By definition, it is very hard to project such a distribution onto a plane, even by non-linear methods such as t-SNE. Yet at the second step of the paper's approach, these vectors are fed into 2-dimensional t-SNE. \\n\\nThis aspect makes me think that the optimal alpha in the paper's anomaly score should be close to 1. This would imply that the t-SNE step is not needed at all. So, I am curious what the actual optimal alpha was in the experiments. \\n\\nOr, how would the results change if the t-SNE mapping were set to the identity transformation (into a space whose dimension is the same as the latent space), but the formula for the anomaly score remained the same?\"}" ] }
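A minimal sketch of the kind of combined score the AnoDM discussion above revolves around: an alpha-weighted mix of a per-sample reconstruction error and a k-NN distance in a t-SNE map. The joint train/test embedding (t-SNE has no out-of-sample transform), the max-normalization, and the default k are illustrative assumptions; the paper's exact score function may differ in detail.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def anomaly_scores(train_latent, test_latent, recon_err_test, alpha=0.5, k=1):
    """Combine reconstruction error with local distance in a t-SNE map.

    recon_err_test: per-sample reconstruction error from a (beta-)VAE,
    assumed already computed in the raw feature space (numpy array).
    """
    # Embed train and test latents jointly, since t-SNE cannot map new points.
    joint = np.vstack([train_latent, test_latent])
    emb = TSNE(n_components=2, random_state=0).fit_transform(joint)
    emb_train = emb[: len(train_latent)]
    emb_test = emb[len(train_latent):]

    # k-NN distance of each test point to the training points in the 2D map.
    nn = NearestNeighbors(n_neighbors=k).fit(emb_train)
    dist, _ = nn.kneighbors(emb_test)
    dist = dist.mean(axis=1)

    # Normalize both terms so the alpha mix is not dominated by scale alone,
    # the scale mismatch the authors discuss in their responses above.
    r = recon_err_test / (recon_err_test.max() + 1e-12)
    t = dist / (dist.max() + 1e-12)
    return alpha * r + (1.0 - alpha) * t
```

Setting alpha=1 recovers the pure reconstruction score the reviewers probe with their alpha-sweep questions, while the normalization targets the magnitude mismatch between the two terms that the authors cite.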
HyerxgHYvH
Neural Arithmetic Unit by reusing many small pre-trained networks
[ "Ammar Ahmad", "Oneeb Babar", "Murtaza Taj" ]
We propose a solution for the evaluation of mathematical expressions. However, instead of designing a single end-to-end model, we propose a Lego-brick-style architecture. In this architecture, instead of training a complex end-to-end neural network, many small networks can be trained independently, each accomplishing one specific operation and acting as a single Lego brick. More difficult or complex tasks can then be solved using a combination of these smaller networks. In this work we first identify 8 fundamental operations that are commonly used in arithmetic (such as 1-digit multiplication, addition, subtraction, a sign calculator, etc.). These fundamental operations are then learned using simple feed-forward neural networks. We then show that different operations can be designed simply by reusing these smaller networks. As an example, we reuse these smaller networks to develop a larger and more complex network to solve n-digit multiplication, n-digit division, and the cross product. This bottom-up strategy not only introduces reusability; we also show that it allows generalization to computations involving n digits, and we show results for up to 7-digit numbers. Unlike existing methods, our solution also generalizes to both positive and negative numbers.
[ "NALU", "feed forward NN" ]
Reject
https://openreview.net/pdf?id=HyerxgHYvH
https://openreview.net/forum?id=HyerxgHYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "jB_WnSCRH", "H1gznbUMqr", "H1luNTN3tS", "BygZ_Hp9FS" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740443, 1572131242390, 1571732784428, 1571636585336 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2097/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2097/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2097/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes to train and compose neural networks for the purposes of arithmetic operations. All reviewers agree that the motivation for such a work is unclear, and the general presentation in the paper can be significantly improved. As such, I cannot recommend this paper in its current state for publication.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Writing\\nOverall, the readability of this paper is far from the acceptance criteria of ICLR. there are just way too many grammatical errors or typos throughout the entire paper that prevent me from understanding this paper, such as\\nSec.1\\nAlthough\\u2026, however\\u2026\\nIn neural network -> neural networks\\nThey lack understanding -> it lacks understanding\\nNumerical and quantitative reasoning is their r fundamental capability -> are .. capabilities\\u2026\\n...\\nJust too many to print all of them here. Please proofread your paper before submission.\\n\\nThe introduction is poorly written that I cannot get a full picture of what goals this paper tries/has achieved after reading it.\\n\\nAlgorithm 1 and 2 seem to be very poorly formatted but only illustrate minimal useful information.\\n\\n\\nMotivation\\nI don\\u2019t see clear usage nor convincing results based on the current shape of this paper -- what application could this work enable or what theoretical insights it reveals?\\n\\nMethod\\nThis paper proposes to use neural nets to do arithmetic operations (though I don\\u2019t see convincing motivations to do so). A new idea the paper proposes is to train a few networks to first learn/fit basic operations, and then use these trained NNs to assemble large NNs which are supposed to form more complex arithmetic operations. Unfortunately, the writing of this paper prevents me from fully understanding the technical details of this paper.\\n\\nResults\\nThe results in this paper are currently minimal. Many details about the experiment setup or how different methods are compared are not clear.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a method to design a NN based mathematical expression evaluation engine. I find that the paper could benefit a lot from some rewriting as it is not very clear and over claiming at points.\\nThe introduction states that almost all ANNs lack generalization, this is in my opinion an overstatement. 
Domain shift and adaptation are techniques to cope with situations where the test data distribution is not coherent with the training data distribution. If this were true in general, we would not have seen such a resurgence and widespread use of ANNs in the past years.\\n\\nThe paper also lacks proper citations to previous work, and I find the background section and motivation rather weak. \\n\\nThe fundamental operations presented in section 3 do not involve any learning at all, contrary to the referenced work of Trask et al., where parameters are actually learned (see the relaxation of the sign function with tanh, etc.). I therefore find the use of an ANN as the basic constituent of each block to be wrong, as each network has fixed hand-crafted weights. If I were to replace each ANN with the ordinary corresponding function, nothing would change in the presented framework.\\n\\nMultiplication and division as explained in the algorithms do not require learning at all. I am afraid the ML contribution of this work is, in my opinion, almost non-existent. I see every component as being scripted rather than learned from the data, which would of course be much more interesting.\\n\\nThe experiments are not clear at all: the setup and the explanation of the results are insufficient, and I find them not thoroughly executed. Table 1 mentions classification, but the task is never clearly explained. Experiment 2 compares to NALU, but in the proposed work nothing is learned, unless I misunderstood the work entirely.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to use neural networks to evaluate mathematical expressions by designing 8 small building blocks for 8 fundamental operations, e.g., addition, subtraction, etc. They then design multi-digit multiplication and division using these small blocks.\\n\\nThe motivation of this paper is not very clear to me, i.e., why one would want to mimic arithmetic operations using such networks, and what the real use case is here. In the introduction, the paper motivates this by pointing out a limitation of neural networks, namely that they are memorization-based, and the authors want to generalize by understanding the inherent rules. However, if you look at the way the fundamental building blocks are designed, and how the multiplication model works, the rules are injected based on human knowledge (e.g., the way single-digit multiplication extends to multi-digit multiplication); there is simply no understanding by the model itself. Besides, the whole process has no training, i.e., the weights of the small networks are fixed, so what are the trainable parts? \\n\\nThe whole paper has many spelling and grammar errors, which hinder reading, and the writing needs to be significantly improved.\"}" ] }
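A minimal sketch making the reviewers' critique above concrete. The single-digit "brick" is stubbed as a plain function; in the paper it would be a small feed-forward network with fixed weights. The surrounding n-digit logic is the ordinary schoolbook algorithm; addition is used here for brevity (the paper composes multiplication and division), and all names are illustrative assumptions.

```python
def one_digit_add(a, b, carry):
    """Stub for the 1-digit 'brick'. In the paper this would be a small
    network mapping (a, b, carry) -> (digit, carry_out)."""
    s = a + b + carry
    return s % 10, s // 10

def n_digit_add(x, y):
    """Compose the 1-digit brick into n-digit addition, least significant digit first."""
    xs = [int(d) for d in str(x)][::-1]
    ys = [int(d) for d in str(y)][::-1]
    n = max(len(xs), len(ys))
    xs += [0] * (n - len(xs))
    ys += [0] * (n - len(ys))
    digits, carry = [], 0
    for a, b in zip(xs, ys):
        d, carry = one_digit_add(a, b, carry)
        digits.append(d)
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in digits[::-1]))

assert n_digit_add(9876543, 1234567) == 11111110  # 7-digit composition works
```

Replacing one_digit_add with exact arithmetic leaves n_digit_add unchanged, which is precisely Reviewer #1's point that nothing in the surrounding framework is learned.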
rkxNelrKPB
On Stochastic Sign Descent Methods
[ "Mher Safaryan", "Peter Richtárik" ]
Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large-scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to the distributed setting within a parameter-server framework, we ensure exponentially fast variance reduction with respect to the number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally.
[ "non-convex optimization", "stochastic optimization", "gradient compression" ]
Reject
https://openreview.net/pdf?id=rkxNelrKPB
https://openreview.net/forum?id=rkxNelrKPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "_6gQVmLRxq", "HkemzUr2iS", "rJgpnv5SjS", "SyxoFPcHor", "B1gVTLqrjS", "BJl9kDNE5H", "HJeobaJAFS", "rJxqaMwTtS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740411, 1573832202590, 1573394357161, 1573394307515, 1573394107729, 1572255458071, 1571843331148, 1571807937850 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2096/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2096/Authors" ], [ "ICLR.cc/2020/Conference/Paper2096/Authors" ], [ "ICLR.cc/2020/Conference/Paper2096/Authors" ], [ "ICLR.cc/2020/Conference/Paper2096/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2096/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2096/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes an analysis of signSGD in some special cases. SignGD has been shown to be of interest, whether because of its similarity to Adam or in quasi-convex settings.\\n\\nThe complaint shared by reviewers was the strength of the conditions. SGC is really strong, I have yet to see increasing mini-batch sizes to be used in practice (although there are quite a few papers mentioning this technique to get a convergence rate) and the strength of the other two are harder to assess. With that said, the improvement compared to existing work such as Karimireddy et. al. 2019 is unclear.\\n\\nI encourage the authors to address the comment of the reviewers and to submit an improved version to a later, or perhaps to a more theoretical, convergence.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to authors\", \"comment\": \"I thank the authors for their response and I provide further remarks below:\\n\\n1. Lemma 2 is more clear for me now, when the authors make the connection with SGC. I also agree that it is a very strong assumption under which people can actually prove very strong results for SGD. However, what is not clear to me is that under this assumption, $\\\\tau$ depends on the constant $C$ of SGC. I think it would be clearer if the authors discuss how to estimate this constant.\\n\\n2. Thank you for the clarification. However, the implication of this discussion in practice is still not very clear.\\n\\n3. I think this part needs to be stated very clearly in the paper. To me, having a mini batch size depending on $K$ and step size depending on $K$ are very different cases which have to be treated separately.\\n\\n4. I understand the difference but the scaling is done locally, during compression. Is there a drawback of scaling that I am missing? Second, I think storing one more vector is not a very big problem in practice. I think it would help to compare with this alternative approach in practice as well.\\n\\n10. I accept this explanation as an intuition of course, but what I wanted to ask is that is this something the authors can show rigorously?\\n\\nThe significance of the characterization of SPB, given that it is also mentioned in the paragraph under Lemma 1 in Bernstein et. al. 2019 (that I also pointed out in my original review), is still not enough for me. 
I think that the authors need more motivating cases in addition to Lemmas 1, 2 and 3, which are either known (using big mini-batch sizes or unimodal symmetric noise) or rely on very strong assumptions (SGC).\"}", "{\"title\": \"Thank you for the review!\", \"comment\": \"We provided 3 different setups where Assumption 1 is satisfied:\\n1) The unimodal and symmetric noise setup (Lemma 1). As noted in (Bernstein et al., 2018), it is backed up by the central limit theorem when training neural networks.\\n2) The strong growth condition with fixed mini-batch size (updated Lemma 2). This setup corresponds to over-parameterized deep network learning, where the model can fit the training data completely.\\n3) The adaptive mini-batch size setup (Lemma 3), which guarantees convergence merely by choosing an appropriate mini-batch size.\\n\\nNote that, while the sign-matching SPB assumption is quite intuitive for sign-based methods, it is not assumed or somehow claimed that it holds automatically for simple problems. Even in a one-dimensional regression problem the SPB assumption might fail, just as signSGD might fail to converge. Furthermore, the SPB assumption describes the convergence of sign descent methods, which is known to be problematic (see e.g. (Balles & Hennig, 2018), section 6.2 Results).\"}", "{\"title\": \"Thank you for the detailed and constructive review! We have updated the paper and addressed the concerns you raised.\", \"comment\": \"1. The assumption of Lemma 2 (with an adjustment for fixed mini-batch size) can be seen as the Strong Growth Condition (SGC), sigma_i^2 \\\\le C g_i^2, considered in the literature, e.g. (Vaswani et al., 2019; Schmidt and Le Roux, 2013). Under SGC, optimal convergence rates for constant step-size SGD were obtained for convex, strongly convex and non-convex settings. Clearly, SGC is a strong assumption; in particular, it implies that all stochastic gradients are 0 at the stationary points.\\nTo be more precise, the assumption of Lemma 2 can be replaced by SGC with some constant C while requiring mini-batch size tau>2C. Under these assumptions, Lemma 2 yields lower bounds for the success probabilities rho_i \\\\ge (1 - C/tau) > 1/2, and Theorem 1 gives the rates (6),(7) with respect to the l_1 norm instead of the rho-norm.\\n*We updated Lemma 2*\\n\\n2. With the footnote on page 2 we argue a slightly different claim. We do not argue that SPB is weaker than the bounded variance assumption in the usual sense, but rather in the sense of differential entropy. In fact, these two assumptions are incomparable in the direct sense: neither does SPB imply the bounded variance assumption, nor the other way around. We claim that under the bounded variance assumption, the level of uncertainty of the stochastic gradient is limited, while under the SPB assumption the information entropy of the stochasticity could be infinite.\\n\\n3. Again, we make a slightly different claim if you want to be more exact. We mention that the convergence results in (Bernstein et al., 2018; 2019) use mini-batches *and/or* step-sizes dependent on K. Theorem 1 in (Bernstein et al., 2019) uses batch size 1, *but* a step-size dependent on K. Other results, including the results for the distributed setting, use a mini-batch size *and* step size dependent on K. None of the results is free from K.\\nBy all means, we do not undermine the results in (Bernstein et al., 2018; 2019). In fact, these two papers were the starting point for us and the motivation to improve them.\\n\\n4. To alleviate your concern about the comparison with (Karimireddy et al. 
2019), we point out a couple of reasons why we consider these results distinct: 1) First of all, the stochastic estimator in (Karimireddy et al. 2019) is not just the signed vector, but it is scaled by the l_1 norm of the gradient. Without that scaling factor, the results do not hold and cannot be applied to the unscaled signSGD, which we consider. 2) Another difference is the incorporation of an error-feedback mechanism, which uses the unbiasedness of the stochastic gradient and needs to store one more vector locally.\\nFurthermore, we did compare the ER-signSGD method of (Karimireddy et al. 2019) with another scaled version of signSGD introduced in Appendix E, where we added a scaling factor as ER-signSGD does and introduced extra stochasticity into the sign vector instead of error-feedback.\\n\\n5. It seems a fair trade-off between simplicity and generality. The SPB assumption as well as the notion of the rho-norm are very general, and the convergence rates are hard to interpret. However, in special cases the rho-norm can be the l_1 norm (with the strong growth condition) or a mixed l_1-l_2 norm (with the unimodal and symmetric noise assumption), which are easier to interpret.\\n\\n6. To illustrate the weakness of SPB, for instance with respect to the unimodal symmetric noise assumption, consider the PDF (probability density function) of the stochastic gradient. SPB requires the noise on the side of the axis where the true gradient is to concentrate more than on the other side. In other words, the median and the true gradient must have the same sign, but the difference could be anything. Clearly, SPB allows multimodality and asymmetry of the noise.\\nLastly, while the case of Lemma 3 is known, the lower bound for the mini-batch size is adaptive, which is a novelty.\\n\\n\\nResponses to minor comments\\n7. See response 4.\\n8. We clarified footnote 1 and added some references.\\n9. Mentioning that signSGD is similar to Adam, we also added a reference to (Balles & Hennig, 2018) where the actual connection was made.\\n10. For each component, the sign of the stochastic gradient has two possible values: +/- the sign of the true gradient. It is easy to see that the sign of the true gradient is a descent direction, while the other one is an ascent direction. As we are aiming to solve a minimization problem, it is reasonable to follow the descent direction more often than the ascent one.\\n11. We updated Lemma 2, replacing the previous variance bound by the strong growth condition, where the classical definition of variance is used.\\n12. SPB is not a necessary and sufficient condition in the absolute sense, since one can (keeping unbiasedness) modify the stochastic gradient in the counterexample so that SPB does not hold but signSGD converges.\\n\\nIt was well noted in (Balles & Hennig, 2018) that the usefulness of sign-based methods is problem-dependent. One should view the SPB condition as a criterion for problems where sign-based methods are useful. Moreover, taking into account the counterexample, this criterion is not easily relaxable.\"}", "{\"title\": \"Thank you for the review and suggestion for the experiments!\", \"comment\": \"We motivated the relaxation by comparing it with 4 different conditions used in the literature: 1) the unimodal symmetric noise assumption. 2) The strong growth condition with fixed mini-batch size. 3) The adaptive mini-batch size. 
4) The bounded variance assumption in the sense of uncertainty.\\n\\nIn fact, the stochastic gradient of the Rosenbrock function considered in the paper is biased, as the expectation gives 1/(d-1) times the function itself.\\n*We will add better targeted experiments.*\\n\\nThe observations on low/high SNR components in (Bernstein et al., 2019) were made under the unimodal symmetric noise assumption. A similar observation is made in our paper for the rho-norm, and the second half of Section 3 (including Figure 1) is devoted exactly to that.\\n*We added an explicit reference to (Bernstein et al., 2019) at the end of Section 3.*\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an improved analysis of the signSGD gradient estimator. The authors propose to relax the requirements on the gradient estimator in Bernstein (2019). The only requirement imposed on the gradient is that it should have the correct sign with probability greater than 1/2. In particular, this approach allows the gradient estimate to be biased, as opposed to Bernstein (2019), which requires unbiased gradients. The authors also show this condition to be necessary via a small counterexample.\\n\\nIn my view the paper presents a relatively minor but still interesting extension of the work in Bernstein (2019). The main problem is that the relaxation is not well motivated in terms of scenarios where this might be applicable. Experimental validation is also very weak.\\n\\nIt is claimed in the experiment section that the stochastic gradient of the Rosenbrock function, g(x) = \\\\nabla f_i(x) + eps, where eps is a 0-mean Gaussian and i is uniformly random, is biased. This seems incorrect to me, and the gradient estimate should be unbiased when the expectation is taken over the randomness in i and eps.\\n\\nA key claim of the paper is the ability to use biased gradient estimates. Experimental validation of this (in light of the above) is completely missing.\\n\\nThe experiments that are presented on MNIST are very general and not very closely connected to the specific claims of the paper. The only real conclusion drawn is that larger batch sizes improve convergence.\\n\\nI think the paper needs better targeted experiments. They need to show convergence in a case where the conditions in Bernstein (2019) do not hold.\\n\\nHow are the properties of the \\\\rho norm related to the observations on the l_1 norm for high and the l_2 norm for low SNR components in Bernstein (2019)? If they are related, this should be referenced.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper focuses on signSGD with the aim of improving theoretical understanding of the method. The main contribution of the paper is to identify a condition, SPB (success probability bounds), which is necessary for the convergence of signSGD, and to study its connections with the other conditions known in the literature for signSGD analysis. 
One important point here is that the norm in which the authors show convergence now depends on SPB, meaning that the probabilities in SPB are used to define the norm-like function they use in the theorems.\", \"This paper is well-written and nicely structured, and I like the relationships of SPB with other conditions. However, I have some concerns about the generality of SPB that I will detail below.\", \"First of all, Lemma 2 is not clear to me at all. The authors say that the variance is bounded by a constant (0 \\\\leq c_i < 1/sqrt{2}) multiplied by the true gradient norm, and then they show that this assumption implies SPB. I do not know how restrictive this condition is. For example, what happens when all elements of the true gradient are close to zero? I don\\u2019t know if it is reasonable to assume the noise to be small in this case. I cannot make the connection between this assumption and the classical bounded variance assumption (E((\\\\hat{g_i}-g_i)^2)\\\\leq\\\\sigma_i). I can believe the result of Lemma 3 with specific constants $c_i$ as given, but I feel that it is then much stronger than the standard bounded variance assumption, because it would be asking the variance to be smaller than some specific constant.\", \"Related to the first point, I did not understand the remark in the first footnote of page 2. The authors argue that SPB is weaker than the bounded variance assumption. But at the same time, it is known that the bounded variance assumption is not enough to make signSGD work, with counterexamples given in Karimireddy et al. 2019. So, it is quite weird that an assumption weaker than bounded variance (for which signSGD provably does not converge) makes signSGD converge. So I think it is more natural for SPB to be stronger than bounded variance, because it is enough to make signSGD work. The only proof in the paper that would support this claim is Lemma 2, which, as I discussed above, is stronger than the standard variance bound. I hope that the authors can clarify this point.\", \"After Theorem 1, the authors compare their result with Bernstein et al. 2019 and mention that Bernstein et al. need to use mini-batches depending on $K$, where $K$ is the iteration count, and the unimodal symmetric noise assumption. But when I check Bernstein et al. 2019, I see that these are different cases. Specifically, Theorem 1 in Bernstein et al. 2019 uses mini-batch size 1 under the unimodal symmetric noise assumption. The case where they would use mini-batches of size $K$ is in Section 3.3 of Bernstein et al. 2019, where they *drop* the unimodal symmetric noise assumption. So, I would suggest the authors be more exact in this comparison because it is confusing. In fact, in Section 3.3 of Bernstein et al. 2019, the authors also identify SPB as it is implied by the unimodal symmetric noise assumption. It is the paragraph under Lemma 1 in Bernstein et al. 2019.\", \"My other concern is the comparison with Karimireddy et al. 2019, both in theory and practice. Karimireddy et al. 2019 modify signSGD and, under unbiased stochastic gradients and the bounded variance assumption, obtain guarantees similar to this paper's. I am aware that this paper does not assume unbiasedness, but as I said before, I do not know how SPB compares to the variance bound. So, I see Karimireddy et al. 2019 and this paper as similar results, so I would want to see some practical comparison as well. In Appendix E, the authors mention that Karimireddy et al. 
2019 has a storage need, but I think that need is negligible since they only need to store one more vector.\", \"A side-effect of SPB is that now the convergence results are given in the $\\\\rho$-norm, where $\\\\rho$ is determined by SPB. I understand why this is needed from the proof of Theorem 1, and its implications in the theorem, but given that Karimireddy et al. 2019\\u2019s result is given in the l_2 norm, which is easier to interpret, I think more comparison is needed.\", \"Lastly, I like the fact that SPB is implied by the previous assumption in Bernstein et al. 2019, namely unimodal symmetric noise, but I am not convinced that SPB is much weaker than this assumption. The authors mention in several places in the paper that their assumption is very weak, but looking at Lemma 1, Lemma 2 and Lemma 3: Lemma 1 and Lemma 3 are the already known cases where signSGD works, and Lemma 2 is a new case where signSGD works, but as I explained before, it is not clear to me how restrictive this assumption is. Therefore, I am rather unsure if this generalization of SPB is practically significant.\"], \"minor_comments\": [\"page 2, Table 1: I think it would be useful to add the results of Karimireddy et al. 2019.\", \"page 2, Table 1 and footnote 1: The footnote sign is given for the bounded stochastic gradient assumption, but the explanation in the footnote text talks about the bounded variance assumption. Of course bounded stochastic gradient implies bounded variance, but this should be clarified. In addition, the footnote text is not clear to me; could the authors either point to some references or give a proof?\", \"page 2, Adaptive methods paragraph: The end of the paragraph says that signSGD is similar to Adam, so studies on signSGD *can* improve the understanding of Adam. I would be happy if the authors were more exact about this, such as when signSGD is equivalent to Adam, etc.\", \"page 3, discussion after Assumption 1: I do not understand the sentence starting with \\u2018Moreover, we argue that\\u2019. Can the authors give more details on why it is reasonable?\", \"page 4, Lemma 2: I think the authors should include the definition of variance in the paper. Since the assumption in this Lemma is rather non-standard, I think it makes sense to be as exact as possible.\", \"page 21, Appendix E: It is written that \\u2018SPB is roughly a necessary and sufficient condition\\u2019. I could not understand what *roughly* means in this sentence. From what I have read, the authors have a counterexample showing that without SPB signSGD does not work and that with SPB it works, so I could not understand why it is written *roughly* here.\", \"Overall, I like the generalization of SPB, but as I detail above, I am not sure how significant the generalization is compared to other results, and more specifically how it compares to standard bounded variance (which I believe is weaker than Lemma 2). Therefore, I remain unconvinced about the impact of this generalization and hence the results. In addition, I would have liked to see more comparisons (both theoretical and practical) with Karimireddy et al. 2019.\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper performs a general analysis of sign-based methods for non-convex optimization. 
They define a new norm-like function depending on the success probabilities. Using this new norm-like function and under an assumption, they prove exponential variance reduction properties in both directions and with small mini-batch sizes.\\n\\nI am not convinced about assumption 1, which plays the key role in the proof. It assumes that success probabilities are always larger than or equal to 1/2. \\n\\nHow can we guarantee this property holds for an algorithm? I suggest the authors provide some real learning examples under which the condition is satisfied. I may revise my rating according to this.\"}" ] }
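The Lemma 2 discussion in the record above (variance bounded by $c_i$ times the true gradient, with $0 \leq c_i < 1/\sqrt{2}$, implying SPB) leaves the connecting step implicit. A plausible reconstruction of that step — assuming unbiased stochastic gradients, i.e. $E[\hat{g}_i] = g_i$, and $\mathrm{Var}(\hat{g}_i) \leq c_i^2 g_i^2$, which is one reading of the reviewer's summary rather than the paper's exact statement — is a one-line Chebyshev argument:

```latex
% A sign error requires the estimate to deviate from g_i by at least |g_i|,
% so Chebyshev's inequality bounds the failure probability:
\Pr\big[\operatorname{sign}(\hat{g}_i) \neq \operatorname{sign}(g_i)\big]
  \le \Pr\big[\,|\hat{g}_i - g_i| \ge |g_i|\,\big]
  \le \frac{\operatorname{Var}(\hat{g}_i)}{g_i^2}
  \le c_i^2,
\qquad\text{hence}\qquad
\rho_i \ge 1 - c_i^2 > \tfrac{1}{2}
\quad\text{whenever } c_i < \tfrac{1}{\sqrt{2}}.
```

Under this reading the success probability bound $\rho_i > 1/2$ follows from, and is weaker than, the Lemma 2 premise — consistent with the reviewer's point that Lemma 2 asks for more than the standard bounded variance assumption $E[(\hat{g}_i - g_i)^2] \leq \sigma_i^2$, which imposes no relation between the noise and the gradient magnitude.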
H1eNleBYwr
GENN: Predicting Correlated Drug-drug Interactions with Graph Energy Neural Networks
[ "Tengfei Ma", "Junyuan Shang", "Cao Xiao", "Jimeng Sun" ]
Gaining more comprehensive knowledge about drug-drug interactions (DDIs) is one of the most important tasks in drug development and medical practice. Recently graph neural networks have achieved great success in this task by modeling drugs as nodes and drug-drug interactions as links and casting DDI predictions as link prediction problems. However, correlations between link labels (e.g., DDI types) were rarely considered in existing works. We propose the graph energy neural network (\mname) to explicitly model link type correlations. We formulate the DDI prediction task as a structure prediction problem and introduce a new energy-based model where the energy function is defined by graph neural networks. Experiments on two real-world DDI datasets demonstrated that \mname is superior to many baselines without consideration of link type correlations and achieved $13.77\%$ and $5.01\%$ PR-AUC improvement on the two datasets, respectively. We also present a case study in which \mname can better capture meaningful DDI correlations compared with baseline models.
[ "graph neural networks", "energy model", "structure prediction", "drug-drug-interaction" ]
Reject
https://openreview.net/pdf?id=H1eNleBYwr
https://openreview.net/forum?id=H1eNleBYwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "D3pfQB_9Z4", "BJlN-zN3jS", "rkgWTlV2iH", "S1efEg4hir", "SJgPbxV3jS", "r1gnZ-O-qr", "ryx_dOHk9r", "S1lFnblnKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740380, 1573827068004, 1573826745092, 1573826602309, 1573826558716, 1572073732160, 1571932271836, 1571713457144 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2095/Authors" ], [ "ICLR.cc/2020/Conference/Paper2095/Authors" ], [ "ICLR.cc/2020/Conference/Paper2095/Authors" ], [ "ICLR.cc/2020/Conference/Paper2095/Authors" ], [ "ICLR.cc/2020/Conference/Paper2095/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2095/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2095/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper studies the use of a graph neural network for drug-to-drug interaction (DDI) prediction task (an instance of a link prediction task with drugs as vertices and interaction as edges). In particular, the authors apply structured prediction energy networks (SPEN) and model the dependency structure of the labels by minimising an energy function. The authors empirically validate the proposed approach against feedforward GNNs on two DDI prediction tasks. The reviewers feel that understanding drug-drug interactions is an important task and that the work is well motivated. However, the reviewers argued that the proposed methodology is not novel enough to merit publication at ICLR and that some conclusions are not supported by the empirical analysis. For the former, the benefits of the semi-supervised design need to be clearly and concisely presented. For the latter, providing a more convincing practical benefit would greatly improve the manuscript. As such, I will recommend the rejection of this paper at the current state.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary of updates in the revised paper\", \"comment\": \"1. We added more elaboration about the difference from previous works in the contributions of Introduction. We also add one sentence in the second paragraph of Section 2.2 to explain our difference from the previous method.\\n2. We re-organized the paragraphs in Section 4.3 and also added more detailed formulations and explanations for semi-supervised joint training.\\n3. We corrected the typo $Y_L$ and $\\\\hat{Y}_U$ in the second paragraph of Section 4.3.\\n4. We added an explanation in the first paragraph of Section 4.3 to explain why cost-augmented inference network cannot be used in the testing phase.\\n5. We added two sentences after the first two questions of the experimental section to further explain the meaning of the comparison.\"}", "{\"title\": \"Reply to Reviewer#2\", \"comment\": \"Thank you for your detailed comments. We corrected the typo in the second paragraph of section 4.3; and changed the expression \\u201clargely outperforms\\u201d by removing the word \\u201clargely\\u201d. Despite some confusion w.r.t. P@1 and P@5, for the other metrics (PR-AUC and ROC-AUC), actually our final model GENN did outperform others a lot. The other questions are answered as follows. 
And please also refer to our revised paper (we listed the updates in the above common comment).\", \"q1\": \"The authors didn't elaborate on why the trained cost-augmented inference network cannot be directly used for test inference, and why the prediction for unknown DDI type is different in test inference.\", \"a1\": \"Thank you for proposing this question. We updated the paper and added more explanations in Section 4.2 to make this point clearer. It is actually a known conclusion from previous works on structure prediction. The cost-augmented inference network in the training phase has a different optimization goal from the test phase: the objective for training is Eq. (5) while the test objective is Eq. (4). If we use the training inference network to get the optimized $Y$, it will not get the minimized energy, due to an additional term in the objective function, $\\delta(Y, \\hat{Y})$. That is why it is called cost-augmented inference during training.\", \"q2\": \"The authors did not provide any limitations of their work\", \"a4\": \"We have mentioned the limitations of our method in the conclusion section: the current version of GENN only works on graphs with a single type of nodes. Extending it to heterogeneous networks, where nodes have different types such as protein and drug, is left as future work.\"}", "{\"title\": \"Reply to Reviewer#3\", \"comment\": \"Thank you for your comments.\", \"q1\": \"The core part of the proposed method which differs from previous works needs to be elaborated more.\", \"a1\": \"As you suggested, we modified our paper and explained the difference from previous methods more clearly in the Introduction. Please also refer to the related work, where we explained the difference between the most related works. To summarize, first we need to clarify that the paper is not focusing on developing a new inference method to improve general SPENs. Instead, we target combining the ideas of energy models with graph neural networks (GNNs) to improve the GNNs. Compared to previous models using energies in graph neural networks, we have a new energy definition from SPENs, which can deal with more flexible, more global correlations; compared with previous models which focus on the inference of general SPENs, we improve the inference algorithm by jointly training the cost-augmented inference network and the test inference network to adapt to our specific task in the context of graph learning.\", \"q2\": \"Why does \u201cadapting supervised SPEN into a semi-supervised setting\u201d make sense and what benefit can be brought through such design?\", \"a2\": \"Thank you for pointing this out. We updated Section 4.3 and elaborated more on this point. In addition to the revised paper, let us briefly explain it here. Graph learning is generally a semi-supervised setting, as shown in GCN [1]. The node embeddings are usually obtained from both training and test nodes (or edges). The defined graph energy is derived from node embeddings, so using the whole graph to get the energy is one reason for semi-supervised learning. In addition, we shared some parameters between the training and testing inference networks, so jointly optimizing the training and testing objectives can lead to better results, as we explained in the revised paper. That is another reason. The benefit is also demonstrated in the experiments by comparing with GENN-.\\n\\n[1] Kipf T N, Welling M. Semi-Supervised Classification with Graph Convolutional Networks. 
ICLR 2017.\", \"q3\": \"experimental performance\", \"a3\": \"GLENN is a variant of GNN using local energy and the same semi-supervised joint optimization algorithm, so \u201cGLENN<GNN\u201d indicates that the global energy (our GNN-based energy function) is better than local energies. Since all baselines are tested in the semi-supervised setting and GENN performs better on both datasets, we can safely attribute the performance gain to the effectiveness of the energy design rather than to the usefulness of semi-supervised learning. The usefulness of semi-supervised learning itself is shown by comparing GENN and GENN-.\\n\\nAs to the performance gain, we believe for most models the performance is data dependent. A 1% increase in PR-AUC is not marginal for a difficult task. As this work is highly related to graph neural networks, let us point to some recent research in this community. From GCN to every newly developed GNN model (e.g. GraphSAGE, Graph Infomax, GAT, GMNN), the increased accuracy is usually just around 1% on general benchmark datasets (Cora, Pubmed, Citeseer).\"}", "{\"title\": \"Reply to reviewer#1\", \"comment\": \"Thank you for the comments. Although wrapped in an application to drug-drug-interaction prediction tasks, this paper is essentially not a domain-specific empirical study paper but rather presents a new graph learning method for link prediction. Its model is new compared to all previous methods, and it does not require much knowledge about the medical domain.\", \"q1\": \"Using an energy-based formulation for graph neural networks is not novel, and thus the paper lacks novelty from a methodology point of view.\", \"a1\": \"We admit this is not the first work to combine GNNs and energy models, but how to formulate the energy, how to do inference, and how to apply it to a real-world task are not trivial.\\nWe have clearly described the difference between our paper and the previous energy-based GNN models in Section 2.2. We also added more explanation in the Introduction section of the revised paper. To summarize, the differences include: 1. There is no existing work that defines the energy function as a graph neural network. Instead, most of them are based on CRF-type energies, which capture only local correlations between neighbor nodes (whose disadvantage was demonstrated in our experiments with GLENN). Moreover, no previous method can deal with the problem of multi-typed link (e.g. DDI) prediction. 2. We also introduce a new inference method that is customized in the context of graph learning with our new energy function. 3. We believe that, given the new energy formulation and the new inference method, our model is novel. We hope the reviewer can take a second look at it and reevaluate the novelty.\", \"q2\": \"This paper seems to fit better in a pharmaceutical journal than in a top-tier deep learning conference such as ICLR. I do not think the paper will be of interest to a large audience.\", \"a2\": \"As we formulate in the paper, DDI prediction is a typical multi-type link prediction problem, which is one of the classical tasks for graph learning. In our paper we in fact provided a new general model for multi-type link prediction tasks instead of a DDI-specific solution. We chose DDI because its \u201clabel correlation\u201d is easily understood and because the task can be evaluated on large real-world data. It is not fair to say it does not attract a larger audience. 
Here we can point to two classical GNN models as examples: \u201cConvolutional Networks on Graphs for Learning Molecular Fingerprints (Duvenaud et al. 2015)\u201d, \u201cNeural message passing for quantum chemistry (Gilmer et al. 2017)\u201d. We believe no one will regard them as chemical/biological papers even though their titles are about molecular fingerprints and quantum chemistry.\\nSimilarly, the authors of this paper are actually not from the pharmaceutical domain, and the main contribution of the paper is to provide a new method to the graph learning community.\", \"q3\": \"The experimental validation is limited. The authors only compare against GNN and GLENN, without even mentioning other relevant baselines such as GCN and GAT.\", \"a3\": \"GCN and GAT are obviously not suitable for multi-type link prediction tasks. The original GCN and GAT models do not take edge type information into consideration. We do not see any reason to mention them. In contrast, we have used one of the state-of-the-art message passing neural networks (Gilmer et al. 2017) (which integrates edge type information in its message passing schema) as our base model, which we believe is representative enough to demonstrate the usefulness of the energy modules. In addition, we also developed GLENN and GENN- for an ablation study, which helped demonstrate the effectiveness of our new modules.\\n\\n[Duvenaud et al. 2015] Convolutional Networks on Graphs for Learning Molecular Fingerprints. In NIPS 2015.\\n[Gilmer et al. 2017] Neural Message Passing for Quantum Chemistry. In ICML 2017.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper presents a graph neural network for drug-to-drug interaction (DDI) prediction, which explicitly models link type correlation. Basically, the drug-to-drug interaction prediction problem is a specific type of link prediction task, with drugs as vertices and interactions as edges, and the authors propose a graph neural network with an energy-based formulation where the link types are encoded as the graph edges. The authors validate their method against feedforward GNNs on two DDI prediction datasets, and achieve significantly improved performance.\", \"Pros\", \"Drug-to-drug interaction prediction is a relatively less explored application of deep learning with growing interest.\", \"The proposed energy-based formulation that considers link type correlation intuitively makes sense and performs significantly better than some of the existing GNN approaches.\", \"Cons\", \"Using an energy-based formulation for graph neural networks is not novel, and thus the paper lacks novelty from a methodology point of view. Thus this paper seems to fit better in a pharmaceutical journal than in a top-tier deep learning conference such as ICLR. I do not think the paper will be of interest to a large audience.\", \"The experimental validation is limited. The authors only compare against GNN and GLENN, without even mentioning other relevant baselines such as GCN and GAT. Thus I suggest the authors perform a more extensive comparative study.\", \"The qualitative analysis is insufficient, even as a domain-specific empirical study paper. 
It would be better if the authors included actual examples of drug-drug interactions predicted by the proposed model and an existing model, for further analysis and interpretation.\", \"In sum, while I believe that DDI prediction is an interesting application of deep learning with growing interest, I do not believe the paper has sufficient novelty or experimental validation / qualitative examples to be a meaningful contribution to ICLR. I suggest the authors work on the experimental validation and qualitative analysis part and submit it to a workshop or a pharmaceutical journal instead.\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes a framework to learn correlated drug-drug interactions based on structured prediction energy networks (SPEN). The core idea is to model the dependency structure of the labels (multi-label) by minimizing a designed energy function. The graph energy is designed as an MLP over the mean of all node embeddings, where the node embeddings are obtained through a graph convolutional network. The edge information is included in the node embedding when aggregating neighborhood information. The proposed method also introduces an additional test inference network to jointly train with the cost-augmented training network under the semi-supervised setting. The authors tested on two DDI datasets and the results show improvements compared to several baseline methods.\", \"Strengths\", \"motivation: the authors consider the correlations between DDI labels (multi-label), which could potentially improve prediction of DDI. The proposed method uses structure prediction energy networks to model such dependency.\", \"method: the proposed method introduces an additional test inference network to fit the energy minimization framework into a semi-supervised setting.\", \"Weakness\", \"The core part of the proposed method which differs from previous works needs to be elaborated more.\", \"The performance improvement seems to be marginal, especially on the second dataset.\", \"Detailed comments\", \"The work is based on structured prediction energy networks (SPEN, Belanger et al. 2016) and its follow-up works (e.g., Belanger et al. 2016, Lifu Tu et al. 2018). The overall framework is very similar to previous work, where the feature extraction part is replaced by a graph convolutional network module. The core difference lies in the additional testing inference network which is jointly trained for adapting supervised SPEN into a semi-supervised setting. This part actually differs from all previous works and needs to be elaborated more. Why does such a design make sense and what benefit can be brought through such a design? The formulation for semi-supervised SPEN could be defined more clearly and is worth elaborating.\", \"The experiments are run on 3 different random splits; based on the mean (std) of the evaluation metric, the performance of the proposed method does not vary much compared to baselines, especially on the second dataset, e.g., GNN 0.25 +/- 0.02 compared with GENN 0.26. 
Also, GLENN < GNN seems to imply that including energy is not the most important part for helping the task, but rather that the semi-supervised joint training is what truly improves the performance.\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Abstract:\\nUnderstanding drug-drug interactions (DDI) is an important task in drug development and prescription management. The authors proposed a new graph energy neural network (GENN) for DDI prediction. Compared to the existing Decagon model (Zitnik et al. 2018), the proposed new model considered correlations between DDI types and used a new energy function to capture this information. By comparing to the previous baselines, the authors demonstrated their approach was able to achieve SOTA performance in terms of prediction accuracy, be more robust to missing DDI data, and better capture the correlations between DDI types.\", \"major_comments\": \"The authors succeed in combining graph neural networks and structure prediction to capture correlations of DDI types between drug-drug pairs. They used an energy function based on graph neural networks to capture the dependencies among edge types in DDI graphs. \\n\\nHowever, the description of training the cost-augmented inference network and the test inference network is convoluted. The authors didn't elaborate on why the trained cost-augmented inference network cannot be directly used for test inference, and why the prediction for unknown DDI type is different in test inference. In addition, the notations used in the last sentence of the second paragraph of section 4.3 seem to be inconsistent with Eq. (4)\\n\\nThe authors compared their model (GENN) with multiple baselines and demonstrated superior performance only in PR-AUC. In the description for Table 1, the authors claimed: \\\"GENN largely outperforms others with respect to all metrics in both datasets.\\\" However, this conclusion cannot be supported by Table 1, in which GENN only shows trivial improvements compared to GNN for most reported metrics.\\n\\nLast but not least, the authors did not provide any limitations of their work.\", \"minor_comments\": \"In general, the paper is well written and easy to follow, except for sections 4.2 and 4.3. As mentioned above, the notations used in the last sentence of the second paragraph of section 4.3 seem to be inconsistent with Eq. (4)\"}" ] }
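A recurring point in the GENN record above — the decision's request that the semi-supervised design be "clearly and concisely presented", Reviewer #2's q1, and Reviewer #3's detailed comment — is the split between cost-augmented inference used during training (the paper's Eq. (5), with the extra cost term delta(Y, Y_hat)) and plain energy minimization used at test time (Eq. (4)). A minimal sketch of that split, following the generic SPEN-with-inference-networks recipe rather than GENN's exact equations; the names `energy`, `infer_train`, `infer_test` and the squared-difference `delta` are illustrative stand-ins, not the paper's definitions:

```python
import torch

def delta(y_hat, y_true):
    # Illustrative structured cost; GENN's delta(Y, Y_hat) may differ.
    return ((y_hat - y_true) ** 2).sum(dim=-1)

def train_loss(energy, infer_train, x, y_true):
    # Cost-augmented (Eq. (5)-style) hinge: the energy net descends this loss
    # while infer_train ascends it, seeking labelings that are low-energy AND
    # far from the gold labels. Because of the delta term, infer_train cannot
    # be reused directly at test time (cf. the a1 exchange above).
    y_hat = infer_train(x)
    hinge = delta(y_hat, y_true) - energy(x, y_hat) + energy(x, y_true)
    return torch.relu(hinge).mean()

def test_objective(energy, infer_test, x):
    # Eq. (4)-style test inference: approximate energy minimization only;
    # no cost term is available, since gold labels are unknown at test time.
    return energy(x, infer_test(x)).mean()
```

In the semi-supervised variant the authors describe, the test inference network reportedly shares parameters with the training one and the two objectives are optimized jointly over labeled and unlabeled edges; the sketch shows only the two objectives, not that joint schedule.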
H1eVlgHKPr
Event Discovery for History Representation in Reinforcement Learning
[ "Aleksandr Ermolov", "Enver Sangineto", "Nicu Sebe" ]
Environments in Reinforcement Learning (RL) are usually only partially observable. To address this problem, a possible solution is to provide the agent with information about past observations. While common methods represent this history using a Recurrent Neural Network (RNN), in this paper we propose an alternative representation which is based on the record of the past events observed in a given episode. Inspired by the human memory, these events describe only important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower. Using these benchmarks we show the advantage of our solution with respect to common RNN-based approaches.
[ "reinforcement learning", "self-supervision", "POMDP" ]
Reject
https://openreview.net/pdf?id=H1eVlgHKPr
https://openreview.net/forum?id=H1eVlgHKPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "agCVrWiLFe", "r1lPjaL3jr", "BylovTL2iH", "ByxIBpUhsH", "r1eaI3UhjB", "BJlBx38hoB", "H1eghoUhsH", "HyeyLsL2oB", "HyeSboInjB", "B1x9RcIhiH", "SyeCIqUnsr", "S1lLJcLhjS", "rJlhnKL3or", "HJglFFUnor", "BJgtIKUnjB", "rkl8zFUniH", "r1xGJtIhsS", "SJlfcdI2sS", "ByeQSdL2jB", "r1erzO82iB", "HyxBJd82or", "BJlUdvI3sr", "ryeUrPUnjH", "S1gaMDUnjS", "S1xiyDI2sB", "SJgKhLIhor", "rJxzim-EcS", "rye2pMZ0FS", "S1e-0pOIuS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740351, 1573838238968, 1573838179292, 1573838141943, 1573837908540, 1573837805193, 1573837736078, 1573837638765, 1573837564594, 1573837521776, 1573837398189, 1573837278178, 1573837236144, 1573837176430, 1573837137203, 1573837069744, 1573837017530, 1573836937855, 1573836858590, 1573836812879, 1573836764996, 1573836653777, 1573836606467, 1573836565075, 1573836514946, 1573836465234, 1572242330480, 1571848900232, 1570307529378 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/Authors" ], [ "ICLR.cc/2020/Conference/Paper2094/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2094/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2094/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors propose approaches to handle partial observability in reinforcement learning. 
The reviewers agree that the paper does not sufficiently justify the methods that are proposed, and even the experimental performance shows that the proposed method is not always better than the baselines.\", \"title\": \"Paper Decision\"}", "{\"title\": \"question 4\", \"comment\": \"\\\"It would be useful to know how the more standard PPO with RNN architecture performs, but the authors' choice of architecture is a fitting comparison to their method, and does perform well in practice on the more partially observed domains, so the current setup is satisfactory.\\\"\\n\\nAdditionally, we added two models with experiments in 3 Atari environments in the appendix of the new version.\"}", "{\"title\": \"question 3\", \"comment\": \"\\\"The authors should also provide more clarity on choosing the history head - is the head identity fixed after pretraining?\\\"\\n\\nYes, the head identity is kept fixed after pretraining; we stated this more explicitly in the updated version (Sec. 4.1).\"}", "{\"title\": \"question 2\", \"comment\": \"\\\"Although it would be expensive to show how changing L affects performance on all domains, some quantitative results on this hyperparameter would be useful - the same applies to C.\\\"\\n----\\nWe included additional experiments in the appendix of the new version for the hyperparameters L and S (see also answer 5 to Reviewer 2).\"}", "{\"title\": \"question 1\", \"comment\": \"\\\"First and foremost, the results should be run over several seeds with standard deviation/error reported (it appears that this might be the case for Obstacle Tower, but no error is reported). I believe that the large improvements in some of the domains are significant, but it would be best to have this confirmed empirically.\\\"\\n\\nWe performed all experiments with 3 random seeds and we have updated the results in Sec. 5.2 (Tab. 1, 2) in the new version, reporting the mean and the standard deviation values.\"}", "{\"title\": \"question 6\", \"comment\": \"\\\"On the first line of the last paragraph on p. 4, there is a missing reference \u00ab (Sec. ) \u00bb\\\"\\n\\\"Overall there are a bunch of typos throughout the paper that could easily be fixed\\\"\\n\\nWe fixed the typos in the new version.\"}", "{\"title\": \"question 5\", \"comment\": \"\\\"It would also be interesting to analyze the impact of varying the various new hyper-parameters (in particular S, C and L)\\\"\\n\\nWe included additional experiments in the appendix in the new version concerning parameters L and S. Unfortunately, due to limited rebuttal time, the experiments with different values of C are not ready yet. We will publish the results of the latter as soon as they are ready.\"}", "{\"title\": \"question 4\", \"comment\": \"\\\"Please explain better how the clustering technique from Ji et al. (2018) works, possibly in the Appendix if there is no room in the main body of the paper. This will make the paper more self-contained.\\nWithout fully understanding how this clustering technique works, it is difficult for me to get an intuition on how the clusters evolve during training, especially as new types of states are discovered by the agent. Some discussion on this topic would be appreciated.\\\"\\n\\nWe added a brief description of IIC in the appendix in the new version.\\nThe encoder is trained online with new observations, evolving with the agent during the training, as listed in Alg. 
1.\"}", "{\"title\": \"question 3\", \"comment\": \"\\\"I think a natural and important baseline to compare to is using the same architecture as in Fig. 2 but where the mapping Phi(o_t) is learned through regular backprop (using the same loss as when learning the mapping I(t)). This would validate that the advanced self-supervised clustering technique from Ji et al. (2018) is actually useful, and thus that the observed improvements are not simply due to providing 32 frames of history vs. 4 as in vanilla PPO.\\\"\", \"thank_you_for_your_suggestion\": \"it's an interesting idea. We implemented this solution and we performed an additional experiment, reported in the appendix of the new version.\"}", "{\"title\": \"question 2\", \"comment\": \"\\\"There is no comparison to PPO+RNN on Obstacle Tower\\\"\\n\\nWe performed this experiment, whose results have been included it in the new version of the paper (see Sec. 5.2, Tab. 2 and also the answer number 11 to Reviewer 3).\"}", "{\"title\": \"question 1\", \"comment\": \"\\\"Only 7 Atari games are used (vs 49 in the PPO paper the proposed technique is compared to), without justification for how they were chosen, and it seems like only 1 run is performed on each game (while RL algorithms are well known to exhibit high variance)\\nOn Obstacle Tower there seems to be also only one run of each algorithm (more runs could be done with different training & testing seeds in order to get an idea of the variance)\\\"\\n\\nIn new version in Sec. 5.1 we added motivation for choosing this subset on the Atari benchmark. We performed all experiments with 3 random seeds and we have updated our experimental results in Sec. 5.2 (Tab. 1, 2) reporting the mean and the standard deviation values.\"}", "{\"title\": \"question 15\", \"comment\": \"\\\"[5] talks about difficulties in MI based methods and I encourage the authors to look at the analysis in this paper. As it is only an arXiv version and not published yet, I have not based my assessment on the existence of this paper but still connection to such analysis will make this paper stronger.\\\"\\n\\nThank you for your suggestion, it's indeed an interesting paper and its analysis could improve the foundation of the IIC method. However, it's quite a different direction with respect to our work.\"}", "{\"title\": \"question 14\", \"comment\": \"\\\"Table 1: For Qbert, original PPO (Sch.) is best performing, not EDHR\\\"\\n\\nNote that in that table we compare columns \\u201cPPO\\u201d (our reproduction of the PPO algorithm), \\u201cPPO-RNN\\u201d and \\u201cEDHR\\u201d. This is done to remove the impact of minor technical implementation details. Conversely, the column corresponding to \\u201cPPO (Sch.)\\u201c was added only as a reference. In the new version of the paper we have better clarified this point in the beginning of Sec. 5.2 and we have graphically separated the \\u201cPPO (Sch.)\\u201d column from the other columns.\"}", "{\"title\": \"question 13\", \"comment\": \"\\\"Figure 3: Training dynamics seem to favor RNN approach for BreakOut, Gravitar. Do the authors have insight on why this is the case?\\\"\\n\\nPerformance in these environments is similar for PPO and PPO-RNN, indicating that history is less important, and PPO-RNN learns to focus on instantaneous observation. 
This is probably due to the fact that such environments are more reactive, so the salient information is contained in the very recent past and can be represented by the instantaneous information.\"}", "{\"title\": \"question 12\", \"comment\": \"\\\"For the experiments, authors use a specific 10000 steps budget but this seems to be highly curated. The experiments would be stronger if the authors show experiments over a range of budgets. This will also give insights on when RNN becomes better and whether there is a budget after which both methods perform equally well or RNN based approaches\\nsurpass the current approaches.\\\"\\n\\nWe adopted the hyperparameter values (number of steps included) from the original PPO paper (Schulman et al., 2017) (see Sec. 4.1). Scalability of the method is an interesting direction for research; we plan it for future work.\"}", "{\"title\": \"question 11\", \"comment\": \"\\\"Why do the authors not report PPO with RNN for Obstacle Tower?\\\"\\n\\nThis experiment requires a larger computational budget than the others, which is the reason it was missing in the initial submission. Following your suggestion, we performed this experiment, whose results have been included in the new version of the paper (Sec. 5.2, Tab. 2).\"}", "{\"title\": \"question 10\", \"comment\": \"\\\"Authors must compare with other methods that use RNN approaches (e.g. [1]). Also, if methods such as [2],[3],[4] are not directly applicable, they must at least use their RNN based architectures to modify PPO and compare several baselines.\\\"\\n\\nPlease see our answer number 4, which concerns the methods [1,2,3,4]. In addition, we included experiments using two other methods in the appendix of the new version.\"}", "{\"title\": \"question 9\", \"comment\": \"\\\"Focusing only on PPO and designing an RNN version of PPO constrains the effectiveness of experiments in validating the approach. Authors mention difficulty of training with RNN as one reason for PPO with RNN's underperformance but this is not convincing. Could the performance of PPO with RNN be limited only due to the specific RNN architecture used?\\\"\\n\\nAs mentioned in the paper, our proposed history representation is independent of the specific RL approach. Good performance and technical simplicity make PPO a good candidate for demonstration. We designed the PPO-RNN method to be as similar as possible to our EDHR for a fair comparison. We additionally include two other methods in the appendix of the new version.\"}", "{\"title\": \"question 8\", \"comment\": \"\\\"It is also useful to analyse how this method will work in the presence of a long vs short history. Will the clustering itself and hence the learned representations get affected by the length of the available history?\\\"\\n\\nPlease see our previous answers: the clustering process and the history construction are two separate phases (see Sec. 3, the \u201cEvent discovery stage\u201d and the \u201cHistory Representation\u201d stage), so they are conceptually independent.\\nWe included additional experiments in the appendix of the new version to show the impact of the history length (S). Note that S is not a parameter of the clustering encoder \\\\Phi; thus it does not have any effect on the representation learning.\"}", "{\"title\": \"question 7\", \"comment\": \"\\\"The authors attempt to use a clustering based approach with the hope of recovering important information, however mention that H(t) stores all the past events. Isn't this contradictory? 
Also, why can an RNN with an attention based mechanism not achieve a similar effect?\\\"\\n\\nNote that H(t) explicitly stores only the last S events, not all the past events (to avoid confusion, in Sec. 3 we replaced the sentence \u201cin H(t) all the past events are stored\u201d with \u201cin H(t) all the past S events are stored\u201d, but we believe this should also be clear from the context). In other words, H(t) explicitly stores all the past S events observed by the agent, while an RNN automatically chooses what to store in its hidden state and what not.\\n\\nNote that the clustering process (\u201cEvent discovery stage\u201d, Sec. 3) is separated from the \u201cHistory Representation\u201d stage. In the former, clustering is used to discover a dictionary of relevant events. In the latter, H represents the past S steps of the agent with respect to this dictionary. The dictionary should be representative of the whole environment, while H only represents the recent past.\\n\\nConcerning your proposed RNN+Attention based solution, you probably mean that Attention should be used to focus on specific (recent) inputs. However, as aforementioned, it is important to distinguish recent information (i.e., the past S agent observations) from the whole information used in the clustering process using self-supervision. With an Attention-based RNN, Attention can be used to select/emphasize some of the last S observations, but this largely differs from our \u201cEvent discovery stage\u201d, where patterns of observations are extracted from all the past trajectories.\"}", "{\"title\": \"question 6\", \"comment\": \"\\\"The authors mention that the H(t) matrix is highly sparse but also low-dimensional in all their experiments. Would this be the case for any other task? If not, is this a limitation of the method that you need S and C to be small?\\\"\\n\\nThe sparsity (which is obtained in any environment) is a property of the IIC method (specifically because of the last softmax layer). In more detail, the probability distribution of an observation with respect to a set of events is very peaked, meaning that each step is represented with C-1 numbers close to zero and one number close to one. Thus the history matrix is represented with S x (C-1) numbers close to zero and S numbers close to one, meaning that the matrix is sparse. The dimensionality of the matrix is SxC. \\nNote that this is not a limitation of the method; instead, it is an advantage, because an even more compact history representation may be produced, if necessary (e.g., using an extra fully connected layer for a dimensionality reduction). Representing each past step with a low-dimensional vector is an advantage because it can be processed by the RL agent using a smaller-capacity network.\"}", "{\"title\": \"question 5\", \"comment\": \"\\\"- In the event discovery stage, it is not clear why using consecutive observations is not useful. Also, if one does use L=1 (consecutive observations), can one reduce this method to RNN as you end up capturing all the previous history? Also, why is L=3 good across all different tasks? Does it have any relation to the use of 4 frames in I(t)?\\\"\\n\\nConsecutive observations are useful but may be less informative than farther observations, since the latter are more different from each other. In the appendix of the new version we show experiments with L=1 and L=8. 
The L=1 case cannot be reduced to an RNN approach, because they are conceptually completely different methods: in a common RNN there is no self-supervision, and the goal of a recurrent network is (generally speaking) not to predict the future but to store the past. RNNs are used in some specific self-supervised methods (see Sec. 2), but when an RNN is used as input to an RL algorithm, then it usually refers to a network trained using the reward as the supervisory signal and without any future prediction step (Sec. 2). In contrast, in our method L defines a temporal translation window which is used in the self-supervised training stage and not in the representation of the past information input to the agent.\\n\\nFor the same reason, the value of L is not related to the number of frames in the instantaneous representation: L is a hyperparameter of the unsupervised encoder \\\\Phi, while the size of I(t) is a hyperparameter of the RL model.\"}", "{\"title\": \"question 4\", \"comment\": \"\\\"The paper discusses several articles that provide background on the methods used; however, it fails to position the exposition in comparison to existing RNN based approaches, which is a big miss as the goal of the paper is to replace RNN based approaches [1,2,3,4].\\\"\\n\\nMethod [1] is interesting; we included it in the new version in the \\\"Related Work\\\", describing the difference from our method. Additionally, in the Appendix we included experiments with a common RNN-based architecture. Method [2] is similar to Hausknecht & Stone (2015), presented in \\\"Related Work\\\". Methods [3,4] focus on memory for RL, which is a slightly different direction of research, and these methods were not demonstrated on complex environments like the Atari benchmark, making the comparison with our approach infeasible.\"}", "{\"title\": \"question 3\", \"comment\": \"\\\"Further, RNN based methods can retain order information in a sequential history of observations. However, this method appears to not consider that information. Is this true or am I mistaken here? If this is true, why would this not create issues in learning good representations for tasks where order is indeed important?\\\"\", \"the_event_order_is_explicitly_retained_in_our_method\": \"each event has a specific row in the history matrix H (see Fig. 2 and Sec. 3).\"}", "{\"title\": \"question 2\", \"comment\": \"\\\"Also, why is this particular clustering approach (Invariant Information Clustering) chosen? The motivation for using this approach is not clear and the overall combination appears ad hoc.\\\"\\n\\nWe use temporally-close frames as positive pairs for IIC, thus the obtained clusters are consistent in time (see Sec. 5.3), which is required by the definition of \u201cevent\u201d (Sec. 1). This property, jointly with the high efficiency of IIC, makes the IIC method a proper candidate for our approach.\"}", "{\"title\": \"question 1\", \"comment\": \"\\\"It is not clear how the proposed clustering mechanism to discover events allows one to successfully capture information that an RNN based approach does. For instance, RNN helps to capture long-term dependencies but it is very hard to interpret the proposed model from that aspect. \\\"\\n\\nIn our proposed history representation (H), each observation is represented as a distribution over a set of discovered events. Basically, H is a record of the past S events. Events describe \u201clandmarks\u201d of the environment (see Sec. 1). 
Hence, H represents past information with respect to this dictionary of landmarks. In contrast, an RNN represents past information which can be prone to forgetting and is not necessarily related to important environment landmarks/events.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper focuses on the problem of learning state representations that can effectively capture historical information\\nin POMDPs. Specifically, the paper proposes an alternative approach to using RNN's for capturing such history - the authors adopt a variant of the recently proposed Invariant Information Clustering approach to discover important events in past observations and then use these events to learn a probability distribution over observations to represent the state information. The goal here is to address the unstable and inefficient training issues associated with RNN based architectures. The authors validate their approach with experiments on seven tasks from the Atari-57 benchmark and Obstacle Tower, both with discrete action spaces, and compare their performance against both the original and RNN versions of PPO. \\n\\nThe paper addresses an interesting problem to help learn better state representations in the absence of complete information about the environment. The approach of using IIC clustering (or any clustering approach) for learning state representations is novel. The overall goal of replacing RNN based methods in order to achieve stable, efficient learning in the presence of budget constraints is very useful and hence this approach is potentially a good step in that direction. Although not adequate, the results in figure 3 provide good insight into the effectiveness of the method in making training easier. \\n\\nHowever, I am inclined to reject this paper for the following reasons:\\n(1) The motivation for using the proposed clustering approach for history representation is neither clear nor well exposed. MI based methods are inherently difficult to learn and hence this approach needs rigorous analysis on why it works when it does and how it fails.\\n(2) The paper fails to position the new approach in comparison to related works both in discussions and experiments.\\n(3) The experiments are very limited in nature and fail to demonstrate the efficacy of the proposed approach effectively. Further, the key contribution focusing on learning effective representations under a constrained budget is not adequately tested.\", \"major_concerns\": [\"Motivation\", \"-------------\", \"It is not clear how the proposed clustering mechanism to discover events allows one to successfully capture information that an RNN based approach does. For instance, RNN helps to capture long-term dependencies but it is very hard to interpret the proposed model from that aspect. Also, why is this particular clustering approach (Invariant Information Clustering) chosen? The motivation for using this approach is not clear and the overall combination appears ad hoc.\", \"Further, RNN based methods can retain order information in a sequential history of observations. However, this method appears to not consider that information. Is this true or am I mistaken here? 
If this is true, why would this not create issues in learning good representations for tasks where order is indeed important?\", \"The paper discusses several articles that provide background on the methods used; however, it fails to position the exposition in comparison to existing RNN based approaches, which is a big miss as the goal of the paper is to replace RNN based approaches [1,2,3,4].\", \"Method\", \"------\", \"In the event discovery stage, it is not clear why using consecutive observations is not useful. Also, if one does use L=1 (consecutive observations), can one reduce this method to RNN as you end up capturing all the previous history? Also, why is L=3 good across all different tasks? Does it have any relation to the use of 4 frames in I(t)?\", \"The authors mention that the H(t) matrix is highly sparse but also low-dimensional in all their experiments. Would this be the case for any other task? If not, is this a limitation of the method that you need S and C to be small?\", \"The authors attempt to use a clustering based approach with the hope of recovering important information, however\", \"mention that H(t) stores all the past events. Isn't this contradictory? Also, why can an RNN with an attention based mechanism not achieve a similar effect?\", \"It is also useful to analyse how this method will work in the presence of a long vs short history. Will the clustering itself and hence the learned representations get affected by the length of the available history?\", \"Experiments\", \"----------\", \"Focusing only on PPO and designing an RNN version of PPO constrains the effectiveness of experiments in validating the approach. Authors mention difficulty of training with RNN as one reason for PPO with RNN's underperformance but this is not convincing. Could the performance of PPO with RNN be limited only due to the specific RNN architecture used?\", \"Authors must compare with other methods that use RNN approaches (e.g. [1]). Also, if methods such as [2],[3],[4] are not directly applicable, they must at least use their RNN based architectures to modify PPO and compare several baselines.\", \"Why do the authors not report PPO with RNN for Obstacle Tower?\", \"For the experiments, authors use a specific 10000 steps budget but this seems to be highly curated. The experiments\", \"would be stronger if the authors show experiments over a range of budgets. This will also give insights on when RNN becomes better and whether there is a budget after which both methods perform equally well or RNN based approaches\", \"surpass the current approaches.\", \"Figure 3: Training dynamics seem to favor the RNN approach for BreakOut, Gravitar. Do the authors have insight on why\", \"this is the case?\"], \"minor_points_to_improve_submission_not_affecting_the_score\": \"- The paper needs to be proofread for various typos and sentence construction issues\\n- Table 1: For Qbert, original PPO (Sch.) is best performing, not EDHR\\n- [5] talks about difficulties in MI based methods and I encourage the authors to look at the analysis in\\nthis paper. As it is only an arXiv version and not published yet, I have not based my assessment on the\\nexistence of this paper but still a connection to such analysis will make this paper stronger.\\n\\n\\n[1] Deep Variational Reinforcement Learning for POMDPs, Igl et. al.\\n[2] On improving deep reinforcement learning for POMDPs, Zhu et. al.\\n[3] Policy Learning with continuous memory states in partially observed robotic control, Zhang et. 
al.\\n[4] Memory-based control with recurrent neural networks, Heess et. al. \\n[5] On Mutual Information Maximization and Representation Learning, Tschannen et. al.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new way to represent past history as input to an RL agent, which consists of clustering states and providing the (soft) cluster assignments of past states in the input. The clustering algorithm comes from previous work based on mutual information, where close (in time) observations are assumed to be semantically similar. The proposed scheme, named EDHR (Event Discovery History Representation), is shown to perform better than PPO and an RNN variant of PPO on (most of) 7 representative Atari games, and better than PPO on the Obstacle Tower benchmark.\\n\\nI would like to see this paper eventually published as I find the proposed technique original and quite relevant to current RL research; however, I feel like its empirical evaluation is too weak at this time, which is why I am recommending rejection. I hope the results can be strengthened in a revised version so that I can increase my rating.\", \"the_main_limitations_of_the_current_empirical_evaluation_are\": \"\\u2022\\tOnly 7 Atari games are used (vs 49 in the PPO paper the proposed technique is compared to), without justification for how they were chosen, and it seems like only 1 run is performed on each game (while RL algorithms are well known to exhibit high variance)\\n\\u2022\\tOn Obstacle Tower there seems to be also only one run of each algorithm (more runs could be done with different training & testing seeds in order to get an idea of the variance)\\n\\u2022\\tThere is no comparison to PPO+RNN on Obstacle Tower\\n\\u2022\\tI think a natural and important baseline to compare to is using the same architecture as in Fig. 2 but where the mapping Phi(o_t) is learned through regular backprop (using the same loss as when learning the mapping I(t)). This would validate that the advanced self-supervised clustering technique from Ji et al. (2018) is actually useful, and thus that the observed improvements are not simply due to providing 32 frames of history vs. 4 as in vanilla PPO.\\n\\nOther (more minor) remarks:\\n\\u2022\\tPlease explain better how the clustering technique from Ji et al. (2018) works, possibly in the Appendix if there is no room in the main body of the paper. This will make the paper more self-contained. \\n\\u2022\\tWithout fully understanding how this clustering technique works, it is difficult for me to get an intuition on how the clusters evolve during training, especially as new types of states are discovered by the agent. Some discussion on this topic would be appreciated.\\n\\u2022\\tIt would also be interesting to analyze the impact of varying the various new hyper-parameters (in particular S, C and L)\\n\\u2022\\tOn the first line of the last paragraph on p. 4, there is a missing reference \u00ab (Sec. ) \u00bb\\n\\u2022\\tOverall there are a bunch of typos throughout the paper that could easily be fixed\", \"follow_up_on_author_response\": \"I remain inclined to stick to my rejection recommendation. 
I appreciate the efforts in providing more results, but I find them too limited and not entirely convincing: Table 4 is only done on 3 games, and the only one where the proposed method has a clear edge over \"Without self-supervision\" is MsPacMan. In addition, the fact that PPO+RNN shows much better performance than the proposed method on Obstacle Tower is also worrying.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors study the problem of RL under partially observed settings. While most current (D)RL approaches use RNNs to tackle this problem, RNNs are trickier to optimise than FFNNs - in practice RNN-based DRL agents can perform well on partially observed problems, but may require more effort to optimise, and may underperform FFNNs on domains with no/less partial observability. The proposed solution is to use a FFNN, but provide a \"history representation\", which is a set of feature vectors for previous timesteps that is extracted from a second network. The second network is trained separately using self-supervision (specifically, IIC, but adapted to use temporal consistency of observations rather than data augmentation). The proposed algorithm outperforms both PPO with a FFNN and PPO with an RNN on 5/7 Atari games (mainly games where partial observability is higher), as well as on the new and challenging Obstacle Tower benchmark - though PPO with RNN results are conspicuously missing on the latter! Given the promising approach (which also has a 2x better wall-clock training time than PPO with an RNN) and results, I would give this paper a weak accept. Some nice properties are that the instantaneous feature extractor is trained using RL, while the history feature extractor is trained using self-supervision at a high level (not pixel level), so that they are probably complementary; in addition it appears that in practice the resulting history features are sparse and usually binary, which is an intriguing and potentially useful property for future work.\\n\\nThere are several things to be done to improve the paper, however. First and foremost, the results should be run over several seeds with standard deviation/error reported (it appears that this might be the case for Obstacle Tower, but no error is reported). I believe that the large improvements in some of the domains are significant, but it would be best to have this confirmed empirically. The authors could improve the presentation of background material by giving techniques names instead of using just author names. Although it would be expensive to show how changing L affects performance on all domains, some quantitative results on this hyperparameter would be useful - the same applies to C. The authors should also provide more clarity on choosing the history head - is the head identity fixed after pretraining? It would be useful to know how the more standard PPO with RNN architecture performs, but the authors' choice of architecture is a fitting comparison to their method, and does perform well in practice on the more partially observed domains, so the current setup is satisfactory.\"}" ] }
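The question-5/6/7 exchanges and the reviews in the EDHR record above all circle the same object: the history matrix H(t), of shape S x C, whose rows are the near-one-hot event distributions of the agent's last S observations under the IIC-trained encoder Phi, kept in temporal order. A minimal numpy sketch of how such a matrix could be assembled; the probability-returning encoder `phi` and the values of S and C are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

S, C = 32, 8  # history length and number of event clusters (illustrative values)

def history_matrix(observations, phi):
    """Stack event distributions of the last S observations into H(t).

    `phi` is assumed to map one observation to a length-C softmax vector, as an
    IIC-style clustering head would. Rows are kept in temporal order, which is
    how the authors say event order is retained (question 3 above).
    """
    H = np.zeros((S, C), dtype=np.float32)
    recent = observations[-S:]          # only the last S steps are represented
    for row, obs in enumerate(recent):
        H[row] = phi(obs)               # peaked softmax -> a near-binary row
    return H

# Because each row is peaked, roughly S of the S*C entries are near one and the
# rest near zero -- matching the sparsity claim in the question-6 answer. Early
# in an episode (fewer than S observations) the remaining rows stay zero here.
```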
rke7geHtwH
Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning
[ "Noah Siegel", "Jost Tobias Springenberg", "Felix Berkenkamp", "Abbas Abdolmaleki", "Michael Neunert", "Thomas Lampe", "Roland Hafner", "Nicolas Heess", "Martin Riedmiller" ]
Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources. We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
[ "Reinforcement Learning", "Off-policy", "Multitask", "Continuous Control" ]
Accept (Poster)
https://openreview.net/pdf?id=rke7geHtwH
https://openreview.net/forum?id=rke7geHtwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "PvkQ8cMe9M", "H1eygCrnjr", "Byehanrnir", "HJl6c3HnjS", "HygcOQSTKH", "S1gZXAlnKH", "B1eFTysoKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740323, 1573834215344, 1573833923700, 1573833876801, 1571799921695, 1571716633115, 1571692481285 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2093/Authors" ], [ "ICLR.cc/2020/Conference/Paper2093/Authors" ], [ "ICLR.cc/2020/Conference/Paper2093/Authors" ], [ "ICLR.cc/2020/Conference/Paper2093/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2093/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2093/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors present a novel stable RL algorithm for the batch off-policy setting, through the use of a learned prior. Initially, reviewers had significant concerns about (1) reproducibility, (2) technical details, including the non-negativity of the lagrange multiplier, (3) a lack of separation between performance contributions of ABM and MPO, (4) baseline comparisons. The authors satisfactorily clarified points (1)-(3) and the simulated baseline comparisons for (4) seem reasonable in light of how long the real robot experiments took, as reported by the authors. Futhermore, the reviewers all agree on the contribution of the core ideas. Thus, I recommend this paper for acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for the thorough review and detailed responses.\", \"we_want_to_first_address_the_reviewers_concerns_regarding_the_benefits_that_learning_an_rl_policy_in_addition_to_the_abm_prior_brings\": \"1. We agree that the idea of learning an ABM based on trajectories in the data that are \\u201cgood\\u201d for the current task is one of the main contributions of the paper. We also agree that one of its major benefits is its simplicity: in particular, it requires no hyper-parameters to be tuned (like the scale of an exponential function of the Q-values / Returns). The reviewer thought that this contribution could have been emphasized more clearly and we have done so both in the methods and experimental section now (please see the updated paper). \\n\\n2. Regarding the improvement RL+ABM obtains over ABM: The ABM policy is indeed often a good policy by itself. This is particularly true for simple control suite tasks such as cheetah and walker. For the most difficult tasks in our experimental comparisons (stacking with the robot arm in simulation), learning an RL policy using the prior still provides significant benefits to performance. For the two most difficult tasks (stacking the blocks and the full task, \\u201cstack-and-leave\\u201d) the ABM policy only achieves about 70 % (stack) and 60 % (stack and leave) of the RL policy performance (i.e. it often drops the block and has to try again).\\nWe appreciate that spotting this difference might have not been easy in the initial version of the paper (because the axes in Figures 7 and 8 in the appendix had different limits). We have updated the plots to have uniform axis scales between BM and RL policies. In addition, we\\u2019ve included a tabular representation of the algorithm performances in the appendix to make direct comparison easier.\\n\\n3. 
We want to emphasize that all different variants of our algorithm significantly outperform the published state-of-the-art baselines (BCQ & BEAR) on the tasks we consider. We think that this is an important contribution to the literature on BatchRL.\\n\\n4. Regarding prior performance plots in the main paper: We omitted these in the initial submission for some of the plots based on feedback on an early draft that the plots were \u201ctoo busy\u201d. We appreciate that this omission was more confusing than helpful to the reviewer and have included dashed lines - indicating the performance of the prior - in all plots in the main paper now.\\n\\nWe also understand that it can be appealing to have an even simpler algorithm (removing the policy improvement step as suggested by the reviewer) when one does not care about top performance. As the reviewer points out, this can be expected to give slightly different results from an ABM prior learned alongside the RL policy (because the advantage calculated in that case is w.r.t. the RL policy). We have performed initial experiments in this direction which indicate that simply regressing Monte Carlo returns in combination with the ABM prior (i.e. not learning a Q-function but regressing a value V(x) against n-step bootstrapped returns) works ok for control suite domains (as perhaps also indicated by the concurrent submissions mentioned by the reviewer), but fails for the multi-task robot data. This highlights a feature of our paper: while prior work considered learning from \u201cnoisy\u201d data collected from either random or partially-trained policies, we consider multi-task experiments where one must learn from data generated by policies with goals that differ in non-random ways. While MC returns do not work well (due to the noise in the return estimation), another option is to stick to the policy iteration scheme outlined in our paper, but remove the policy improvement step and set pi_{i+1} = pi_\\prior, i.e. assuming \\epsilon = 0 - thus learning a Q-function for the ABM prior. We find that with this procedure we can train an ABM model that is about as good as the ABM prior obtained during learning of ABM+MPO. In short: this does give a further simplified method, though it doesn\u2019t achieve the maximum performance on the more difficult tasks (as described above). We have included a discussion in the appendix and have added results for the control suite. We will add the corresponding results for the simulated robot domain once training finishes, though ongoing experiments suggest similar results (i.e. ABM recovers the prior performance).\\n\\nFinally, regarding the question about baselines for the real robot experiment: Indeed, the initial run for collecting the data took over a week, and training offline while recording all statistics on the real robot took another 3 days (the robot was used only for evaluation purposes during that time). We hence settled on comparing different algorithms in simulation, which we found in previous experiments to be a good predictor of real robot performance.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for the review, positive feedback and appreciation of the real robot experiments. We agree with the reviewer that reproducibility is important. The initial paper submission had an algorithm box in the appendix, which might not have been obvious. 
We have updated the paper to ensure an explicit mention of the algorithm listing is included in the main text.\\n\\nIf the reviewer would like to see additional details in the listing, please let us know, and we will adjust it accordingly.\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the review and comments, as well as for spotting two small errors.\\nIndeed, the reviewer is correct: the Lagrange multiplier should be non-negative. This is handled correctly in our implementation but was not explained in the appendix for this step. We have updated the paper to reflect this.\\n\\nWe have also fixed the mentioned issue with overlapping text in Fig. 7, and we have used consistent axis scales between prior and RL plots in the appendix to make scale comparisons clearer. In addition, we included tables in the appendix to make comparison easier and updated the plots in the main paper.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a novel approach to reinforcement learning from batched data that come from very different sources. To achieve this, the authors propose to learn a prior model and then constrain the RL policy to a trust region around this prior, so that it does not deviate from the domain covered by the data.\\n\\nThe paper is clearly written.\\n\\nStrength: The paper is very novel and tries to solve a very challenging problem. The success of this approach shows the potential of collecting large-scale data indiscriminately. With that, data efficiency will be much improved! The experiments show superior performance compared to other methods. This method also works for real robots.\\n\\nWeakness: There is no algorithm box, so it can be hard to fully reproduce. The paper mentions the stability of the method; however, the analysis of this aspect is limited.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a novel off-policy reinforcement learning algorithm based on a learned prior. Although it is just an extension of existing works, it does outperform the state-of-the-art methods and can be used on real-world robots. The technical details are well introduced and analysed. The experimental results and comparison are sufficient.\\n\\nHowever, there are also some details that require further explanation. For example, in Equation (10), the Lagrange multiplier should be constrained to be non-negative, but the authors have not taken this into consideration in the optimization process.\\nBesides, there are also some typesetting problems in the paper, such as in Fig. 7.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method for \u201coffline RL\u201d (a.k.a. \u201cbatch RL\u201d), i.e. 
reinforcement learning from a given static dataset, with no option to perform on-policy data collection. Contrary to prior work (Fujimoto et al, Kumar et al, Agarwal et al) in this area, which focuses on making Q-learning robust in the offline RL setting, the authors in this paper instead propose making the policy update step robust to the batch RL setting. They achieve this by constraining the policy that is being learned to stay close to a learned \u201cprior policy\u201d (using KL divergence as a distance metric). The authors provide two different ways of learning this prior policy - the first one is a simple behavior model policy (which involves fitting a model using maximum likelihood on the entire dataset - effectively performing behavior cloning on the entire dataset), while the second is a more sophisticated \u201cadvantage behavior model\u201d, which fits a model only on the \u201cgood\u201d data (i.e. environment transitions with a positive advantage, where the advantage is estimated using the current policy). The authors extend the MPO and SVG algorithms to work with this constrained update step.\\n\\nThe authors provide experiments on the standard DM control suite tasks and some robotic tasks (both simulated and real world). For all experiments, the authors collect data by first running MPO in the standard RL setting (i.e. online data collection), and use the replay buffer from these successful RL runs to relearn a policy. The proposed method performs comparably to other methods in some settings (like all the experiments in the 2K episodes setting in Figure 2), and performs slightly better in some other settings (like Hopper and Quadruped in the 10K episodes setting). The gains are more significant in the non-standard robotics domains (Stack, Stack/Leave). The real world robotics results are not compared to any baseline (perhaps due to evaluation being expensive / time-consuming). However, I think the experimental results, especially Figures 7 and 8, point to a potential flaw in the proposed algorithm. In Figure 7 in the appendix, the learned behavior model (ABM) gets about the same performance as the proposed method (ABM+MPO) in pretty much all four tasks. In one of the tasks (hopper), ABM+MPO does slightly better than ABM alone, but in this case plain MPO is better than ABM + MPO, indicating that this might just be a task on which MPO does well (for some unexplained reason). Similarly, in the robotics results in Figure 8, ABM alone performs comparably to ABM+MPO in five of the seven tasks. \\n\\nThese experimental results imply to me that most of the performance actually comes from the learned prior (ABM), and whether performing MPO on top of it leads to any improvement or not is hard to predict^[1]. The ABM model itself is somewhat novel, and similar ideas under review at ICLR this year have shown that this alone is a useful algorithm for offline RL, see https://openreview.net/forum?id=H1gdF34FvS and https://openreview.net/forum?id=BJlnmgrFvS. However, the ABM model is not emphasized much in the paper. It is also unclear to me why the ABM/BM results were placed in the Appendix for Figure 3, while similar results for experiments in Figure 1 were provided using dotted lines. \\n\\nFor the paper to be accepted for publication, I think it needs to make a stronger argument (experimentally, at least) about the proposed algorithm being superior to ABM. 
If this is not really the case, then I think it would require a substantial rewrite to emphasize that ABM is where most of the model performance comes from (as seen in concurrent work listed above). \\n\\n[1]: A slight caveat here: in the proposed Algorithm 1, the ABM model is learned online along with the policy, but I believe it could be learned without the policy - for example, by using MC returns as in https://openreview.net/forum?id=H1gdF34FvS. \\n----------------------\\nEdit: The author response has convinced me to bump my rating to a weak accept.\"}" ] }
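The exchange in the record above centers on two mechanics: fitting the advantage-weighted behavior model (ABM) prior on the "good" transitions, and then improving a policy under a KL constraint towards that prior. Below is a minimal NumPy sketch of both steps; every name, shape, and constant is an illustrative assumption rather than the authors' actual implementation, and the policy objective is only the penalized form of the constrained problem, not the full MPO update.

import numpy as np

def abm_prior_loss(logp_prior, returns, values):
    # Advantage filter: keep only transitions whose n-step return beats the
    # current value estimate -- the "good" data for the task at hand. Using
    # an indicator instead of exp(advantage / temperature) avoids a scale
    # hyper-parameter, matching the simplicity argument in Response #1.
    keep = (returns - values >= 0.0).astype(np.float32)
    # Weighted behavior cloning: maximize log-likelihood of the kept actions.
    return -np.sum(keep * logp_prior) / max(float(keep.sum()), 1.0)

def policy_loss(logp_pi, q_values, kl_to_prior, epsilon, lam_raw):
    # Penalized form of: maximize E[Q] subject to KL(pi || prior) <= epsilon.
    # The Lagrange multiplier must stay non-negative (the point raised in
    # Review #3); here that is enforced by a softplus reparameterization.
    lam = np.log1p(np.exp(lam_raw))
    return -np.mean(q_values * logp_pi) + lam * (np.mean(kl_to_prior) - epsilon)

Setting epsilon = 0 collapses the policy onto the prior (the simplified pi_{i+1} = pi_prior variant discussed in Response #1), while epsilon > 0 keeps the improvement step that recovers the remaining performance on the hardest stacking tasks.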
BJxQxeBYwH
Are Powerful Graph Neural Nets Necessary? A Dissection on Graph Classification
[ "Ting Chen", "Song Bian", "Yizhou Sun" ]
Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting in the Graph Feature Network (GFN), which is a simple lightweight neural net defined on a \textit{set} of graph augmented features. Further linearization of GFN's set function results in the Graph Linear Network (GLN), which is a linear function. Empirically, we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN could match or exceed the best accuracies produced by recently proposed GNNs (at a fraction of the computation cost), while GLN underperforms significantly. Our results demonstrate the importance of the non-linear set function, and suggest that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.
[ "graph neural nets", "graph classification", "set function" ]
Reject
https://openreview.net/pdf?id=BJxQxeBYwH
https://openreview.net/forum?id=BJxQxeBYwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "bfn9DplVgh", "eO0TSjvspL", "rylyUz9voB", "Bkxh3xqwsH", "S1xJYl9vsB", "HklhfgqDsr", "B1xn-Pmq5B", "SygrzG5uqH", "Hye94W7M5r", "HJgX4T6CtH", "BJg-ri4CYB" ], "note_type": [ "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1584009849293, 1576798740294, 1573524039020, 1573523636462, 1573523574611, 1573523475845, 1572644612390, 1572540940699, 1572118833883, 1571900714582, 1571863353302 ], "note_signatures": [ [ "~Clément_Vignac1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2092/Authors" ], [ "ICLR.cc/2020/Conference/Paper2092/Authors" ], [ "ICLR.cc/2020/Conference/Paper2092/Authors" ], [ "ICLR.cc/2020/Conference/Paper2092/Authors" ], [ "ICLR.cc/2020/Conference/Paper2092/Authors" ], [ "~Boris_Knyazev1" ], [ "ICLR.cc/2020/Conference/Paper2092/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2092/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2092/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Related results for graph classification\", \"comment\": \"Hello,\\nI just discovered your paper, thank you for this very comprehensive work. I wanted to bring to your attention a short paper about node classification, in which we reached very similar conclusions: https://arxiv.org/pdf/1911.05384.pdf\\n\\nOur main focus was the study of the relative performance of graph neural networks depending on the number of training examples and features in the dataset, which is a different perspective. However, one observation that we made is that even when a lot of training data is available, intertwining propagation and learning layers is not useful. We found that in this case, it was better to use several propagation layers (with no trainable parameters), followed by a non-linear feature extractor (i.e a MLP). It looks very close to your conclusions about graph classification.\\n\\nWe were not aware of your work when we wrote the paper and therefore did not cite you, but we'll make sure to do it whenever we present it.\"}", "{\"decision\": \"Reject\", \"comment\": \"This paper proposes to split the GNN operations into two parts and study the effects of each part. While two reviewers are positive about this paper, the other reviewer R1 has raised some concerns. During discussion, R1 responded and indicated that his/her concerns were not addressed in author rebuttal. Overall, I feel the paper is borderline and lean towards reject.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Revision log\", \"comment\": \"This is to log what we have changed in the revision:\\n\\n1. We added comparisons to RETGK and GNTK as suggested by Reviewer 1.\\n2. We clarified a notation as suggested by Reviewer 2.\\n3. We added an experiment on varying dataset size according to the comment of Reviewer 3.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the time and detailed comments. 
Please find our responses to the comments below.\\n\\n[Analysis and insights]\\n\\nTwo types of theoretical analysis are presented in this paper:\\n 1) We prove that GFN can be derived by linearizing the graph filtering part of GNNs (Proposition 1), and leverage this theoretical connection to decouple the two GNN parts and study their importance separately.\\n 2) We show that GFNs can be a very powerful framework without the restriction on the feature extraction function \u03b3(G, X) and the exact forms of the set function (Proposition 2), which is encouraging for future graph function design.\\n\\nRegarding the gaps between GCN and GFN among datasets, we note that on 6 out of 10 datasets, GFN outperforms its GNN counterpart in fair comparisons, and also note the gaps are *small* as they are within *1 standard deviation*. We are not convinced that these gaps are substantial, and thus conclude that both methods are on par across the whole set of benchmarks.\\n\\nThe main insight of this paper is that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.\\n\\n[Non-linearity in GNN\u2019s middle layers]\\n\\nWe do account for the non-linearity in GNNs, as our GNN baselines have non-linearity in them. When the nonlinearity in a GNN\u2019s middle layers is removed, we prove that it can be expressed as a GFN with appropriate graph features (Proposition 1). By comparing GFN and GNN, we are testing the importance of the nonlinearity of the graph filtering function (in GNN\u2019s middle layers).\\n\\n[More comparisons]\\n\\nAt the time this work was conducted, the state-of-the-art GNN variant was GIN (Xu et al., ICLR\u201919), which we compared against in this work (among 7 other baselines). We\u2019d also like to point out that our goal is to dissect GNN variants, while both suggested papers are based on graph kernels, which are quadratic in the number of nodes and graphs (e.g. the faster RetGKII is generally inferior to the much slower RetGKI, and GNTK cannot scale to the Reddit datasets). In contrast, our GFN has linear complexity and is thus very fast/scalable in practice (Figure 2), and its performance is better or comparable averaged over all benchmarks. Nonetheless, we appreciate the reviewer\u2019s time and evaluation and have thus added the full comparison and discussion in the revision (Appendix H), in good faith that the reviewer would also appreciate the contributions of our work.\\n\\nWe\u2019d like to clarify further as necessary, so please feel free to let us know if any of the concerns are not fully addressed.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your time and positive feedback. Regarding the datasets, we compared 12 social and biological graph datasets (from 188 to 11929 graphs), which are the most widely used standard benchmarks for the graph classification task as of today (some existing work does not even include the largest, RE-M12K, due to scalability issues).\\n\\nOn the dataset size, we have tried to incorporate your comments and performed extra experiments. To see how varying the dataset size affects the performance of GFN and GCN, we take the largest RE-M12K dataset (11929 graphs) and randomly sample datasets of different sizes (from 10% of graphs to 100% of graphs). We run 10-fold cross-validation on each of the datasets, and found that as the dataset size increases, it becomes harder to overfit (especially for GFN), but GFN still performs as well as, if not better than, GCN. 
Details of this experiment are added to Appendix I.\\n\\nOn the dataset complexity, we fully agree that with more complex tasks/datasets, powerful GNNs could probably show better performance. And in fact, that is also part of our goal in publishing our work: to raise awareness that common graph classification benchmarks are likely inadequate for testing advanced GNN variants. We wish the community as a whole to explore and adopt more convincing benchmarks for testing advanced GNN variants, or to include GFN as a standard baseline to provide a sanity check.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your time and valuable feedback.\\n\\n[More discussion on the observations]\\n\\nWe agree that there is more than one possibility for the empirical observations. Allow us to re-elaborate our main observation: what our experiments show is that GNN can overfit the training set, but it doesn\u2019t generalize better than GFN (GNN with a linearized graph filtering function) on a broad set of benchmarks. \\n\\nOne possibility is the inadequacy of existing graph classification benchmarks, which we are inclined to think is the case. We have tried our best and tested on the most widely used benchmarks across the spectrum (from 188 to 11929 graphs). We also tried varying the dataset size by subsampling the largest dataset (RE-M12K), and the results can be found in Appendix I. We hope, along with the whole community, to establish and adopt more complex real datasets to test whether the observation still persists. \\n\\nThe other possibility is that linear graph filtering may be a good inductive bias for the tested datasets/problems. This is what other studies on node classification (e.g. Wu et al., ICML\u201919) suggest as well - GNNs are performing low-pass filtering. However, this is again dependent on the tasks and datasets considered.\\n\\nThe third possibility is that, as suggested by the reviewer, although GNNs can overfit, they are not capable of capturing the generalizable features (or at least do not prioritize learning those features). To show this is the case, we would need to improve existing GNNs (e.g. architecture, objective) so that they can generalize better on existing benchmarks. We have not been able to find new techniques, or existing work, that can identify those more generalizable features.\\n\\nWe admit that our work has limitations in fully answering these questions, but we believe raising the right question itself (with solid experimental observations) is an important step towards good answers. We wish our work to raise awareness of the phenomenon so that it can be better studied in the future. What\u2019s more, the proposed GFN can serve as a fast and accurate approximation to GNN for the graph classification task, which we believe is a practical contribution.\\n\\n[Other datasets and tasks]\\n\\nIn this work we focus on the graph classification problem on 12 datasets, as they are the most widely used benchmarks for recently proposed advanced GNN variants. However, we agree that to further demystify the above possibilities, more work should be done to adopt GFN as a baseline and apply it to more datasets/tasks.\\n\\nToward that end, we conduct experiments on image classification as graph classification on MNIST, where we find a significant gap between GCN and GFN (Appendix D), which suggests non-linear graph filtering is important for image classification when treating images as graphs (unlike other natural graph datasets). 
We wish to conduct more meaningful downstream tasks that use graph neural nets, but this requires careful selection and establishment of benchmark datasets, so we defer it to future work.\\n\\nAs for the node classification task: it does not require the graph readout function (i.e., it only has the graph filtering function), and it is typically tested in the transductive setting (i.e. a single graph), thus it is a simpler task. Wu et al. (ICML\u201919) have shown that fully linearizing GCN yields similar performance on several node classification datasets, which can be seen as a special case of GFN without the set function. However, fully linearizing GCN for graph classification (i.e. GLN) significantly degenerates the performance, making this an important distinction between graph and node classification tasks.\\n\\n[Notation clarification]\\n\\nWe have updated the draft to clarify \\tilde{A}. As shown below Eq. 2, $\\tilde{A} = \\tilde{D}^{-1/2}(A+\\epsilon I)\\tilde{D}^{-1/2}$ is the normalized adjacency matrix; with \\epsilon=1 this is the one proposed in Kipf and Welling (2016). We use this formulation by default (with \\epsilon=1e-8), but want to note that other formulations of \\tilde{A} are also allowed (e.g. different normalized graph Laplacians) under the general framework of GFN.\"}", "{\"title\": \"It is one instantiation of graph augmented features in our framework\", \"comment\": \"Thanks for your interest in our work. It is fair to instantiate graph augmented features with other filters/operators; in our work, we follow Kipf and Welling (2016) and use the modified adjacency matrix with the renormalization trick, which is shown to be better than Chebyshev polynomials in their work. But I think Chebyshev polynomials can probably be used as another instantiation of the graph augmented features, along with possibly many more. We are different from those GCN-based methods (with normalized adjacency or Chebyshev polynomials) in the sense that we fix the graph augmented features at learning time, and treat the graph as a set.\"}", "{\"title\": \"Relation to Chebyshev graph convolution\", \"comment\": \"Thank you for an interesting paper. I found that Eq. 5 resembles the Chebyshev graph convolution proposed in [1], which you cite in the first sentence only. The nice thing about [1] is that it approximates spectral graph convolution if K is large enough and uses the orthogonal Chebyshev basis, so it's theoretically sound. In your Eq. 5 you just take powers of adjacency matrices, and my hypothesis is that it can lead to unstable training dynamics, which might explain the somewhat lower than expected performance when you use K>1.\\n\\nIt would be interesting to see the connection of your formulation in Eq. 5 to [1].\\n\\n[1] Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper dissects the importance of two parts in GCN: 1) nonlinear neighborhood aggregation and 2) the nonlinear set function, by linearizing the two parts, resulting in the Graph Feature Network (GFN) and Graph Linear Network (GLN). It shows empirically that GFN achieves almost the same performance while GLN is much worse, suggesting the nonlinear graph neighborhood aggregation step may be unnecessary. 
Extensive ablation studies are conducted to single out the effects of various factors.\\n\\nThe paper studies an interesting problem and sets out a good plan of experiments to verify the hypotheses. The results are interesting: merely constructing graph neighborhood features alone is enough to get performance comparable with GCN, since the nonlinearity in the set function is strong enough. The experiments are designed nicely: 1) the paper compares with various baselines on a variety of popular benchmarks; 2) ablation studies single out the importance of different graph features, such as degree and multi-hop averages; 3) it verifies whether the good performance of GFN comes from easier optimization.\\n\\nThe paper is also clearly written, with clean notations and well-structured sections.\\n\\nI think the experiments can be improved by comparing on larger, more complex datasets. Figure 1 seems to suggest GCN is overfitting compared to GFN due to its extra capacity--significantly better training accuracy but slightly worse test accuracy. It is usually the case that larger and more complex datasets require more sophisticated models. But the paper makes a good case for GFN on these datasets for the graph classification task.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a dissection analysis of graph neural networks by decomposing GNNs into two parts: a graph filtering function and a set function. Although this decomposition may not be unique in general, as pointed out in the paper, these two parts can help analyze the impact of each part in the GNN model. Two simplified versions of GNN are then proposed by linearizing the graph filtering function and the set function, denoted as GFN and GLN, respectively. Experimental results on benchmark datasets for graph classification show that GFN can achieve comparable or even better performance compared to recently proposed GNNs, with higher computational efficiency. This demonstrates that the current GNN models may be unnecessarily complicated and overkill for graph classification. These empirical results are pretty interesting to the research community, and can encourage other researchers to reflect on whether it is worth having more complex and more computationally expensive GNN models to achieve similar or even inferior performance. Overall, this paper is well-written and the contribution is clear. I would like to recommend a weak accept for this paper. If the suggestions below can be addressed in the author response, I would be willing to increase the score.\\n\\nSuggestions for improvement:\\n\\n1) Considering the experimental results in this paper, it is possible that the existing graph classification tasks are not that difficult, so that the simplified GNN variant can also achieve comparable or even better performance (easier to learn). This can be conjectured from the consistently better training performance but comparable testing performance of the original GNN. Another possibility is that even though the original GNN has larger model capacity, it is not able to capture more useful information from the graph structure, even on tasks that are more challenging than graph classification. 
However, this paper lacks such in-depth discussions;\\n\\n2) Besides the graph classification task, it would be better to explore the performance of the simplified GNN on other graph learning tasks, such as node classification, and on various downstream tasks using graph neural networks. This can help demystify the question raised in the previous point; 3) The matrix \\tilde{A} in Equation 5 is not well explained (described as \\\"similar to that in Kipf and Welling (2016)\\\"). It would be clearer to directly point out that it is the adjacency matrix, as described later in the paper.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper tries to study the importance of different components of GNNs. This paper studies two components: 1) graph filtering, the aggregation of neighboring features, and 2) the aggregation function for the output.\\n\\nTo study this problem, this paper proposes two models, Graph Feature Network (GFN) and Graph Linear Network (GLN). GFN first uses the adjacency matrix to create several layers of features, then applies a multi-layer fully-connected neural network. GLN is a special case of GFN with the fully-connected neural network being linear.\\n\\nThis paper conducts experiments on the graph classification task and finds GFN gives a reasonable performance, whereas GLN's performance is weaker.\\n\\nComments:\\n\\nThis paper studies an important problem in GNN, and the proposed method is interesting. However, I cannot accept the paper in the current form for the following reasons.\\n\\n1. There is no theoretical analysis in the paper. For example, on some datasets, GFN, GLN, and GNN's performances are close, while on other datasets, there are gaps. The current paper does not provide insight.\\n\\n2. GNN also contains non-linearity in the middle layers. However, the methodology in this paper cannot account for the importance of non-linearity in the middle layers.\\n\\n3. The experiment section ignores some recent results on graph classification tasks. See: https://arxiv.org/abs/1905.13192\"}" ] }
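Since much of the thread in the record above turns on what GFN actually computes, here is a compact NumPy sketch of the two dissected parts: a fixed, linear filtering stage building graph-augmented features from powers of the normalized adjacency matrix (the Eq. 5 construction questioned in the Chebyshev comment), followed by a learned non-linear set function. This is an illustrative reading of the descriptions in the abstract and comments, not the authors' code; phi and rho stand for arbitrary row-wise and graph-level MLPs supplied by the caller.

import numpy as np

def normalized_adjacency(A, eps=1.0):
    # \tilde{A} = \tilde{D}^{-1/2} (A + eps * I) \tilde{D}^{-1/2}; with
    # eps = 1 this is the renormalized matrix of Kipf & Welling (2016).
    A_hat = A + eps * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_augmented_features(A, X, K=3):
    # Fixed, parameter-free filtering: concatenate node degrees with the
    # multi-hop propagated features [X, A~X, A~^2 X, ..., A~^K X].
    A_tilde = normalized_adjacency(A)
    feats = [A.sum(axis=1, keepdims=True), X]
    H = X
    for _ in range(K):
        H = A_tilde @ H
        feats.append(H)
    return np.concatenate(feats, axis=1)

def gfn_logits(A, X, phi, rho):
    # Non-linear set function in Deep Sets form: per-node network phi,
    # permutation-invariant sum pooling over nodes, then graph-level rho.
    Z = graph_augmented_features(A, X)
    return rho(phi(Z).sum(axis=0))

Making phi and rho linear maps turns this into the GLN variant whose large accuracy drop, reported in the abstract, motivates the conclusion that the non-linear set function matters while non-linear filtering may not.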
B1xGxgSYvH
Domain-Invariant Representations: A Look on Compression and Weights
[ "Victor Bouvier", "Céline Hudelot", "Clément Chastagnol", "Philippe Very", "Myriam Tami" ]
Learning Invariant Representations to adapt deep classifiers of a source domain to a new target domain has recently attracted much attention. In this paper, we show that the search for invariance favors the compression of representations. We point out that this may have a bad impact on the adaptability of representations, expressed as a minimal combined domain error. By considering the risk of compression, we show that weighting representations can align representation distributions without impacting their adaptability. This supports the claim that representation invariance is too strict a constraint. First, we introduce a new bound on the target risk that reveals a trade-off between compression and invariance of learned representations. More precisely, our results show that the adaptability of a representation can be better controlled when the compression risk is taken into account. In contrast, preserving adaptability may overestimate the risk of compression, which makes the bound impracticable. We support these statements with a theoretical analysis illustrated on a standard domain adaptation benchmark. Second, we show that learning weighted representations plays a key role in relaxing the constraint of invariance and thus preserving the risk of compression. Taking advantage of this trade-off may open up promising directions for the design of new adaptation methods.
[ "Domain Adaptation", "Invariant Representation", "Compression", "Machine Learning Theory" ]
Reject
https://openreview.net/pdf?id=B1xGxgSYvH
https://openreview.net/forum?id=B1xGxgSYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "phI1I6Oav", "HJx6Sqr2sr", "BklhSIZ5iS", "BJgmzUqtsS", "rJleDfqtjB", "HJe2_6KFiB", "HklaqN7yqH", "BJeSgyAaKB", "S1x9j79pKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740267, 1573833285172, 1573684803965, 1573656074991, 1573655127890, 1573653875564, 1571923093277, 1571835628800, 1571820450171 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2091/Authors" ], [ "ICLR.cc/2020/Conference/Paper2091/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2091/Authors" ], [ "ICLR.cc/2020/Conference/Paper2091/Authors" ], [ "ICLR.cc/2020/Conference/Paper2091/Authors" ], [ "ICLR.cc/2020/Conference/Paper2091/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2091/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2091/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper provides a new theoretical framework for domain adaptation by exploring the compression and adaptability.\\n\\nReviewers and AC generally agree that this paper discusses about an important problem and provides new insight, but it is not a thorough theoretical work. The reviewers identified several key limitations of the theory such as unrealistic condition and approximation. Some important points still require more work to make the framework practical for algorithm design and computation. The presentation could also be improved.\\n\\nHence I recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Updated version of the submission\", \"comment\": \"We would like to thank the reviewers for their valuable and insightful comments which have helped us to improve our submission and for their time for checking both the proofs and experiments.\", \"we_provide_an_updated_version__of_the_submission_which_addresses_some_concerns_of_the_reviewers\": [\"We clarify the experimental section bringing more details on the choice of the different losses (ask by reviewer1).\", \"We add a comment on the risk of increasing the variance of estimators when using weights in section 5 (a concern of reviewer1).\", \"We extend our discussion part by indicating precisely the scope of this work.\", \"We add a notation section in the appendix which includes the most important ratings and definitions (ask by reviewer1)\", \"We correct the bibliography by adding the right conferences/journals where the papers have been published in addition to the ArXiv reference (ask by reviewer3).\"]}", "{\"title\": \"Answer.\", \"comment\": \"Ok, thank your for your answers.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your comments and concerns.\\n\\nAbout the concerns on the experimental section and the choice of $\\\\mathcal L_0(\\\\varphi)$. This loss involves $\\\\hat \\\\varphi$, the representation for which we want to study both the risks of compression and adaptability. Then, we train $\\\\varphi \\\\in \\\\Phi$ with the loss $-\\\\Pi + \\\\lambda_0 \\\\cdot \\\\mathcal L_0$ which is a penalized version of the constrained optimization $\\\\min_{g\\\\circ \\\\varphi \\\\in \\\\mathcal H_0^\\\\eta} -\\\\Pi$. $\\\\mathcal L_0$ involves two terms (equation 7). 
The first one enforces the source equivalence; the second one (with a ReLU) enforces that the new representations do not deviate from the original one in the target domain by more than a rate of $\\eta$ (following the definition of $\\mathcal H_0^\\eta$).\\n\\nAbout the choice of the dataset and the scope of the experimental analysis: we followed an experimental setup comparable to that of papers [1,2] (both the datasets and the filtering strategies for studying the problem of target shift).\\n \\n[1] Zhao, Han, et al. \\\"On Learning Invariant Representations for Domain Adaptation.\\\" International Conference on Machine Learning. 2019.\\n[2] Johansson, Fredrik, David Sontag, and Rajesh Ranganath. \\\"Support and Invertibility in Domain-Invariant Representations.\\\" The 22nd International Conference on Artificial Intelligence and Statistics. 2019.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"We would like to thank you for your valuable comments.\\n\\nWe provide details about your concerns. We follow the 'itemized' presentation for answering each point.\\n\\n- (1st & 2nd items) We agree the term compression can be misleading, and \u2018hypothesis space reduction\u2019 would be clearer. Regarding the relation of compression (which is defined as $\\mathcal H(\\varphi_1) \\subset \\mathcal H(\\varphi_2)$, i.e. $\\forall g \\in \\mathcal G, \\exists g\u2019 \\in \\mathcal G$ such that $g\\circ\\varphi_1 = g\u2019 \\circ\\varphi_2$): we agree there is no reason to necessarily observe a relation of inclusion between two given representations. However, as mentioned by the reviewer, considering a set of possible transformations is relevant and deserves deeper investigation. For instance, one can expect to not reduce (so much) the hypothesis space if $\\varphi_1$ is a transformation of $\\varphi_2$ by a translation or a rotation. However, we believe this notion is in some way embedded in the definition itself: if $\\mathcal G$ has enough capacity to \u2018learn\u2019 those transformations, then the raised point is addressed. Interestingly, this intuition is connected to the capacity of the set of classifiers $\\mathcal G$.\\n\\n---------------------------------------------\\n\\n- (3rd & 4th items) We underline that the bound is generally less tight than the original version [1]. In the particular case of $\\mathcal H_0 = \\tilde{\\mathcal H}$, the bound is increased by $\\frac 1 2 d(\\tilde{\\mathcal H}\\Delta\\tilde{\\mathcal H}) + \\beta$ with respect to [1]. When the size of $\\mathcal H_0$ increases, our adaptability term decreases. But this is followed by an increase in the compression term (the trade-off introduced in the paper). To sum up, we propose to better control the adaptability term by relying on a pessimistic estimation of the risk of compression (due to the supremum on $\\mathcal H_0$). \\n\\nWe would like to clarify the positioning of our work. DA bounds necessarily involve an intractable term during training (called adaptability in the literature relying on [1]). Roughly speaking, the tighter the bound, the more important the contribution of intractable terms becomes at training time. For instance, we can consider two limit cases (those cases are not relevant but show the difficulty) when considering the accuracy of a classifier:\\n1. $\\varepsilon^t(h) \\leq \\varepsilon^t(h)$: this bound is the tightest but is totally intractable.\\n2. 
$\\\\varepsilon^t(h) \\\\leq 1$: tractable but not useful.\\nOur analysis shows that we can reduce the importance of intractable terms in the bound by considering the risk of compression (this is a strategy and others should be also explored). The bound is significantly looser but more tractable and therefore offers better guarantees. The looseness of the bound can be controlled by the choice of a relevant $\\\\mathcal H_0$ depending on the use case. In the submitted version, we provide the example $\\\\mathcal H_0 = \\\\mathcal H_0^\\\\eta$ for addressing the robustness to adversarial attacks in a domain adaptation context. \\n\\nThe main takeaways of our work (we are sorry for not having clearly stated it in the submitted version): \\n\\n\\u201cAchieving representation invariance may help to reduce the generalization gap between two domains. However, all invariances will not have the same guarantees for better generalization. We will look for representations which preserve the best the information in the original target features spaces. We leave both the design on new DA methods which incorporate such consideration and relevant design of $\\\\mathcal H_0$ as future works.\\u201c\\n\\n-------------------------------------\\n\\n- In the experimental section, the cross-entropy is a trainable proxy for optimizing the accuracy. All error terms are reported on a test set computing the accuracy, not the cross-entropy, then the experimental analysis is consistent with the theoretical analysis. The choice of the squared loss is imposed by the use of the conditional expectation. More precisely, some parts of the proof need to bound $\\\\varepsilon^s(h, \\\\tilde f^s\\\\circ\\\\varphi) \\\\leq \\\\varepsilon^s(h)$ which derives from $\\\\varepsilon^s(h) = \\\\sigma^2 + \\\\varepsilon^s(h, \\\\tilde f^s\\\\circ\\\\varphi)$ where $\\\\sigma^2$ is the noise in the data. For more general losses (which verify the triangular inequality), we have $\\\\varepsilon^s(h) \\\\leq \\\\sigma^2 + \\\\varepsilon^s(h, \\\\tilde f^s\\\\circ\\\\varphi)$ then no guarantee to have $\\\\varepsilon^s(h, \\\\tilde f^s\\\\circ\\\\varphi) \\\\leq \\\\varepsilon^s(h)$.\\n\\n----------------------------------\\n\\n- We are sorry for the lack of clarity. Since we are studying a risk of a classifier $\\\\hat Y = h(X)$ given by $\\\\varepsilon(h) = \\\\mathbb E [(Y - \\\\hat Y)^2]$, we refer to this risk as a $L^2$ norm between estimated labels $\\\\hat Y$ and true labels $Y$. Depending on the domain where such risk is computed, we refer to source or target $L^2$ norms.\\n\\n[1] Cortes, Corinna, Yishay Mansour, and Mehryar Mohri. \\\"Learning bounds for importance weighting.\\\" Advances in neural information processing systems. 2010.\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"We thank you for the fruitful comments and suggestions.\", \"our_responses_are_below\": \"1. The source equivalence is a strong constraint. Understanding how this constraint can be implemented in a finite sampling case will provide a broader range of application of our analysis. In the experimental section, we suggest to implement the source equivalence with the loss $\\\\mathbb E^s[||\\\\varphi'(X) - \\\\varphi(X)||^2]$.\\n\\n2. Thanks for pointing this issue when extending the analysis to finite sampling. We agree with the fact that adding weights will increase the variance (proportionally with $\\\\left (\\\\mathbb E^s [w^2] \\\\right)^{1/2}$) of the complexity term when bounding empirical errors. 
However, our weighting strategy is flexible enough for controlling the variance of $w$ (for instance by enforcing $w\\leq M$ for a given $M>0$ in the choice of $\\mathcal W$). In Section 5 of [1], they suggest relying on an alternative weight $u$ which achieves a trade-off between re-weighting ($u\\approx \\mathbb P^t(X)/\\mathbb P^s(X)$) losses and high variance (preventing too high a value of $\\left (\\mathbb E^s [u^2] \\right)^{1/2}$). In our work, the trade-off is between relaxing the constraint of invariance and limiting the weight variance. This trade-off is embedded in the choice of $\\mathcal W$. We will provide in the updated version a clear account of the interest of adding weights for invariance relaxation. \\n\\n3. Thanks for pointing out this interesting issue. We agree that $\\beta$ is a domain adaptation problem by itself, but in a covariate shift situation in the representation space (indeed, the label function is the same for both domains: it is $\\tilde f^s$). Therefore, one can expect this term to be small since the infimum on $\\tilde{\\mathcal H}$ is involved. Another way to deal with it is to input $\\hat f^s = \\arg \\min_{h \\in \\tilde{\\mathcal H}} \\varepsilon^s(h)$ in the inequality rather than $\\tilde f^s$ (this leads to some, but manageable, changes in the inequality). \\n\\nAbout the sequence of $(\\varphi_n)$ introduced for demonstrating that $\\beta$ is trainable from the data. First of all, if $\\varphi$ changes, all terms in the bound change, as mentioned by the reviewer. More importantly, the adaptability term $\\lambda(\\tilde{\\mathcal H}) = \\inf_{h \\in \\tilde{\\mathcal H}} \\varepsilon^s(h) + \\varepsilon^t(h)$ from [2] changes. This is the setting of methods based on Domain Adversarial Learning. We have shown that learning invariant representations controls $\\beta$. To do so, we have considered a sequence of representations which converges to 0 in the sense of Learning Objective 1 (the objective of Domain Adversarial Learning). If such a condition holds, then $\\beta$ tends to 0 also. In other words, we have shown that: $\\varepsilon^s(h) + d(\\tilde{\\mathcal H}\\Delta\\tilde{\\mathcal H}) + \\beta = \\varepsilon^s(h) + d(\\tilde{\\mathcal H}\\Delta\\tilde{\\mathcal H}) + o(\\varepsilon^s(h) + d(\\tilde{\\mathcal H}\\Delta\\tilde{\\mathcal H}))$. The distributions may mismatch, but $\\beta$ is negligible with respect to the term which quantifies distribution mismatching. This is the reason why we considered it as a minor issue during our analysis.\\n\\n4. If the source and the target supports coincide a lot (let us consider the limit case where they are equal), the source conservation enforces $\\tilde{\\mathcal H}^s = \\tilde{\\mathcal H}$ (which implies a compression term equal to 0) and the bound is increased by $\\frac 1 2 d(\\tilde{\\mathcal H}\\Delta\\tilde{\\mathcal H}) + \\beta$ with respect to the one introduced in [2].\\n\\n5.a In the notation, we emphasize the dependence on $\\varphi$ with $\\tilde{\\cdot}$. Thank you for raising the difficulty of reading the notations; we will think about another notation system.\\n\\n5.b We will provide an updated version of the manuscript with a clear statement of the losses. \\n\\n5.c We will provide a notation page in the appendix.\\n\\n[1] Cortes, Corinna, Yishay Mansour, and Mehryar Mohri. 
\\\"Learning bounds for importance weighting.\\\" Advances in neural information processing systems. 2010.\\n[2] Ben-David, Shai, et al. \\\"A theory of learning from different domains.\\\" Machine learning 79.1-2 (2010): 151-175.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This submission provides a new theoretical framework for domain adaptation. In order to tackle the adaptability term in the classical domain adaptation theory, this submission proposes a new upper bound that enlarge the hypothesis space in the adaptability term. A weighted version of this theory is also given. Authors further support their conclusion by empirical results.\", \"pros\": \"1. This submission studies an important problem in domain adaptation.\\n2. This submission proposes new theoretical insight about compression and adaptability.\\n3. The conclusions of this paper can be partially proved by the empirical results.\", \"cons\": \"1.\\tAs the author says in their future work, the source constraint is too strong that need to control the feature unchanged across all source domain. For this condition is not build on samples but on the support of source domain. It seems that authors use $L_0$ to constrain \\\\phi\\u2019 to have same value with \\\\phi on source dataset, which may be only a small part with zero measure of source support set. \\n2.\\tThere is no generalization error analysis for these upper bounds. This submission provides weighted version of the main theory in the section 5. It seems that weighted version of upper bound could be further minimized by find a good weight. But add weight will add variance in the complexity term [A].\\n3.\\tThis submission adds \\\\beta term to change the adaptability term of $\\\\tilde{\\\\mathcal{H}}$ into the adaptability term of $\\\\mathcal{H}_0$. The reason why \\\\beta can be estimated from finite sample is not clarified, which is the premise of being trainable and should be mainly discussed in this paper. We can see that to estimate \\\\beta is a domain adaptation problem under the fact that the labeled functions are same. \\\\beta is a term that can\\u2019t be computed from small finite samples if there is no more assumption: It is not easy to approximate $\\\\tilde{f}_s$ uniformly, otherwise the estimation of \\\\beta will suffer from distribution shift. This submission claimed that the term can be trainable by giving Proposition 2, a proof of the consistency. However, this proposition is built on the assumption that there exists a series of \\\\phi minimizing the distribution distance to zero. But this is impossible for finite sample estimation, when there will always be generalization error of estimating the distribution distance. Furthermore, it is usually impossible to make the two embedded distributions completely the same in empirical. In addition, if there is a series of \\\\phi, how to control other terms in the upper bound? Every \\\\phi will induce new $\\\\tilde{\\\\mathcal{H}}$ and $\\\\mathcal{H}_0$ which will change all other terms. In summary, the main theory in this submission changes the unknown adaptability to a new term that is very hard to estimate. And there is no sufficient empirical or theoretical evidence in this paper that could support the fact that \\\\beta is small. 
The contribution is therefore limited.\\n4.\\tThe theory also fails to give upper bounds on the compression and adaptability terms, or some explicit upper bounds for certain hypothesis spaces as examples. Readers cannot get a clear picture of how large these terms will be. Furthermore, if the support sets of the source and target domains coincide a lot, the adaptability of $\\mathcal{H}^s$ will not be much smaller than the previous one.\\n5.\\tThe organization of the submission makes it hard to read:\\na)\\tThe notation of this submission is chaotic. For example, $\\tilde{\\mathcal{H}}$, $\\tilde{\\mathcal{H}}^s$, $\\tilde{\\mathcal{H}}_h$ are defined based on $\\phi$, and $\\pi(h)$ is defined based on $\\tilde{\\mathcal{H}}$, but all these facts are not revealed in their symbols. \\nb)\\tFor clarity, all loss functions defined in Section 4 should be stated on independent lines. Section 5 should be moved before the experiment part.\\nc)\\tI recommend the authors restate all newly defined symbols as a list at the top of the appendix. It really troubled me while checking the proofs.\\n\\nI think this submission discusses an important problem and provides new insight, but it is not a thorough theoretical work for the above reasons. So, I vote for rejecting this paper.\\n\\n[A] Cortes, Corinna, Yishay Mansour, and Mehryar Mohri. \\\"Learning bounds for importance weighting.\\\" Advances in neural information processing systems. 2010.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary\\n-------\\nThis paper presents a revisit of existing theoretical frameworks in unsupervised domain adaptation in the context of learning invariant representations. They propose a novel bound that involves trainable terms taking into account some compression information and a novel interpretation of adaptability. The authors also mention a contribution showing that weighting representations can be a way to improve the analysis. \\n\\nEvaluation\\n-----\\nThe ideas are novel and the results shed novel and interesting light on the difficult problem of unsupervised domain adaptation. \\nHowever, the practical interest in terms of applicability of the proposed framework is not fully demonstrated, the properties of the proposed analysis have to be studied in more detail, and some parts better justified. The experimental evaluation brings some interesting behavior but is somewhat limited. The weighting aspect of the contribution is not supported by any experiment.\\n\\nOther comments\\n------------\\n\\n-I am a bit puzzled by the use of the term \\\"compression\\\". This is maybe subjective, but in the context of learning representations, I would have interpreted it as a way to sparsify the representation, and thus compression could then be measured with respect to a given norm (L2?) or another criterion (Kolmogorov, ...). 
\\n\\nIn the paper, the notion of compression is related to a reduction of the hypothesis space after application of a transformation \\\\phi, so I am wondering if using \\\"hypothesis space reduction\\\" would not be more appropriate.\\nIn this case, however, there are maybe links with structural risk minimization that could be investigated here.\", \"a_side_remark\": \"there is no particular restriction on the space of transformations, we wonder if it would be useful to indicate if all the possible transformations are included as subspaces of a given latent space. Since, to be very general, one can imagine the existence of an unbounded number of transformations that correspond to an increase of the input dimension. For transformations leading to different representations of different dimensions, the way the deduced hypothesis can be compared should also be indicated (for defining properly the inclusion H(\\\\phi_1)\\\\subset H(\\\\phi_2).\\n\\nOn the other hand, the authors seem to need the use of norms over transformations as illustrated in the definition of H_0^\\\\eta in the experimental section. So I suggest that the analysis could be revisited by directly incorporating (representation) norms in the theoretical framework and in particular for defining more properly H_0.\\n\\n-One weakness of the theoretical framework is for me the lack of definition of H_0 in Section 3. We just know that it is included between two classes of hypothesis of interest, but there is no clear characterisation of H_0 which makes the analysis fuzzy: we have a bound that involves an object without any clear definition and it is for me difficult to really interpret the bound. Trying to define H_0 with some restrictions related to the norm of the transformations, as evoked before, could be a way to address this point (and actually the way the experiments are done tend to confirm this point).\\n\\n-Another weak point is the lack of qualitative analyse of the bound in Inequality 3 (the same applies for Inequality 5). I would have appreciated if the authors could provide an analysis similar to the one of (Mansour et al., COLT 2009) - it is cited in the paper - when they compared their result to the one of (Ben-David et al., 2007). For example, what happens when source is equal to the target, when is the bound significantly loose, significantly tight, different from other existing results, ...\\n\\nIn particular, if we compare the bound with the one of Ben-David et al. (we can also consider the one of Mansour et al.), there is two additional term, one is weighted by a factor 2, another one involved a supremum and one can think that this bound is rather loose and does not provide any insightful information and said differently it could not give a strong framework for practical considerations.\\nI may understand that when the bound is tight we could deduce that the compression term is low, but finding cases leading to a tight interesting bound does not seem obvious.\\n\\n-The experimental evaluation presents some expected behavior in the context of the bound, but I miss a real study trying to make use of the proposed framework to do adaptation in practice with comparisons to other strategies.\\nAdditionally, having additional studies with other models and tasks will probably reinforce the analysis.\\n\\n-At the beginning of Section 3.2, the authors mention that they restrict their analysis to the square loss, however I think the analysis is true for larger class of losses with more general properties. 
In the experimental evaluation, the cross entropy is used, so I think that the experimental evaluation should also be consistent with the theoretical analysis by considering the square loss.\\n\\n\\n-Paragraph below Definition 5 is unclear: the notion of L2 norm has not been introduced in this context, so the message of the authors is a bit unclear.\\n\\n-I do not find the notation \\\\gamma(\\\\phi,H) appropriate, I woud rather suggest to use \\\\gamma(H\\\\cdot \\\\phi)\\n\\n-The biblioggrgaphy can be improved by adding the right conferences/journals where the papers have been published in addition to the ArXiv reference.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper introduces the compression risk in domain-invariant representations. Learning domain-invariant representations leads to larger compression risks and potentially worse adaptability. To this end, the authors presents gamma(H) to measure the compression risk. Learning weighted representations to control source error, domain discrepancy, and compression simultaneously leads to a better tradeoff between invariance and compression, which is verified by experimental results.\", \"The paper presents an in-depth analysis of compression and invariance, which provides some insight. However, I have several concerns:\", \"In Section 4, the authors propose a regularization to ensure h belongs to H_0. How is the regularization chosen? How does it perform on other datasets? Experimental results only on digit datasets are not convincing.\", \"In Section 5, the authors introduce weighted representations to alleviate the curse of invariance. However, they do not provide experiments to validate their improvement.\", \"The organization of this manuscript is poor and difficult to follow. Starting from Section 3, the authors use several definitions to introduce their main theorem. However, these definitions are somewhat misleading. I cannot get the point until the end of Section 3. Besides, the notations are confusing, so I have to go back to the previous sections in case of misunderstanding.\"]}" ] }
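For context on the exchange above: the classical domain adaptation bound of Ben-David et al. [2], which the reviewers and authors refer to throughout, bounds the target error of a hypothesis h by the source error, a hypothesis-class divergence between the two domains, and an "adaptability" term. This is standard material restated with generic notation (not the submission's):

$$ \epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda, \qquad \lambda = \min_{h' \in \mathcal{H}} \big[ \epsilon_S(h') + \epsilon_T(h') \big]. $$

The $\lambda$ term is the adaptability the reviews discuss: it cannot be estimated without target labels, which is why the submission's attempt to rewrite it over a restricted hypothesis space — and the estimability of the resulting $\beta$ term from finite samples — is the central point of contention above.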
HJlzxgBtwH
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
[ "Francesco Croce", "Matthias Hein" ]
The robustness of neural network-based classifiers against adversarial manipulations is mainly evaluated with empirical attacks, as the methods for exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack wrt the $l_p$-norms for $p \\in \\{1,2,\\infty\\}$ aiming at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, quickly yields high-quality results, and minimizes the size of the perturbation (so that it returns the robust accuracy at every threshold with a single run). It performs better than or similarly to state-of-the-art attacks which are partially specialized to one $l_p$-norm.
[ "adversarial attacks", "adversarial robustness" ]
Reject
https://openreview.net/pdf?id=HJlzxgBtwH
https://openreview.net/forum?id=HJlzxgBtwH
ICLR.cc/2020/Conference
2020
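For context before the reviews: the minimal-perturbation problem the abstract above describes can be written in the standard form (generic notation, not taken from the paper; $f$ denotes the classifier's logits and $c$ the label of $x$):

$$ \min_{\delta} \; \|\delta\|_p \quad \text{s.t.} \quad \arg\max_k f_k(x+\delta) \neq c, \quad x+\delta \in [0,1]^d, \qquad p \in \{1,2,\infty\}. $$

Solving this, even approximately, yields the robust accuracy at every threshold $\epsilon$ from a single run, since an input counts as robust at $\epsilon$ exactly when the smallest perturbation found exceeds $\epsilon$ in $l_p$-norm — the "single run" property claimed in the abstract and debated in the reviews below.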
{ "note_id": [ "NOHiNs2Mq", "rJe_TblqiS", "ryxt4-xcjr", "ByePLDycsH", "HJgiUscAFH", "Hkgp5zksFS", "BJeCBkS5Kr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740238, 1573679552153, 1573679408718, 1573676878871, 1571887955243, 1571644052746, 1571602246091 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2090/Authors" ], [ "ICLR.cc/2020/Conference/Paper2090/Authors" ], [ "ICLR.cc/2020/Conference/Paper2090/Authors" ], [ "ICLR.cc/2020/Conference/Paper2090/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2090/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2090/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This work presents a method for generating an (approximately) minimal adversarial perturbation for neural networks. During the discussion period, the AC raised additional concerns that were not originally addressed by the reviewers. The method is an iterative first order method for solving constrained optimization problems, however when considered as a new first order optimization method the contribution seems minimal. Most of the additions are rather straightforward---e.g. using a line search at each step to determine the optimal step size---and the reported gains over PGD are unconvincing. PGD can be considered as a \\\"universal\\\" first order optimizer [1], as such we should be careful that the reported gains are substantial and not just a question of tuning. Given that using a line search at each step increases the computational cost by a multiplicative factor, the comparison with PGD should take this into account.\\n\\nThe AC notes several plots in the Appendix show PGD having better performance (particularly on restricted Imagenet), and for others there remain questions on how PGD is tuned (for example the CIFAR-10 plots in Figure 5). One of two things explains the discrepancies in Figure 5: either PGD is finding a worse local optimum than FAB, or PGD has not converged to a local optimum. There needs to be provided experiments to rule out the second possibility, as this is evidence that PGD is not being tuned properly. Some standard things to check are the step size and number of steps. Additionally, enforcing a constant step size after projection is an easy way to improve the performance of PGD. For example, if the gradient of the loss is approximately equal to the normal vector of the constraint, then proj(x_i+ lambda * g) ~ x_i will result in an effective step size that is too low to make progress.\\n\\nFinally, it is unclear what practical use there is for a method that finds an approximately minimum norm perturbation. There are no provable guarantees so this cannot be used for certification. Additionally, in order to properly assess the security and reliability of ML systems, it is necessary to consider larger visual distortions, occlusions, and corruptions (such as the ones in [2]) as these will actually be encountered in practice. \\n\\n1. https://arxiv.org/pdf/1706.06083.pdf\\n2. https://arxiv.org/abs/1807.01697\", \"title\": \"Paper Decision\"}", "{\"title\": \"Answer to reviewer 1\", \"comment\": \"We thank the reviewer for the helpful comments.\\n\\n\\\"According to the results in fig.2 the backward steps has the highest impact in comparison to deepfool. But mixing with original projection always helps a little and random restarts help a little too. 
Without the backward steps there is almost no gain from mixing the projections.\\\"\\n\\nOur projection (alpha=0.1, no restarts) compared to a ``Deep-Fool with backward step'' (alpha=0, no restarts) is always better and improves robust accuracy by more than 10% in the area between eps=0.35 and eps=0.38. We think this is significant. The overall method with 100 restarts (alpha=0.1, 100 restarts) compared to \\n``Deep-Fool with backward step'' (alpha=0, 100 restarts) is again always better and improves robust accuracy by 5-10% on a wide range of epsilon values. This makes the difference between FAB-attack which produces state-of-the-art results or some attack which is ok.\\n\\n\\\"Considering the full results in the appendix, the results are mixed with no obvious advantage in comparison to PGD specially\\\"\\n\\nWe think there are quite some quantitative and qualitative advantages of FAB-attack over PGD which we highlight below. Nevertheless, it is clear that the computation of a minimal attack or robust accuracy is a non-convex optimization problem and thus it is unlikely that there will be ever something like the best attack algorithm (unless you solve the mixed-integer program directly). Our experiments are honest, extensive (many different models (normal, robust) and data-sets) and contain no cherry picking. \\n\\nHighlights of FAB-attack (compared to PGD and other methods)\\ni) FAB-attack achieves for all norms on average the best robust accuracy, is closest to the best on average and has no dramatic failure cases (maximum difference to the best). This is still the case when we take out the models where gradient obfuscation is a problem and PGD fails (please see answer to Reviewer 2) \\n\\nii) Compared to PGD, FAB-attack requires no step-size. For PGD at least for each norm, but potentially for each model, one has to tune the step-size parameter of PDG for optimal performance. In particular, in our experience for attacking new defense strategies, one has to carefully tune the stepsize parameter of PGD. Note that we have selected the optimal parameters for PGD via a grid-search for every norm separately by taking the one achieving best performance on average on MNIST and CIFAR-10. Please see attack details in A.2 and B.1 and the answer to reviewer 2. In contrast, for FAB-attack all parameters are constant across all threat models on MNIST and CIFAR-10, with little adjustment for Restricted ImageNet.\\n\\niii) as noticed by Reviewer 2, FAB does not suffer from gradient obfuscation, as can be seen for the results wrt l_2 and l_1 on the l_infty-adversarially trained model of (Madry et al, 2018) in Table 6.\\n\\niv) FAB aims at the minimal adversarial perturbation and thus provides with one run a complete robustness evaluation, which is different for PGD where one evaluates the robustness at a fixed threshold (even though we agree with reviewer 2 that in order to evaluate different thresholds for PGD it is not necessary to run the attack for all data points again).\\n\\nv) FAB has achieved for one very competitive public challenge the lowest reported robust test accuracy for a robust model on MNIST. It has also obtained on two other public challenges the lowest robust accuracy but has been outperformed since then by a new attack scheme. As these challenges are running some time already, all major attacks have been tried there. 
Unfortunately, we cannot be more concrete here without violating the anonymity policy.\"}", "{\"title\": \"Answer to reviewer 2\", \"comment\": \"We thank the reviewer for the detailed comments. We are sorry that our evaluation was perceived as misleading. Since we ran extensive experiments on 3 datasets, 3 models, 3 norms, with 5 thresholds and 8 competitors plus our attack, we needed a concise summary, hence the statistics of Tables 1/2. In fact we are not aware of any other attack paper with such an extensive evaluation and comparison. In particular we tried hard to run each attack with optimal parameters.\\n\\n\\\"The step size used for PGD is quite large---eps/4 for the L2 case---which is quite uncommon when using 150 iterations. Based on prior work and my own personal experience, a step size of 2 * eps / #steps (i.e., eps / 75) is suitable...\\\"\\n\\nWe added in Appendix B.1 the performance of PGD with different step sizes epsilon/t for t\\\\in\\\\{1, 2, 4, 10, 25, 75\\\\} for the l_2 attack on MNIST/ CIFAR-10. We report in Figure 6 for each step size the the robust accuracy over iterations (best run out of 10 for each step size) for different epsilon. Our chosen stepsize of \\\\epsilon/4 for PGD achieves best/close to best results in all cases and is best on average. We chose this stepsize by optimizing average PGD performance over all models for MNIST and CIFAR-10 trying out 8 different stepsizes. Thus we took care that we run PGD with the best possible parameters.\\n\\n\\\"While it is encouraging that FAB is robust to such gradient obfuscation, this is arguably not the ideal setting to compare gradient based methods (especially when averaging performance over models).\\\"\\n\\\" ...EAD performs similarly or better compared to FAB (again modulo the Linf-trained model).\\\"\\n\\nIt is a favorable property that FAB is robust to gradient obfuscation which is also a clear advantage over PGD and other attacks affected by gradient obfuscation. However, we see the point of the reviewer that this could affect the overall statistics in Table 1 for \\nl_2 and l_1. Thus we report additionally the statistics of Table 1 for l_2 and l_1 without considering the l_infty-adversarially trained model of (Madry et al, 2018). FAB still outperforms the competitors, with the only exception of \\\"# best\\\" for l_1 (13 of EAD vs 12 of FAB). \\n\\nThe difference between FAB and EAD is small when not considering Madry's model l_infty model but since FAB does not suffer from gradient obfuscation, while EAD does so, we think it is fair to say that FAB outperforms EAD. While FAB does not always outperform PGD for l_infty, FAB is the best attack in the summary statistics, in particular it is never far away from the best result. Please note that even FAB-10 outperforms PGD-100\\n\\n\\\"Based on these observations, I am not fully convinced that FAB outperforms PGD (for L2 and Linf) and EAD (for L1) by as much as Table 2 suggests.\\\"\\n\\nWe hope that the new evaluation and illustration of our step size choice for PGD convince the reviewer of the opposite.\\n\\n\\\"It is not clear how many restarts where included in the runtime of PGD. Its runtime should be in the same ballpark as FAB but the time reported is ~20x higher.\\\"\", \"in_the_corresponding_paragraph_we_write\": \"``\\\"if not specified otherwise, it includes all the restarts\\\". However, to improve clarity we added the number of restarts. 
PGD-100 (100 restarts, 150 iterations) and FAB-100 (100 restarts, 100 iterations) are comparable as they do both 300 forward/backward passed for one run. PGD-100 takes 3820s on MNIST for 5 thresholds (764s for one threshold), whereas FAB-100 takes 1613s. In theory PGD-100 should take the same amount of time as FAB-100 (300 forward/backward passes per restart). The difference is most likely a suboptimal implementation of the gradients of FAB and we are currently trying to fix this.\\n\\n\\\"PGD is known to produce quite accurate estimates when run with much fewer (say 15) steps. Thus in order to make a fair comparison ... the entire #steps vs robust accuracy curve ...\\\"\\n\\nWe agree but then this would yield 135 plots (5 thresholds, 3 datasets, 3 models, 3 norms). We have also the problem that the time per step is not the same for all the methods and typically varies with the hardware and implementation. We have added a comparison of FAB-1 and PGD-1 in Appendix B.2 wrt to the number of forward/backward passes (1 iter. PGD: 2 passes, 1 iter. FAB: 3 passes) . We cannot confirm that PGD yields always good results already within 15 steps (=30 passes). In the 27 reported cases (Figure 4,5,6 in the Appendix) FAB-1 is better in 18 out of 27 cases after 20 passes. However, both methods sometimes which require the full number of passes to get good results.\\n\\n\\\"It is not necessary to run PGD 5 times to evaluate the robust accuracy at 5 thresholds. One can perform binary search .... This will result in at most 3... evaluations per point ...\\\"\\n\\nThanks for pointing this out. We report runtime for PGD in this way in the final version. However, we still think that for a detailed robustness evaluation a method like FAB evaluating the robustness curve in one run is of advantage.\"}", "{\"title\": \"Answer to reviewer 3\", \"comment\": \"We thank the reviewer for the comments. We address below the questions.\\n\\n\\\"The main concern for me about this paper is the comparison to other methods such as PGD. As far as I know, these attackers DO NOT explicitly minimize the distortion, thus it is quite believable that these models do not identify the minimal distortion solution (rather it will more likely to find a solution that lies in the boundary since it would be the easiest way to attack). However, for the proposed algorithm in this paper, the algorithm is explicitly minimizing the distance to the given input (x_orig in their language).\\\"\\n\\nIt is right that PGD is not aiming at the minimal adversarial change. However, please note that we evaluate our models exactly in a way which is fair to PGD, that means robust test error for the threat model of an l_p-ball of fixed epsilon. This\\nmeans we are just evaluating if the classifier does change its decision inside this l_p-ball or not but we are not comparing the size of the distortions. In particular, we therefore run PDG for each choice of epsilon again and indeed the found adversarial examples are typically located at the boundary of the l_p-ball\\nbut since we just check if the decision changes this is counted in the same way even if the adversarial distortion found by FAB has smaller norm.\\nIn the appendix A.4 (Table 4) we additionally report the average minimal distortions found by other methods which also aim at minimal distortions e.g. EAD, CW, LRA and DF. 
In this case we don't report the results of PGD and DAA exactly because they don't aim at minimizing the norm and it would be an improper comparison.\\n\\nWe hope that the reviewer readjusts his/her score after this major misunderstanding has been clarified.\\n\\n\\\"I would like to see more implementation details of the other algorithms, for example, what is the performance if we add an additional regularizer as the distance of the current attacker to the given input to PGD. So far, the paper lacks solid proof of the usefulness of this particular algorithm. (In particular the justification for solving the local linear system instead of doing a gradient descent step).\\\"\\n\\nNote that for the other algorithms we mainly take them (see Section 3, paragraph Attacks) as implemented in cleverhans (Papernot et al. 2017) and foolbox (Rauber et al 2017) or directly the code provided by the authors (DAA, LRA). Only SparseFool and PGD were implemented by us. Note that it is not necessary to add a \\nregularization term to PGD as we are not aiming at minimal adversarial distortions but just to change the class and therefore maximizing the cross-entropy loss\\nof the correct class as done in PGD is perfectly aligned with this goal.\\n\\nRegarding our FAB attack, a clear advantage of projecting on the approximated decision hyperplane over doing a gradient step is that the projection does not need to fix a step size, but rather, in practice, adaptively chooses the optimal step size. While our method has also some hyperparameters they generalize across\\nmodels, datasets and threat models (l_\\\\infty, l_2, l_1).\\nSecond, as noticed by Reviewer 2, FAB does not suffer from gradient obfuscation, as can be seen for the results wrt l_2 and l_1 on the l_\\\\infty-adversarially trained model of (Madry et al, 2018) in Table 6. Third, it is fast and at the same time produces high quality adversarial examples (in Section 3 we show it is competitive or outperforms attacks specialized in just one of the three norms,\\nsee also the additional experiments in Appendix B.2). Since it minimizes the norm of the perturbations, it provides quickly a complete overview of the robustness of a classifier at every threshold.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors extend deepFool by adding extra steps and constraints to find closer points to the source image as the adversarial image. They both project onto the decision boundary. Deepfool does and adhoc clipping to keep the pixel values in (0,1) but the new proposed method respects the constraints during the steps. Also during the steps they combine projection of last step result and original image to keep it closer to the original image. Moreover, at the end of the optimization they perform extra search steps to get closer to the original image. Also they add random restarts. Rather than considering the original image, they randomly choose an image in the half ballpark of the total delta.\\n\\nAccording to the results in fig.2 the backward steps has the highest impact in comparison to deepfool. But mixing with original projection always helps a little and random restarts help a little too. 
Without the backward steps there is almost no gain from mixing the projections.\\n\\nConsidering the full results in the appendix, the results are mixed with no obvious advantage in comparison to PGD specially.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a new gradient-based method (FAB) for constructing adversarial perturbations for deep neural networks. At a high level, the method repeatedly estimates the decision boundary based on the linearization of the classifier at a given point and projects to the closest \\\"misclassified\\\" example based on that estimation (similar to DeepFool). The authors build on this idea, proposing several improvements and evaluate their attack empirically against a variety of models.\\n\\nI found the proposed method quite interesting and intuitive. All the improvements made to the core method are well-motivated and clearly explained, while the ablation experiments are relatively thorough.\\n\\nHowever, I did find the presentation of experimental evidence quite misleading. \\n\\nSpecifically, reporting mean accuracy over models, datasets, and epsilon constraints in Table 2 does not give the full picture. Going through the appendix tables, we can see the following:\\n-- The step size used for PGD is quite large---eps/4 for the L2 case---which is quite uncommon when using 150 iterations. Based on prior work and my own personal experience, a step size of 2 * eps / #steps (i.e., eps / 75) would seem more suitable. I wonder if this is the reason for PGD performing worse than FAB for large epsilon values on CIFAR10. The authors mention that they chose this parameter using grid search but do not provide concrete details.\\n-- The adversarially trained MNIST model of Madry et al. 2018 learns to use thresholding filters as the first layer (observed in the original paper). This causes issues for most gradient-based methods (e.g., PGD performs worse than the decision-based attack of Brendel et al. 2018, also observed in other prior work). While it is encouraging that FAB is robust to such gradient obfuscation, this is arguably not the ideal setting to compare gradient based methods (especially when averaging performance over models). \\n-- For MNIST and Restricted IN, PGD performs comparably or even better than FAB (modulo larger epsilon values for which the large step size used could be an issue for PGD and the Linf-trained model with the thresholding filters).\\n-- For the L1-norm setting, EAD performs similarly or better compared to FAB (again modulo the Linf-trained model).\\nBased on these observations, I am not fully convinced that FAB outperforms PGD (for L2 and Linf) and EAD (for L1) by as much as Table 2 suggests.\\n\\nMoreover, the runtime comparison performed in not exactly fair:\\n-- It is not clear how many restarts where included in the runtime of PGD. Its runtime should be in the same ballpark as FAB but the time reported is ~20x higher. \\n-- PGD is known to produce quite accurate estimates when run with much fewer (say 15) steps. Thus in order to make a fair comparison one would also need to look at the entire #steps vs robust accuracy curve to get a better picture of the efficiency of these two methods. 
Choosing an arbitrary number of steps for each method is not very enlightening.\\n-- It is not necessary to run PGD 5 times to evaluate the robust accuracy at 5 thresholds. One can perform binary search for each input in order to find the smallest epsilon for which a misclassification can be found. This will result in at most 3 (sometimes 2) evaluations per point (instead of 5).\\n\\nDespite these shortcomings of the experimental evaluation, I still believe that the paper has merit. After all, the method is clean and well-motivated, performs comparably to the best of PGD and EAD in a variety of settings, and is robust to a certain degree of gradient masking. In that sense, it could potentially be a valuable contribution and could be of interest to a subset of the adversarial ML community.\\n\\nIn the sense, while my initial stance is to recommend (weak) rejection, I would be open to increasing my score and recommending (weak) acceptance should my concerns be addressed.\", \"update\": \"I appreciate the response and the additional experiments performed by the authors. The authors have addressed my concerns in their response. I am increasing my score to a weak accept.\\n\\nOne thing that would be nice to add in the next version of the manuscript is a note inviting the reader to consider the appendix tables since average robust accuracy can be inconclusive.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies the problem of the white-box attack of neural network-based classifiers, with an emphasis on the \\\"minimal distortion solution\\\": The new input that changes the labeling output of the network with the minimal distance (l1, l2, l_inf) with respect to a given input.\\n\\nThe main intuition of the algorithm is to do a local linear approximation of the network at the current point (which is the Taylor expansion up to the gradient term). After that, the algorithm identifies a class (output coordinate) with the minimal \\\"margin to gradient norm ratio\\\", i.e. the total movement in gradient direction to change the labeling function in that coordinate, within this linear approximation. The algorithm solves the subproblem of minimizing a linear function inside lp ball as the critical routine.\\n\\nOverall, the notion of finding the minimal distortion attacker as opposed to finding the best attacker inside a fixed distortion ball is quite interesting to me. The main concern for me about this paper is the comparison to other methods such as PGD. As far as I know, these attackers DO NOT explicitly minimize the distortion, thus it is quite believable that these models do not identify the minimal distortion solution (rather it will more likely to find a solution that lies in the boundary since it would be the easiest way to attack). However, for the proposed algorithm in this paper, the algorithm is explicitly minimizing the distance to the given input (x_orig in their language). \\n\\n\\nI would like to see more implementation details of the other algorithms, for example, what is the performance if we add an additional regularizer as the distance of the current attacker to the given input to PGD. So far, the paper lacks solid proof of the usefulness of this particular algorithm. 
(In particular the justification for solving the local linear system instead of doing a gradient descent step).\", \"after_rebuttal\": \"I have read the authors' responses and acknowledge the sensibility of the statement. I apologize for the earlier misunderstanding and higher the score accordingly.\"}" ] }
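Two technical points recur in the exchange above: (i) PGD's dependence on a step-size hyperparameter (the authors tune eps/4 for L2; reviewer 2 suggests 2*eps/#steps), and (ii) reviewer 2's observation that robust accuracy at several thresholds needs only a per-point binary search, not one full attack run per threshold. The sketch below illustrates both in generic PyTorch; it is not the authors' code, and the function names, loss choice, and grid-bisection details are illustrative assumptions.

import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps, n_steps=150, step_frac=4.0):
    # L2 PGD with step size eps/step_frac -- the hyperparameter debated above
    # (the authors use eps/4; the reviewer suggests 2*eps/n_steps instead).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        g = grad.flatten(1)
        g = g / g.norm(dim=1, keepdim=True).clamp_min(1e-12)
        delta = delta + (eps / step_frac) * g.view_as(x)      # ascent step
        d = delta.flatten(1)
        factor = (eps / d.norm(dim=1, keepdim=True).clamp_min(1e-12)).clamp(max=1.0)
        delta = (d * factor).view_as(x)                       # project onto the L2 ball
        delta = ((x + delta).clamp(0.0, 1.0) - x).detach().requires_grad_(True)
    with torch.no_grad():
        return x + delta

def smallest_successful_eps(model, x, y, eps_grid):
    # Reviewer 2's point, intended per example (batch of 1): bisect over the
    # sorted grid of thresholds, so 5 thresholds need at most 3 attack runs
    # instead of 5. Assumes, as the suggestion implicitly does, that attack
    # success is monotone in eps.
    lo, hi, best = 0, len(eps_grid) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        x_adv = pgd_l2(model, x, y, eps_grid[mid])
        if (model(x_adv).argmax(dim=1) != y).all():           # attack succeeded
            best, hi = mid, mid - 1                           # try a smaller eps
        else:
            lo = mid + 1                                      # need a larger eps
    return None if best is None else eps_grid[best]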
Hkg-xgrYvH
Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
[ "Shell Xu Hu", "Pablo Garcia Moreno", "Yang Xiao", "Xi Shen", "Guillaume Obozinski", "Neil Lawrence", "Andreas Damianou" ]
We propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging the unlabeled query set in addition to the support set to generate a more powerful model for each task. To develop our framework, we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task. We derive a novel amortized variational inference that couples all the variational posteriors via a meta-model, which consists of a synthetic gradient network and an initialization network. Each variational posterior is derived from synthetic gradient descent to approximate the true posterior on the query set, even though we do not have access to the true gradient. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification outperform previous state-of-the-art methods. In addition, we conduct two zero-shot learning experiments to further explore the potential of the synthetic gradient.
[ "Meta-learning", "Empirical Bayes", "Synthetic Gradient", "Information Bottleneck" ]
Accept (Poster)
https://openreview.net/pdf?id=Hkg-xgrYvH
https://openreview.net/forum?id=Hkg-xgrYvH
ICLR.cc/2020/Conference
2020
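Before the reviews, the decomposition named in the abstract above can be sketched in standard amortized variational inference notation (a generic form, not copied from the paper: $d_t$ is task $t$'s data, $w_t$ its task-specific weights, $\psi$ the empirical-Bayes hyper-parameters, and $q_t$ a variational posterior):

$$ \log p_\psi(d_1,\dots,d_T) \;\ge\; \sum_{t=1}^{T} \Big( \mathbb{E}_{q_t(w_t)}\big[\log p(d_t \mid w_t)\big] \;-\; \mathrm{KL}\big(q_t(w_t)\,\|\,p_\psi(w_t)\big) \Big). $$

The paper's twist is to amortize every $q_t$ through one shared meta-model — a synthetic-gradient network plus an initialization network — evaluated on each task's unlabeled query set, which is what makes the inference transductive.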
{ "note_id": [ "N5xGt4-Gn", "M8Eje-Bmkv", "Hkev_eo2oB", "BJgV-MI3sB", "Bklq9Z83or", "Hkgz8nS3oH", "Skx5esH3iB", "SJedK9owqS", "HkgSFawkqr", "S1gFwdapYr" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1581781385493, 1576798740203, 1573855343081, 1573835260153, 1573835153898, 1573833802494, 1573833457912, 1572481664125, 1571941757350, 1571833953374 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper2089/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2089/Authors" ], [ "ICLR.cc/2020/Conference/Paper2089/Authors" ], [ "ICLR.cc/2020/Conference/Paper2089/Authors" ], [ "ICLR.cc/2020/Conference/Paper2089/Authors" ], [ "ICLR.cc/2020/Conference/Paper2089/Authors" ], [ "ICLR.cc/2020/Conference/Paper2089/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2089/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2089/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Paper revised\", \"comment\": \"Dear program chairs,\\n\\nWe have revised our paper according to reviewer's comments and have made our code public for reproducing our few-shot learning experiments.\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for your comments\", \"comment\": \"Dear reviewers,\\n\\nThank you for your comments.\\n\\nWe reply to your comments with individual replies to your reviews, please see corresponding replies below. We believe we have addressed all of your concerns.\\n\\nWith this top-level comment we'd like to also highlight that:\\n- we have added code for our paper\\n- we have uploaded a new pdf which incorporates our edits according to your comments.\\n\\nMany thanks,\\nAuthors\"}", "{\"title\": \"Thank you for a thorough review! (II/II)\", \"comment\": \"Q6. \\u201cConsidering the authors argue specifically for the importance of transduction in the zero-shot learning regime, I think it would be reasonable to expect experiments substantiating this, and the strength of their method in this regard, on non-synthetic datasets.\\u201d\", \"answer\": \"The purpose of Section 5 (now Section 5.2) was to validate whether SIB can be applied to zero-shot learning, that is, without resorting to the support set.\\n\\nWe agree that comparing to a BNN baseline is unfair in this case, and thus remove the comparison completely. We have also rewritten Section 5.2 to clarify all the details.\\n\\nWe would like to argue that Figure 3 offers an initial sense for the performance of SIB on zero-shot learning. Inspired by the success in toy data, we have conducted a new experiment of zero-shot classification on real data, which can be found in Section 5.3. Besides, we have also reorganized the experiments to make the presentation more fluent.\"}", "{\"title\": \"Thank you for a thorough review! (I/II)\", \"comment\": \"Dear R2,\\n\\nThank you for your detailed comments. We understand that the main issues that lead your vote to \\u201cweak reject\\u201d are: \\n(a) Coherence in the presentation, especially when it comes to explaining motivation for transduction in an empirical Bayes setting.\\n(b) Novelty. 
\\n(c) You find the experiments \\u201cimpressive\\u201d but would like to see more insights out of them.\\n\\nRegarding (a), we have addressed all of your concerns and argued about the importance of theoretically justifying transduction within Empirical Bayes; we explained how this is placed into context with regards to other frameworks that could support transduction. Importantly, improved generalization is evident from our motivation. \\n\\nRegarding (b), we argue that novelty comes from considering transduction in meta-learning, which improves generalization and, hence, performance. \\n\\nRegarding (c) we have now added the suggested experiments.\\n\\nWe hope these changes and answers can make you consider raising your score.\\n\\nDetailed answers follow below.\\n\\nQ1. \\u201cThe paper does, however, at times feel to be disjointed and, to an extent, lacking in focus.\\u201d\", \"answer\": \"To justify this intuition we compared to a variant which incorporates the gradient difference as an additional loss, on MiniImageNet with 1-shot, K=3, WRN-28-10. The test accuracy was 62.935%, which is about 6% lower than the accuracy (69.6%) reported in our Table 1. Therefore, we did not include this variant as a baseline.\", \"we_wish_to_clarify_the_importance_of_theorem_1\": \"it justifies theoretically the empirical Bayes formulation for meta-learning, which is a key element in our approach. As far as we know, this is the first such justification; indeed, previous theoretical analyses (e.g. Amir & Meir 2018) are not specialized to empirical Bayes.\\n\\n\\nQ4. \\u201cmore experiments, per Appendix C, highlighting the importance of transduction and therein the synthetic gradients and its formulation would be welcome.\\u201d\"}", "{\"title\": \"Thank you for a thorough review!\", \"comment\": \"Dear R1,\\n\\nThank you for your comments. You cite presentation and experiments as the main reasons for \\u201cweak reject\\u201d. As you can see in the details below, we have now addressed all of your comments (and those of other reviewers) regarding presentation and have clarified the contribution of the experiments (including presenting new experiments). We hope this will make you consider raising your score.\\n\\nDetailed answers follow.\\n\\n\\u201c0. the empirical evaluation, the results on standard benchmarks (miniImageNet and CIFAR-FS) seem reasonably strong; however, I would not call it \\\"significantly outperform previous state-of-the-art\\\" (as the authors claim in the abstract), since really all the top methods are in the same ballpark (the provided 95% CI overlap).\\u201d\", \"answer\": \"We have significantly reworked on section 4 and thus Theorem 1 to elaborate\\na) why transduction helps achieving good generalization \\nb) the connection between empirical Bayes (EB) and information bottleneck leading to a generalization bound for EB models.\\n\\nPlease read the paragraphs \\\"Implications of (14)\\\" and \\\"Implications of (15)\\\" for detailed discussions of Theorem 1.\", \"we_wish_to_clarify_the_importance_of_theorem_1\": \"it justifies theoretically the empirical Bayes formulation for meta-learning, which is a key element in our approach. As far as we know, this is the first such justification; indeed, previous theoretical analyses (e.g. Amir & Meir 2018) are not specialized to empirical Bayes.\"}", "{\"title\": \"Thank you for a thorough review!\", \"comment\": \"Dear R4,\\n\\nThank you for your insightful comments. 
Below we address one by one the issues that you mentioned.\\n\\n\\u201c1. The first paragraph on page 5, which describes the key step of syntheising gradient, can be made clearer\\u201d\", \"answer\": \"In theory, SIB has O(n) time complexity, where n is the number of examples of a task. In practice, for training SIB with WRN-28-10 backbone on a GTX Titan X GPU, it takes about 7 hours.\", \"reply\": \"With cosine-similarity based classifier, regardless of 1-shot or 5 (or more)-shot, seeing additional points in the feature space should help us to sketch the distribution of features, and thus help the fast adaptation of the weights of the classifier. CTM (Li et al. 2019) was also motivated by this intuition. \\nHowever, it doesn't mean the more unlabeled data the better. As argued by Theorem 1 (see the paragraph \\\"Implications of (14)\\\"), the meta-model (gradient network $\\\\xi$ in our case) may not be able to absorb the amount of information efficiently resulting an over-regularization: note that there is a trade-off between the generalization error and the training error; when considering too many unlabeled data, we put a large weight on the generalization error.\\nWe have empirically confirm this on Mini-ImageNet. See Table 4 in the updated paper. \\n\\n\\n\\u201c4. Since the proposed method works in a tranductive manner, it is presumed that the whole model needs to be retrained/updated once a new set of query data (e.g., for the same task or another new task) is given? In other words, how does the trained model generalise to unseen unlabeled test data? Please provide some discussion on this issue.\\u201d\", \"an_interesting_discussion_on_this_topic_can_be_found_here\": \"http://olivier.chapelle.cc/ssl-book/discussion.pdf\\n\\n\\n\\u201c5. Finally, how is the computational complexity of training the proposed EB model?\\u201d\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper addresses the issue of meta-learning in a transductive learning setting. That is, it aims to learn a model from multiple tasks and make it generalise to a new task in order to solve it efficiently. In the transductive setting, the query set (i.e., containing the unlabeled test data) of the new task is taken into account when learning the model.\\n\\nThis paper takes the empirical Bayes approach to meta-learning. In order to utilise the test data that do not access to groundtruth labels, it proposes to use synthetic gradient to implement the tranductive learning. A multi-layer perceptron network is used to systhesize the gradient. Theoretical analysis is conducted to demonstrate the generalization capability of the proposed model and reveal its connection to the information bottleneck principle in the literature of neural networks. \\n\\nOverall, this is a well organised and nicely presented work. The idea on how to utilise the unlabeled test data to realise tranductive learning is novel; the analysis is thorough; and experimental study is provided to show the effectiveness of the proposed method. Meanwhile, this work can address the following issues:\\n\\n1. The first paragraph on page 5, which describes the key step of syntheising gradient, can be made clearer; \\n2. 
In the experimental study, Table 1 compares various methods with the proposed one. It will be helpful to clearly indicate for each method in comparison whether/how it also utilises the query set. This will give more context in interpreting the comparison results;\\n3. The advantage of the proposed method seems to diminish quickly from 1-shot to 5-shot settings. Does this mean in the case of 5 (or more)-shot setting, considering unlabeled test data with the proposed method could even adversely affect the meta-learning performance? Please comment. \\n4. Since the proposed method works in a tranductive manner, it is presumed that the whole model needs to be retrained/updated once a new set of query data (e.g., for the same task or another new task) is given? In other words, how does the trained model generalise to unseen unlabeled test data? Please provide some discussion on this issue. \\n5. Finally, how is the computational complexity of training the proposed EB model?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a method for transductive few-shot learning. The method is derived by taking a Bayesian perspective and recasting meta-learning as amortized variational inference, showing that results in a transductive scheme, and then using maml-style approximation of the inference (i.e., based on truncated stochastic gradient). While the idea of the paper seems intuitive, I find the writing quite confusing throughout (see my comments below) and I believe it must be improved before publishing the paper. Regarding the empirical evaluation, the results on standard benchmarks (miniImageNet and CIFAR-FS) seem reasonably strong; however, I would not call it \\\"significantly outperform previous state-of-the-art\\\" (as the authors claim in the abstract), since really all the top methods are in the same ballpark (the provided 95% CI overlap).\", \"comments\": \"1. In Eq. 2, if the task-specific losses are arbitrary, the whole construction is no longer a log-likelihood but rather just a loss. The authors also denote the distribution over the meta-training datasets as p_\\\\psi(d_t), where d_t includes both inputs and targets. However, the concrete instantiations of the framework use discriminative models. Adjusting and clarifying the notation would improve the paper.\\n\\n2. The way KL divergence is used in Eq. 5 is misleading since the arguments are two distributions over different sets of random variables. I would recommend keeping expected log conditional probability as a separate term (which is common in the literature).\\n\\n3. Relatedly, going from ELBO to amortized VI (Eqs. 4-6) is a standard widely used VAE trick, so the derivation itself is not that informative. On the other, it would be great to include the inductive inference scheme mentioned right before Eq. 7 and compare it side-by-side with the standard amortized VI (Eq. 6). The way that part is presented now leaves the reader to derive all the details on their own.\\n\\n4. Sec. 3, paragraph 1: While the original neural processes tend to underfit the data as pointed by the authors, more recent versions of the model such as attentive neural processes might work well, and perhaps worth mentioning.\\n\\n5. Difference between Eq. 
7 and 8 -- I believe I am misunderstanding this, but the updates look identical to me up to KL between q_\\\\theta and a prior p_\\\\psi. How exactly does \\\\phi(x_t) parametrize the optimization process? I don't see how it enters into the equations. Generally, I feel deriving the method through a Bayesian perspective is quite confusing (as it is presented now) and way less clear than what is illustrated in Figure 1c.\\n\\n6. Re: theoretical analysis -- it seems like the more than half a page spent on defining what generalization error is in the given setup (where all the definitions are quite standard), but then the discussion of the result, discussion of specific cases, connection to the information bottleneck bounds are all compressed down to in 1-2 sentences. This makes the \\\"analysis\\\" section really useless. Exemplifying the result of Thm. 1 and significantly elaborating the discussion would improve the paper.\", \"minor\": \"- The paragraph before Theorem 1: \\\"Proposition\\\" -> \\\"Theorem\\\"\\n- [UPD] Figure 3: titles, labels, ticks are all too small to be readable.\\n\\n---------\\n\\nThanks to the authors for a detailed response. Most of my points have been addressed satisfactorily. I've updated my review accordingly.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors argue for the importance of transduction in few-shot learning approaches, and augment the empirical Bayes objective established in previous work (estimating the hyperpior $\\\\psi$ in a frequentist fashion) so as to take advantage of the unlabelled test data.\\nSince the label is, by definition, unknown, a synthetic gradient is instead learned to parameterize the gradient of the variational posterior and tailor it for the transductive setting. The authors provide an analysis of the generalization ability of EB and term their method _synthetic information bottleneck_ (SIB) owing to parallels between its objective of that of the information bottleneck. SIB is tested on two standard few-shot image benchmarks in CIFAR-FS and MiniImageNet, exhibiting impressive performance and outperforming, in some cases by some margin, the baseline methods, in the 1- and 5-shot settings alike, in addition to a synthetic dataset.\\n\\nThe paper is technically sound and, for the most part, well-written, with the authors' motivations and explanation of the method conceived quite straightforwardly. The basic premise of using an estimated gradient to fashion an inductive few-shot learning algorithm into a transductive one is a natural and evidently potent one. The paper does, however, at times feel to be disjointed and, to an extent, lacking in focus. The respective advantages of EB and the repurposing of synthetic gradients to enable the transductive approach are clear to me, yet while they might indeed be obviously complementary, what is not obvious is the necessity of the pairing: it seems there is nothing prohibiting the substitution of the gradient for a learned surrogated just as well under a deterministic meta-initialization framework. As such, despite sporting impressive results on the image datasets, I am not convinced about how truly novel the method is when viewed as a whole. 
\\n\\nOn a similar note, while the theoretical analysis provided in section 4 was not unappreciated, and indeed it was interesting to see such a connection between EB with information theory rigorously made, it does feel a little out of place within the main text, especially since it is not specific to the transductive setting considered, nor even to the meta-learning setting more broadly. Rather, more experiments, per Appendix C, highlighting the importance of transduction and therein the synthetic gradients and its formulation would be welcome. Indeed, it is stated that an additional loss for training the synthetic gradient network to mimic the true gradient is unnecessary; while I agree with this conclusion, I likewise do not think it would hurt to explore use of the more explicit formulation.\\n\\nConsidering the authors argue specifically for the importance of transduction in the zero-shot learning regime, I think it would be reasonable to expect experiments substantiating this, and the strength of their method in this regard, on non-synthetic datasets. As far as the toy problem is concerned, I am slightly confused as to the choice of baseline, both in the regard to its training procedure and as to why this was deemed more suitable than one purposed for few-shot learning, so that we might go beyond simple verification to getting some initial sense for the performance of SIB. Moreover, it is not clear from the description as to how $\\\\lambda$ is implemented here. As it stands, Section 5, for me, offers little in the way of valuable insights. The experiments section on the whole, results aside, feels somewhat rushed; the synthetic gradients being a potential limiting factor for instance feels \\\"tacked on\\\" and seems to warrant more than just a passing comment.\\n\\nMinor errors\\n- Page 7: the \\\"to\\\" in \\\"let $p_\\\\psi(w)$ to be a Gaussian\\\" is extraneous\\n- Page 8: \\\"split\\\" not \\\"splitted\\\".\\n- Further down on the same page, \\\"scale\\\" in \\\"scale each feature dimension\\\" should be singular and Drop\\\" is misspelled as \\\"dropp\\\".\\n- Page 9: \\\"We report results **with using** learning rate...\\\"\\n- _Equation 17_ includes the indicator function $k \\\\neq i$ but $i$ is not defined within the context.\", \"edit\": \"changed score\"}" ] }
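A minimal sketch of the transductive inner loop discussed above (e.g. in the response to R1's question about how $\phi(x_t)$ parametrizes the optimization): a learned synthetic-gradient network stands in for the true gradient, which is unavailable on the unlabeled query set. This is a schematic reconstruction under assumptions — the module names, exactly what the gradient network conditions on, and the number of steps K are illustrative, not the authors' implementation.

import torch

def transductive_inner_loop(init_net, grad_net, phi, x_query, K=3, lr=1e-3):
    # Synthetic gradient descent on the task-specific parameters theta.
    #   init_net : initialization network producing theta^(0)   (meta-learned)
    #   grad_net : synthetic gradient network xi                (meta-learned)
    #   phi      : shared feature extractor
    #   x_query  : *unlabeled* query inputs -- using them is the transduction
    h = phi(x_query)                 # features of the unlabeled query set
    theta = init_net()               # theta^(0), the learned initialization
    for _ in range(K):
        # no labels on the query set, so the true gradient is unavailable;
        # a learned network predicts a gradient from theta and the features
        g = grad_net(theta, h)
        theta = theta - lr * g
    return theta                     # parameters of the task's variational posterior

In the outer loop, the initialization and gradient networks are meta-trained so that the resulting theta performs well on the labeled query loss; the authors' response to R4 puts the per-task cost at O(n) in the number of examples.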
rJxWxxSYvB
Spike-based causal inference for weight alignment
[ "Jordan Guerguiev", "Konrad Kording", "Blake Richards" ]
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
[ "causal", "inference", "weight", "transport", "rdd", "regression", "discontinuity", "design", "cifar10", "biologically", "plausible" ]
Accept (Poster)
https://openreview.net/pdf?id=rJxWxxSYvB
https://openreview.net/forum?id=rJxWxxSYvB
ICLR.cc/2020/Conference
2020
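Before the reviews, a note making the abstract's "weight transport problem" concrete (standard backprop material, generic notation): exact gradient backpropagation sends error signals through the transposed forward weights, while feedback alignment replaces them with a fixed random matrix $B$:

$$ \delta^{(l)} = \big( W^{(l+1)\top}\, \delta^{(l+1)} \big) \odot \phi'\big(z^{(l)}\big) \;\;\text{(backprop)}, \qquad \delta^{(l)} = \big( B^{(l+1)}\, \delta^{(l+1)} \big) \odot \phi'\big(z^{(l)}\big) \;\;\text{(feedback alignment)}. $$

The paper's contribution, discussed in the reviews that follow, is a local spike-based rule that learns $B$ so that it comes to approximate $W^\top$, recovering backprop-like credit assignment without transporting weights.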
{ "note_id": [ "P7wcJAFekV", "H1xV5ffosr", "HyxVOMzior", "Hyxe0-GoiS", "BJx5tbGjjB", "H1lQdqkRKB", "H1lpyw_atB", "S1eQziLmFB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740171, 1573753483602, 1573753451623, 1573753287915, 1573753217989, 1571842667317, 1571813093192, 1571150602918 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2088/Authors" ], [ "ICLR.cc/2020/Conference/Paper2088/Authors" ], [ "ICLR.cc/2020/Conference/Paper2088/Authors" ], [ "ICLR.cc/2020/Conference/Paper2088/Authors" ], [ "ICLR.cc/2020/Conference/Paper2088/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2088/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2088/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"All authors agree the paper is well written, and there is a good consensus on acceptance. The last reviewer was concerned about a lack of diversity in datasets, but this was addressed in the rebuttal.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewers\", \"comment\": \"We would like to thank all of the reviewers for their encouraging comments and helpful critiques. We have updated the manuscript and believe that we have addressed the concerns that were raised. We provide responses to specific points below.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"\\\"From the reported results, it is not possible to decide whether RDD really outperforms Feedback Alignment (FA). The comparison is performed on only two data sets and each algorithm is better on one.\\\"\\n\\nWe are afraid that we must have been unclear in the original submission, so thank you for raising this. To clarify, RDD performs better than FA on all of the datasets we have investigated to date. \\n\\n\\\"Could the authors report results on at least two more data sets (however small or simple) during the rebuttal?\\\"\\n\\nThis is a very valid request, and we are happy to oblige. We have no also tested on the SVHN and VOC datasets. RDD outperforms FA on both datasets (see updated Figure 5).\\n\\n\\\"Fig and Table 1 report the same outcome. One of the two need to be removed.\\\"\\n\\nFair point, we have removed Table 1, and provide both training and testing results in Figure 5 now.\\n\\n\\\"The Conv Net illustrated in Fig 2 panel A shares its weights with the biologically plausible net on panel B. Further, these two nets communicate for pre-training. How does the paper then isolate the contribution of the biologically plausible net to the prediction accuracy from the vanilla ConvNet? What would happen if we trained only the LIF net without a contact with the conv net?\\\"\\n\\nWe now see that we were insufficiently clear in the original submission, so again, thank you for raising this. The interaction between the ConvNet and the LIF net is as follows: the two networks share weights, but the ConvNet is used for training the feedforward weights and measuring accuracy, while the LIF net is only for training the feedback weights. More specifically, on each epoch, we train the feedforward weights with the ConvNet, using the current setting of the feedback weights. This means that the transpose of the feedforward weights in the usual gradient update term is replaced with the current feedback weights. 
Then, we transfer the new feedforward weights from the ConvNet to the LIF net, and we train only the feedback weights. This continues: the feedback weights of the ConvNet are set to the new values from the LIF net, and so on. Thus, the LIF net is not learning to categorize the images, it is only learning the feedback weights, which get used by the ConvNet for the feedforward training. We have clarified this in the text and Figure 2A. We do this because our goal in this paper is simply to test the RDD algorithm's ability to learn good feedback weights, not to test the ability of an LIF net to perform categorization.\n\n\"Eq. 1 proposes induction of symmetry to solve the weight transport. At the extreme, this regularizer would make W and Y identical, boiling down to a vanilla artificial neural net, which the ML community already knows well and performs with excellence. Would not having the biologically implausible artificial neural model as the extreme solution contradict the goal of biologically plausible learning? This would in the end make one conclude that the biological brain only performs a broken gradient descent.\"\n\nThe reviewer is correct that the symmetric alignment cost function would only be zero when perfect symmetry in weights is achieved. The reviewer is also correct that this would indicate that biological networks were approximating gradient descent. However, that is part of the point of this exercise. To date, no one has demonstrated how one can achieve efficient credit assignment in large networks without at least a good correlation with the true gradient. To be clear, we hypothesize that the brain may in fact have a means of estimating gradients, and that this would be achieved, in part, by ensuring symmetry between feedforward and feedback pathways. That may not be a \"broken\" gradient descent, in so far as there can be regularizing advantages to not always perfectly following the gradient. If the reviewer is interested in this perspective, they can read more in our recent review on the topic: Richards, et al. Nature Neuroscience 22, no. 11 (2019): 1761-1770.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"\\\"I had one minor comment on the arrangement of the writing of the paper. Section 4 starts off with \\\"Results\\\" but the earlier sub-sections are not really about the results. I would split section 4 as methodology/algorithm and include everything until section 4.4. From subsection 4.5 onwards are the actual results.\\\"\\n\\nYes, we see your point. We have split the materials into methods/results as requested.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your comments!\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Strong paper in the direction of a more biologically plausible solution for the weight transport problem, where the forward and the backward weights need to be aligned. Earlier work for feedback alignment has included methods such as hard-coding sign symmetry. In this method, the authors show that a piece-wise linear model of the feedback as a function of the input given to a neuron can estimate the causal effect of a spike on downstream neurons. 
The authors propose a learning rule based on regression discontinuity design (RDD) and show that this leads to stronger alignment of weights (especially in earlier layers) compared to previous methods. The causal effect is measured directly from the discontinuity introduced while spiking - the difference between the outputs of the estimated piece-wise linear model at the point of discontinuity is used as the feedback.\n\nCompared to feedback alignment, RDD-based pre-training demonstrates stronger alignment between forward and backward weights and better performance on CIFAR-10 and Fashion-MNIST datasets. Overall, the paper is very well written and addresses an important problem. The theoretical foundation, to my knowledge, is well studied.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"summary\\n\\nThis paper considers the \\\"weight transport problem\\\", which is the problem of ensuring that the feedforward weights $W_{ij}$ are the same as the feedback weights $W_{ji}$ in the spiking NN model of computation. This paper proposes a novel learning method for the feedback weights which depends on accurately estimating the causal effect of any spiking neuron on the other neurons deeper in the network. Additionally, they show that this method also minimizes a natural cost function. They run many experiments on FashionMNIST and CIFAR-10 to validate this and also show that for deeper networks this approaches the accuracy levels of GD-based algorithms. \\n\\n\\n\\ncomments\\n\\nOverall I find this paper to be well-written and _accessible_ to someone who is not familiar with biologically plausible learning algorithms. To overcome the massive computational burden, they employ a novel experimental setup. In particular, they use a separate non-spiking neural network to train the feedforward weights and use the spiking neurons only for alignment of weights. They have experimental evidence to show that this method is a legitimate workaround. I find their experimental setup and the results convincing to the best of my knowledge. The experimental results indeed show the claim that the proposed algorithm has the properties stated earlier (i.e., it learns the feedback weights correctly, and using this to train deep neural nets provides better performance than the weight alignment procedure). I must warn that I am not an expert in this area and thus might miss some subtleties. Given this, it is also unclear to me why this problem is important, and thus I would leave the judgement of this to other reviewers. Here I will score only based on the technical merit of the method used to solve the problem.\\n\\nI had one minor comment on the arrangement of the writing of the paper. Section 4 starts off with \\\"Results\\\" but the earlier sub-sections are not really about the results. I would split section 4 as methodology/algorithm and include everything until section 4.4. From subsection 4.5 onwards are the actual results.\\n\\n\\noverall decision\\n\\nWithout commenting on the importance of this problem, I think this paper merits an acceptance based on the technical content. 
The paper provides convincing experiments to test the properties the authors claim the new algorithm has.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces a training mechanism for spiking neural nets that employs a causal inference technique, called RDD, for adjustment of backward spiking weights. This technique induces the backward influence strengths to be reciprocal to the forward ones, bringing desirable symmetry properties.\", \"pros\": [\"The relationship between causal inference and biologically plausible learning is very interesting. This relationship is also important and impactful for the machine learning community, as we are on the quest of new deep learning technologies.\", \"Application of the RDD method to spiking neural net training is novel. The reciprocal relationship of the causal effect to the synaptic strength is a very intuitive and elegant solution to the weight transport problem.\"], \"cons\": [\"From the reported results, it is not possible to decide whether RDD really outperforms Feedback Alignment (FA). The comparison is performed on only two data sets and each algorithm is better on one. Could the authors report results on at least two more data sets (however small or simple) during the rebuttal?\", \"Fig and Table 1 report the same outcome. One of the two needs to be removed.\"], \"further_questions\": \"* The Conv Net illustrated in Fig 2 panel A shares its weights with the biologically plausible net on panel B. Further, these two nets communicate for pre-training. How does the paper then isolate the contribution of the biologically plausible net to the prediction accuracy from the vanilla ConvNet? What would happen if we trained only the LIF net without contact with the conv net?\\n\\n * Eq. 1 proposes induction of symmetry to solve the weight transport. At the extreme, this regularizer would make W and Y identical, boiling down to a vanilla artificial neural net, which the ML community already knows well and performs with excellence. Would not having the biologically implausible artificial neural model as the extreme solution contradict the goal of biologically plausible learning? This would in the end make one conclude that the biological brain only performs a broken gradient descent.\\n\\nOverall, this is a decent piece of work with some potential. My initial vote is a weak reject, as I am at present missing sufficient evidence that the improved symmetry properties introduced by the causal inference scheme also bring an accuracy improvement over the vanilla feedback alignment method. I am open to improve to an accept if this evidence is provided and my aforementioned concerns, primarily on the role of the ConvNet, are properly addressed during rebuttal.\\n\\n\\n--\", \"post_rebuttal\": \"My only major concern was the lack of sufficient empirical evidence to support the idea. The updated version of the manuscript has properly addressed this issue by reporting results on additional data sets. The authors have also given enlightening clarifications to some of the open points I have raised earlier. Hence, I'm happy to increase my score.\"}" ] }
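An illustrative aside on the training scheme described in the authors' responses above: in feedback-alignment-style learning, the error signal is propagated through a separately maintained feedback matrix rather than the transpose of the forward weights. The sketch below is a minimal numpy rendering of that idea under assumed shapes and a plain SGD update; the array Y stands in for the feedback weights that the paper's RDD procedure would learn from spiking data, and all names here are illustrative, not the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 10, 32, 2
    W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # forward weights, layer 1
    W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # forward weights, layer 2
    Y = rng.normal(0.0, 0.1, (n_hid, n_out))   # feedback weights (stand-in for W2.T)

    def train_step(x, target, lr=0.01):
        h = np.maximum(W1 @ x, 0.0)            # ReLU hidden layer
        out = W2 @ h
        delta_out = out - target               # output error
        # Credit assignment: Y replaces W2.T in the usual backprop update.
        delta_h = (Y @ delta_out) * (h > 0)
        W2[...] -= lr * np.outer(delta_out, h)
        W1[...] -= lr * np.outer(delta_h, x)

When Y happens to equal W2.T this reduces to ordinary backpropagation, which is why the degree of alignment between forward and feedback weights is the quantity the reviews focus on.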
BylWglrYPH
Symmetry and Systematicity
[ "Jeff Mitchell", "Jeff Bowers" ]
We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes. We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning.
[ "symmetry", "systematicity", "convolution", "symbols", "generalisation" ]
Reject
https://openreview.net/pdf?id=BylWglrYPH
https://openreview.net/forum?id=BylWglrYPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "ObEr38Q6Qd", "B1x8rXrnsB", "SyeDvfMhoB", "S1xGY5VijB", "B1xUuaAtir", "SJgWsDRKor", "HJl2YAI7sS", "SJeX-qL7jH", "BklVZr8XiS", "Syx8eeAbir", "B1ePQyfk9S", "HJe7pRBAFH", "ryx5OlGTYr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740142, 1573831485692, 1573818975482, 1573763705519, 1573674349913, 1573672856710, 1573248643784, 1573247482615, 1573246204448, 1573146606466, 1571917598738, 1571868346930, 1571786866052 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2087/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2087/Authors" ], [ "ICLR.cc/2020/Conference/Paper2087/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2087/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2087/Authors" ], [ "ICLR.cc/2020/Conference/Paper2087/Authors" ], [ "ICLR.cc/2020/Conference/Paper2087/Authors" ], [ "ICLR.cc/2020/Conference/Paper2087/Authors" ], [ "ICLR.cc/2020/Conference/Paper2087/Authors" ], [ "ICLR.cc/2020/Conference/Paper2087/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2087/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2087/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Thanks for clarifying several issues raised by the reviewers, which helped us understand the paper.\\n\\nAfter all, we decided not to accept this paper due to the weakness of its contribution. I hope the updated comments by the reviewers help you strengthen your paper for potential future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"model of a set\", \"comment\": \"Thanks for the clarification. You are right, the first model is not simply a bag of words, rather a bag of word+locations.\\n\\n(The multi-hot coding used for the representation of the input is encoding the presence/absence of the word and its location in the sentence.)\"}", "{\"title\": \"Relation between bag-of-words, Deep Sets and our approach.\", \"comment\": \"Thanks for your response and suggestions.\\n\\n> the input is treated as a bag of words\\n\\nThis is a misunderstanding. A bag-of-words approach would not be able to learn to distinguish ABB from ABA on the training set, never mind generalise to the test set. Under a bag-of-words approach, wo fe fe looks the same as fe wo fe, so ABB and ABA are indistinguishable.\\n\\nThe permutation we consider in our paper is not permutation within the input, but permutation of the symbols themselves, i.e. wo is replaced with la. As I explained above, we do not share weights across time steps (as in a bag-of-words approach) but instead between symbols. \\n\\nThus we are talking about a different sort of permutation, having considerably different applications, from bag or set representations. Indeed, that form of permutation is not really relevant to the issue of systematicity. Thus, these works are only superficially related to this paper.\"}", "{\"title\": \"Thanks for the response but my main concerns still linger\", \"comment\": \"Thanks to the authors for their response and further explanation of results observed in tasks 1 and 2. Regarding cfgs, I would like to see a more formal mathematical connection behind the motivation to use cnns in the final draft.\\nMy main concern is the lack of experiments on a large scale task. 
Right now, it looks like the paper focuses on just 3 systematic rules that are desirable to encode in the learning network and proposes different instantiations of convolutional neural networks to encode the three specifically identified phenomena. Is there a more general purpose network that would account for all three phenomena and many of the other much more complex systematic rules? Also, does the proposed technique that learns to encode a specific form of systematicity finally help in improving performance on a real world task? My suggestion is not to reject the controlled experiments but to add to the current set of experiments. If the proposed architectures are indeed efficient at encoding such useful systematic rules then we should expect to see improvements w.r.t. performance or sample complexity on a real world task. Even results on the full SCAN dataset would shed some light on the effectiveness of reasoning about systematic rules.\\nAlso, for the 3 different tasks, although the architectures are based on CNNs, they are very different from each other and need to be manually designed. How feasible is it to use CNNs for capturing more complex systematic rules in real world datasets?\\nI'm not inclined to change my score based upon the current draft and the author response.\"}", "{\"title\": \"clarification regarding related works and contribution\", \"comment\": \"Thanks for your response and explanations.\\n\\nPlease let me clarify my objection to the dismissal of related work: The first experiment as pointed out is using convolution with a filter of width one, which is the permutation invariant model that is studied in the related works I have cited (i.e., the input is treated as a bag of words.) Changing the width of this convolution to multiple words will make the model invariant to the translation of symbols. Both of these types of parameter-sharing models in language are studied in previous works. The paper ignores these works. Applying invariant architectures to toy problems that are by design permutation invariant is not by itself a significant contribution (the final experiment is an exception to this, where unfortunately the claims are imprecise.)\\n\\nI disagree with the statement that these recent advances in building invariant networks are \\\"distantly or superficially related\\\". This work is about the application of such invariant networks to language.\\n\\nThanks for clarifications on the final experiment. I suggest making the statement about the role of symmetry in memory more precise using math. As it is, I find the reasoning hard to follow.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your comments.\\n\\n> While this shows efficacy of modeling symmetry, I'd be curious about performance graphs as the training data increases in size.\\n\\nThe key point about Marcus's task is that the syllables encountered at test time are not present in the training data. So, given a one-hot representation, the units representing those unseen syllables are never active during training, and in a standard MLP or RNN that means the weights connected to those units are never updated. No additional quantities of training data change this, and their performance remains at random. \\n\\nConvolution solves this problem by sharing weights between syllables, allowing an abstract pattern to be learned across all syllables. 
Test time performance on the unseen syllables is identical to the other syllables, because the symmetry ensures all the syllables are equivalent.\\n\\nThere are other ways to solve this problem, for example by changing the representation. But this avoids the central problem of learning rules that can be applied to novel inputs. Our solution imposes a symmetry of symbolic systems onto the net and as a consequence learns abstract rules of the kind that Marcus was interested in.\\n\\n\\n> Curiously, the recurrent baseline seems to perform better than 0% accuracy (if still poorly) on the original SCAN task which is much harder than the proposed task in this paper.\\n\\nActually, many of the instances in the original SCAN test sets are easier than ours. \\n\\nFor example, 'jump left and look left twice' occurs in one of the SCAN test sets, while 'jump' on its own and 'walk left and look left twice' occur in the corresponding training set. Generalisation in this case is easier, because 'jump' and the context it occurs in can be translated independently.\\n\\n> I still cannot intuitively understand why convolutions in the forget architecture would learn about symmetry related to structured repetition produced by a CFG.\\n\\nIn a width 3 convolution, the filter 100 corresponds to shifting values to the right and 001 corresponds to shifting to the left. In the stack of memory cells, these shifting operations can be used to move the contents up and down the stack. Along with reading and writing only from the bottom of the stack, this mimics a Push Down Automaton, and the correspondence between PDAs and CFGs is well known.\\n\\nMore concretely, to predict palindromes like aadcbobcdaa the net learns to push the first half of the string onto the stack until it reaches the central o and then switches direction to pop items off the stack until the end.\\n\\n> my major concern is that the tasks considered are too simple and at least one complicated large-scale task would have strengthened the paper.\\n\\nWould a complicated large-scale task have made things clearer?\\n\\nUltimately, we intend to apply the ideas described here to a more naturalistic task, e.g. language modelling. However, real data brings a host of complications that can obscure the underlying hypotheses. Here we have introduced the idea that symmetry can be a means to gain systematicity, and demonstrated this on three toy problems, which allowed us to investigate specific questions in a controlled manner.\\n\\nWe found that symmetry allows us to generalise to unseen symbols, to separate content from structure and to generalise to structures that are more complex than those seen at training. If we had rejected the use of controlled experiments then it is unlikely that we could have gained comparable clarity on these questions.\"}", "{\"title\": \"Discussion of Methodology\", \"comment\": \"Thanks for raising these concerns.\\n\\n> The model in section 2 is hand-coded. It is not shown that it can actually learn this solution from data.\\n\\nThe architecture is designed for this particular task, but the weights are learned from the data. This is not an unusual setup. 
Nor are convolution and pooling anomalous architectural choices.\\n\\nBut the point of the paper is not to sell a particular architecture, it is to investigate how symmetries relevant to symbolic processes can be introduced into neural architectures, and whether that leads to more systematic generalisation.\\n\\n> It is to come up with an architecture that would learn to generalize systematically in a much broader set of problems.\\n\\nI agree that this should be a core objective of Machine Learning. And I am happy to concede that I have not presented in this paper an architecture that faithfully reproduces the full robustness of human generalisation capabilities.\\n\\nToy problems and simple architectures help us to progress towards this goal, because they allow us to investigate specific questions under controlled conditions with comprehensible models.\\n\\nAll of the models described in this paper are shallow and built out of standard components, and while the reviews suggested this simplicity or lack of novelty was a problem, they also managed to be confused by what exactly these apparently trivial models were doing. I am unconvinced a deeper and more sophisticated architecture applied to more complex and heterogeneous tasks would have been easier for readers to understand and so shed more light on the questions posed.\\n\\n> there\\u2019s no evidence that the models proposed there can learn to generalize in any task other than the very specific tasks they were designed for\\n\\nIt is fairly common for ML papers to focus on a single dataset, i.e. to provide no evidence that the particular innovation they introduce is relevant to any other task.\\n\\nIn contrast, we showed that symmetry was relevant to obtaining systematic generalisation in three different tasks. We also discussed how the symmetries imposed on the neural architectures related to the properties of symbolic systems. In other words, we provided a range of both theoretical and experimental evidence that imposing the right kind of symmetries can support systematic generalisation.\"}", "{\"title\": \"Discussion of Models and Experiments\", \"comment\": \"Thanks for raising these questions.\\n\\n> Also being able to generalize to 3 held-out combinations out of 100 is not very impressive. On the contrary, it is almost trivial.\\n\\nTraining on 97% of the data excludes 'not having seen enough of the data' as an explanation for a failure to generalise. And this is precisely why we chose this regime, as this allows us to focus on the impact of symmetry.\\n\\nThe results suggest that the problem is trivial for the convolutional architecture, but almost impossible for the MLP and RNN.\\n\\nIt is trivial for the convolutional architecture, because the symmetry allows it to represent structure (e.g. repeat the same thing twice) independently of the content (e.g. JUMP). So when it encounters 'jump two' at test time it has no problem generalising. The MLP and RNN in contrast learn hidden representations that are typically conjunctive, i.e. do not represent structure and content separately. So it is very unlikely for them to generalise robustly.\\n\\n> Previous works reported perfect or near perfect accuracy with similar baselines in similar tasks (see e.g., Lake & Baroni, 2018).\\n\\nAlthough our task is a simplification of SCAN, this does not mean our task is easier. 
SCAN is complex because it contains a diverse range of structures, some of which are easier than others.\\n\\nFor example, 'jump left and look left twice' occurs in one of the SCAN test sets, while 'jump' on its own and 'walk left and look left twice' occur in the corresponding training set. Generalisation in this case is easier, because 'jump' and the context it occurs in can be translated independently.\\n\\n\\n> What is the semantics of x and y in section 3?\\n\\nAs suggested in paragraph 5 of Section 3, our intention is to represent an action (e.g. JUMP) in terms of a position and a structure (do it twice) in terms of the channels of a convolutional network. So, x represents the action to be performed and y is the structure.\\n\\n> I have no idea how the proposed model is actually supposed to work. The motivation for the model and its description are not clear at all.\\n\\nThe basic idea is that CFGs allow nested structures, e.g. a noun phrase contained within another noun phrase. If the same rules apply to all noun phrases, however deeply embedded, then this is a kind of symmetry. In particular, for an LSTM to handle these structures we really need a symmetry over the memory cells, so that it can hold multiple constituents of the same type, as it moves through a nested structure.\\n\\nA convolution over the memory cells not only supplies the symmetry that makes the idea of the same symbol stored in multiple places meaningful, it also introduces the possibility of shifting symbols across the stack of memory cells. This is important because a context-free language can also be defined in terms of a PDA. The stack of the PDA has push and pop operations that correspond to shifting everything one place further into the stack and writing a new symbol at the top, or reading a symbol from the stack and shifting everything one place back. For convolution, these shift operations can be defined in terms of width 3 filters.\"}", "{\"title\": \"Typos and Clarifications from Review #3\", \"comment\": \"Thank you for identifying these issues.\\n\\n> How can you sample 1000 pairs out of a possible 100 combinations?\\n\\nWe sample with replacement.\\n\\n> y is an M+1-dimensional vector ... Please clarify this.\\n\\nThis is a typo. It should be an (M+1)xM dimensional vector. \\n\\n> \\u201cstrings of length 15, 17, 19, 21, 23 and 24,...\\u201d (p. 5). ... Is this a typo?\\n\\nYes, it should be 25.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your comments.\\n\\n> For example, the first task is simply using a single convolution layer followed by pooling for sequence prediction. However, using 1D convolution layers is somewhat widespread in NLP.\\n\\nThis is a misunderstanding. Our application of convolution is significantly different to the standard application to sequences in NLP. \\n\\nIn the standard use, weight sharing happens across time steps, and the function learned is equivariant to time-translations. In our case, weight sharing is between symbols, and the function is equivariant to permutation of symbols.\\n\\nSo, in Figure 1, the standard way of applying convolution to sequences would take the horizontal syllable dimension as the channels and convolve in the vertical dimension of time steps. 
In our network, this is reversed, and we take time steps as the channels and convolve across symbols, as explained in paragraph 6 of section 2.\\n\\nThe standard convolutional approach to sequences (weight sharing across time) will not solve Marcus's challenge, but our approach does.\\n\\nHowever, the point of the experiment is not to show off a new architectural innovation. It is to demonstrate that imposing a permutation invariance on symbols (as discussed by Tarski) allows us to model the rule learning behaviour studied by Marcus.\\n\\n> The paper is oblivious to a large body of related work\\n\\nThanks for the references, some of which may usefully expand our related work section. However, many of these papers are only distantly or superficially related to the problems and architectures we discuss in the paper.\\n\\n> it is not clear why translation invariance in memory models the structure in a context-free grammar\\n\\nThe relation between CFGs and PDAs is well known, and imposing translation invariance on the memory cells turns an LSTM into a PDA. In particular, the push and pop operations of a PDA can be thought of as translation within the stack of memory cells. If we apply the width 3 filter 001 to a vector of values, this will shift all the values one place to the left, while 100 corresponds to a right shift. Along with only reading and writing to one end of the stack, this reproduces the behaviour of a PDA.\\n\\nMore abstractly, a symmetry across memory locations allows us to treat all instances of a symbol equivariantly. This allows the architecture to exploit memory slots at test time that were not used in training. As a consequence, the network more readily extends the learned grammar to more complex examples (i.e. those requiring more memory), essentially because the symmetry gives meaning to the idea of applying the same rule to all memory slots.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper focuses on modelling invariances or symmetry between various components for solving tasks via convolutions and weight sharing.\\nThe proposed tasks are toyish in nature, although they do give insights into the importance of modeling symmetry for better generalization. The first task is a symbol substitution which considers a permutation in source symbols and maps them to either \\\"ABA\\\" or \\\"ABB\\\" categories, i.e. binary classification. While this task does require generalizability, it is surprising that the MLP and recurrent net baselines are so much inferior (basically random) to the convolution baseline. While this shows efficacy of modeling symmetry, I'd be curious about performance graphs as the training data increases in size.\\nThe second task is an artificially created task inspired by the SCAN dataset. The task is to translate a verb-number pair into number repetitions of the verb. The encoder-decoder network uses convolution in the recurrences to capture the notion of generalizability. The input and output space is very small (10 verbs and 10 numbers) but shows superiority of convolution and weight sharing over other baselines. Curiously, the recurrent baseline seems to perform better than 0% accuracy (if still poorly) on the original SCAN task which is much harder than the proposed task in this paper. 
Maybe the number of examples (1000) is too small for recurrent networks, but this makes me a little surprised. More details about the architecture and training procedure for baselines would be helpful to ensure that the comparison is fair across baselines.\\nThe final task is CFG modeling, where convolutions are used to model the forget gate of an LSTM, which seems to endow the network with PDA-like properties, and the convolutions are more effective than baselines at modeling this.\\n\\nApart from the concerns related to the results mentioned above, my major concern is that the tasks considered are too simple and at least one complicated large-scale task would have strengthened the paper.\\nAlso, for tasks 2 and 3, the motivation behind using convolutions is not as clean as in task 1. So more analysis and insights into model performance, the weights learned, ablation studies etc. would have helped in understanding how the convolutions are modeling the symmetry. This should be informative and tractable because of the simplicity of the tasks involved.\\n\\nFinally, as mentioned above, I still cannot intuitively understand why convolutions in the forget architecture would learn about symmetry related to structured repetition produced by a CFG. Hence, more analysis or a better motivation would have helped.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper investigates the idea of using symmetry and invariance in symbolic reasoning. In particular, it considers models where modeling symbolic symmetries through parameter-sharing helps with generalization. The three tasks considered are: 1) rule learning: performs sequence-classification. 2) composition: performs sequence-to-sequence with structured input using encoder-decoder architecture, and 3) context-free language learning using memory. In the first two cases, the convolution (of single width) is used to benefit from the symmetry prior, and in the third task, convolution is applied to stack memory structure. In all cases, the proposed architectures were shown to outperform MLP and RNN.\\n\\nThe paper addresses an important area in deep learning, and the paper is accessible and easy to read. However, there are major issues:\\n\\n-- I found it challenging to identify a novel contribution. For example, the first task is simply using a single convolution layer followed by pooling for sequence prediction. However, using 1D convolution layers is somewhat widespread in NLP. \\n\\n-- The paper is oblivious to a large body of related work in the area of relational learning and invariant/equivariant deep learning. Here are some examples: permutation invariant models for sets [1,2], and the link between parameter-sharing and invariance, which is theoretically studied in several works [3,4]. Note that convolution with a filter of width one followed by pooling is exactly invariant to the symmetric group. There are related works that extend these ideas to graphs [5,6], and relational learning [7]. Invariance has also been explored as it relates to memory [8]. Another relevant direction to discussions of the paper is the idea of attention in various architectures, such as transformers.\\n\\n-- There are vague or misleading claims. In particular, for some tasks, it is not clear why the proposed architecture addresses the targeted symmetry. 
For example, it is not clear why translation invariance in memory models the structure in a context-free grammar. \\n\\n\\n[1] Zaheer, Manzil, et al. \\\"Deep sets.\\\" Advances in Neural Information Processing Systems. 2017. \\n[2] Qi, Charles R., et al. \\\"Pointnet: Deep learning on point sets for 3d classification and segmentation.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. \\n[3] Shawe-Taylor, John. \\\"Building symmetries into feedforward networks.\\\" 1989 First IEE International Conference on Artificial Neural Networks (Conf. Publ. No. 313). IET, 1989.\\n[4] Ravanbakhsh, Siamak, Jeff Schneider, and Barnabas Poczos. \\\"Equivariance through parameter-sharing.\\\" Proceedings of the 34th International Conference on Machine Learning - Volume 70. JMLR.org, 2017. \\n[5] Kondor, Risi, et al. \\\"Covariant compositional networks for learning graphs.\\\" arXiv preprint arXiv:1801.02144 (2018). \\n[6] Maron, Haggai, et al. \\\"Invariant and equivariant graph networks.\\\" arXiv preprint arXiv:1812.09902 (2018). \\n[7] Kazemi, Seyed Mehran, and David Poole. \\\"RelNN: A deep neural model for relational learning.\\\" Thirty-Second AAAI Conference on Artificial Intelligence. 2018. \\n[8] Vinyals, Oriol, Samy Bengio, and Manjunath Kudlur. \\\"Order matters: Sequence to sequence for sets.\\\" arXiv preprint arXiv:1511.06391 (2015).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"======================================== Update after rebuttal =============================================\\n\\nI have now read the author rebuttal, but my concerns about the paper remain. The training details are not described in anywhere near sufficient detail (optimizer? batch size? learning rate? initialization? etc.). The baseline architectures \\u201crecurrent net\\u201d or the \\u201cmulti-layer perceptron\\u201d are not described at all, despite my explicit request to that effect. I had also requested to see the source code for the experiments, as this would perhaps have illuminated a lot of the details left out in the paper, but the authors have not provided it. I understand that the authors are not required to provide their code, but this should have been a relatively straightforward request in this case given the simplicity of the experiments, and, as I mentioned in my initial review, it would have been very useful in evaluating the paper. \\n\\nIn their rebuttal, the authors also claimed that the results in Fig. 1 and Table 1 are training results (that even though the architecture is \\\"innate\\\", the weights are learned), but I'm concerned about this claim. I happen to be doing some experiments along these lines at the moment, and it is not trivial at all to get such crisp results as those shown in Fig. 1 & Table 1 in these kinds of experiments (even when the architecture is correctly specified). Again, it would have been very helpful if the authors had either provided their source code or had described their experimental setup in sufficient detail to allow the reproduction of these results. \\n\\nGiven these concerns, I have decided to keep my score as it is.\\n\\n========================================================================================================\\n\\nThis paper addresses an important problem: systematic generalization in neural networks. 
However, the paper is very confusing and I have some serious concerns about the models and the results presented in sections 3 and 4. Here are the main issues:\\n\\n1) In section 3, there are only 10x10=100 possible combinations in this composition task. Yet, the paper says \\u201cwe randomly sample 1000 such translation pairs, choose three combinations and remove all instances of them from the training data and then exclusively test on unseen pairings of command and modifier.\\u201d How can you sample 1000 pairs out of a possible 100 combinations? Also being able to generalize to 3 held-out combinations out of 100 is not very impressive. On the contrary, it is almost trivial.\\n\\n2) No details are given about the \\u201crecurrent net\\u201d or the \\u201cmulti-layer perceptron\\u201d baselines in section 3. What are these models? The fact that they have exactly zero accuracy is a bit suspicious, especially given the almost trivial nature of the task in section 3. Previous works reported perfect or near perfect accuracy with similar baselines in similar tasks (see e.g., Lake & Baroni, 2018).\\n\\n3) I'm afraid the proposed model in section 3 also doesn\\u2019t make sense to me. It is explicitly acknowledged (Appendix B) that y is an M+1-dimensional vector, g is an Nx(M+1) matrix. Then by all accounts, the convolution of these should be an N-dimensional vector. Yet, somehow, h_t+1 in Equation 2 manages to be an NxM matrix. How? Please clarify this. If possible, making the source code available would be very helpful.\\n\\n4) What is the semantics of x and y in section 3? What exactly are they supposed to be doing? This is not explained in the paper beyond a vague description.\\n\\n5) Similar problems arise in section 4. The task is not explicitly described in the text. We only learn from Appendix C that it is actually to predict the next symbol. The task description mentions \\u201cstrings of length 15, 17, 19, 21, 23 and 24,...\\u201d (p. 5). But, the grammar in Fig. 3 can only generate odd length strings, it cannot generate a string of length 24. Is this a typo?\\n\\n6) Again in section 4, I have no idea how the proposed model is actually supposed to work. The motivation for the model and its description are not clear at all.\\n\\n7) The model in section 2 is hand-coded. It is not shown that it can actually learn this solution from data. What happens if the sequences are longer or if the rules are different, for example? Then you have to hand-code a completely different architecture.\\n\\n8) Which brings me to another important issue I have with this paper (and with similar papers): this whole set-up is very misguided in my mind. I think the real problem is not to come up with an architecture that would generalize systematically in a very specific (and usually toy) problem. It is to come up with an architecture that would learn to generalize systematically in a much broader set of problems. The learning aspect in sections 3-4 is a step in the right direction, but there\\u2019s no evidence that the models proposed there can learn to generalize in any task other than the very specific tasks they were designed for (if they can actually do that).\"}" ] }
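A brief worked example of the shift-as-convolution claim made in the author responses above (the filters 001 and 100 acting on a stack of memory cells). Note that neural-network "convolutions" are cross-correlations, which is the convention under which the filter 001 shifts values left, as the authors state; the numpy sketch below only illustrates that arithmetic and is not the paper's code.

    import numpy as np

    # A stack of memory cells, bottom of the stack at index 0.
    stack = np.array([5.0, 7.0, 0.0, 0.0, 0.0])

    # Cross-correlation with [0, 0, 1] reads one slot to the right, so every
    # value moves one place left (a pop); [1, 0, 0] moves values right (a push).
    pop = np.correlate(stack, np.array([0.0, 0.0, 1.0]), mode="same")   # [7. 0. 0. 0. 0.]
    push = np.correlate(stack, np.array([1.0, 0.0, 0.0]), mode="same")  # [0. 5. 7. 0. 0.]

Combined with reading and writing only at index 0, these two shifts give the push/pop behaviour of a pushdown automaton that the responses appeal to.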
SJlxglSFPB
Efficacy of Pixel-Level OOD Detection for Semantic Segmentation
[ "Matt Angus", "Krzysztof Czarnecki", "Rick Salay" ]
The detection of out of distribution samples for image classification has been widely researched. Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution. This paper adapts state-of-the-art methods for detecting out of distribution images for image classification to the new task of detecting out of distribution pixels, which can localise the unusual objects. It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task. The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than their image-level counterparts.
[ "Out-of-Distribution Detection", "Semantic Segmentation", "Deep Learning" ]
Reject
https://openreview.net/pdf?id=SJlxglSFPB
https://openreview.net/forum?id=SJlxglSFPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Fs4xmSjke", "SJguyu66Fr", "HylBaYf6FB", "BkgE5OCoYr" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740113, 1571833824353, 1571789245019, 1571707020175 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2086/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2086/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2086/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper studies the problem of out-of-distribution (OOD) detection for semantic segmentation.\\n\\nReviewers and AC agree that the problem might be important and interesting, but the paper is not ready to publish in various aspects, e.g., incremental contribution and less-motivated/convincing experimental setups/results.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"# Summary\\n\\nThis paper puts forward a study over out-of-distribution (OOD) detection for semantic segmentation. OOD detection is an active area research which has recently dealt mostly with image level classification. For semantic segmentation the same conclusions might not apply since decisions must be taken for each pixel individually. To this effect the authors propose here to study this task over a set of architectures (PSPNet, DeepLabV3+), outlier datasets (SUN, Indian Driving Dataset, synthetic images) and multiple methods for OOD detection from recent works on image classification. \\nA major difficulty in OOD works is the definition of a relevant OOD dataset and evaluation setup, and the authors propose here a novel setup for this task by adjusting the SUN and IDD datasets as OOD for Cityscapes. \\nThe experimental part is thorough with multiple evaluation metrics and some qualitative examples and discussion. \\n\\n# Rating\\nAlthough the paper studies a meaningful and interesting problem, I would reject the paper for the following reasons (more detailed arguments some lines below):\\n1) I find that the proposed dataset setups for OOD detection are not adequate for semantic segmentation and thus alter the reliability of the results.\\n2) Other than the evaluation (which has some faults), there is no proposed method addressing this task and its challenges.\\n3) The proposed MaxIoU metric proposed is not discussed and compared in more detail against the usual ones for this task.\\n\\n\\n# Strong points\\n- This work is dealing with a highly interesting and challenging problem. Indeed there has been few studies in this area and defining proper techniques and evaluation setups is challenging.\\n- The authors consider a wide array of methods for evaluation and test them across two architectures and multiple real and synthetic datasets.\\n\\n# Weak points\\n- There are two real datasets considered for OOD evaluation, but I consider there are some flaws in their utilization for OOD detection. First, the SUN dataset is quite different from Cityscapes with a significant domain gap (at least in the visual appearance and distributions of the classes) between the two datasets. 
Upsampling the SUN images to 5x their size in order to make them compatible with Cityscapes should increase the artefacts even further, making it easier to spot the gap between SUN and Cityscapes. This means that there is a risk of having the OOD detector acting merely as a domain classifier, spotting whenever the domain is different from the one used for training. \\nThis argument applies to IDD, although to a smaller extent as the datasets are both automotive, but again there are strong visual differences between samples of the same class across the two datasets.\\nThe authors argue that they somehow take this argument into consideration (Figure 2) and select only the non-ID classes to perform evaluation. However, in both networks the scene information is mixed into the representation (via pyramid pooling in PSPNet and via atrous convolutions with different dilations in DeepLabV3+). So again, instead of an OOD detector we can end up doing domain classification.\\nIt would be useful to see how the classification performance changes for ID classes, e.g. how a model trained on Cityscapes scores for cars and other ID objects in IDD compared to a model that is trained on IDD for the same classes. A big difference between these scores would correspond to a significant domain gap, and in correlation with the OOD performance we might be able to draw some more conclusions on the matter. \\nAhmed and Courville [i] propose an interesting discussion on this type of problem and propose focusing on semantic anomaly detection, i.e. detecting different classes from the same dataset, to make sure the setting has practical interest. They propose a very basic technique to detect OOD in previous classification setups, showing the limitations of previous OOD methods. I encourage the authors to check the arguments stated in that work.\\n\\n\\n- In my opinion, the Fishyscapes work from Blum et al. is unfairly dismissed here by considering only a part of the benchmark, for which animals from COCO and the internet are inserted over Cityscapes images. The authors argue that this lack of realism of the inserted images makes this dataset insufficient for OOD detection. However, in the first version of their paper, Blum et al. propose a mix of Foggy Driving, Foggy Zurich, WildDash and Mapillary as a dataset for OOD detection, which is similar to the setup proposed here. Furthermore, the latest version of Fishyscapes includes the Lost & Found dataset (mentioned in Fig. 1 here), which is recorded in similar conditions to Cityscapes with the addition of a few small outlier objects used as OOD. This is a relevant dataset and work, and I would temper the criticism of their work here. That paper has the same objectives and endeavors as the current submission.\\n\\n\\n- Although there are some discussions and experiments on multiple techniques, there is no technical contribution mitigating the limitations of previous OOD methods on classification and the challenges of OOD detection in semantic segmentation. This would have greatly helped the paper.\\n\\n- I find that the MaxIoU metric considered here is not sufficiently discussed and analysed to show its utility and the additional perspective it brings when evaluating along with the usual metrics.\\n\\n## Other less important weak points\\n- The choice of dataset in Figure 1, i.e. Lost & Found, can be misleading. This dataset is not further mentioned and evaluated in the rest of the paper. 
The image could be replaced with qualitative results from the rest of the evaluation.\\n\\n# Suggestions for improving the paper:\\n1) The current evaluation setting could have some flaws. I would propose some sanity checks and look at the classification performances over other ID classes, as suggested in the previous section.\\n\\n2) Evaluate on a setting similar to Fishyscapes Lost and Found, in which the dataset does not change much, but there are some novel objects. \\n\\n3) Include a trivial OOD baseline in the spirit of [i] to show the utility of the proposed datasets for this task by being robust to such baselines.\\n\\n4) Consider extending the breadth of OOD methods with Deep Prior Networks [ii], which have been shown to perform well on Fishyscapes for OOD detection.\\n\\n5) Add a qualitative example with OOD detection on Perlin noise images.\\n\\n# References\\n[i] F. Ahmed and A. Courville, Detecting semantic anomalies, arxiv 2019 https://arxiv.org/abs/1908.04388\\n\\n[ii] A. Malinin and M. Gales, Predictive uncertainty estimation via prior networks, NeurIPS 2018\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a comparative study of out-of-distribution (OOD) methods for semantic segmentation. To this end, the authors extend networks designed for OOD image detection to accommodate the segmentation task. For evaluation purposes the authors create a new dataset and use 3 well-known metrics, as well as a new proposed metric.\\n\\n- The paper is in general a bit dense to read and understand, since the authors fail to explain many important details. Furthermore, the structure of the paper can be improved (e.g., related work is added at the end of the manuscript).\\n\\n- Technical contribution of this work is insufficient. The authors merely employ OOD classification networks to address the pixel-level OOD task. Nevertheless, the motivation of those changes is never detailed. Furthermore, some other important aspects are not explained. For example, typically classification networks are adapted for segmentation by adding a decoding path, so that the output result is a map of the same size as the input times the number of classes. However, how the probability map for the pixel-level OOD predictions is obtained is never explained.\\n\\n- The authors also mention that GANs and AEs are excluded to limit the scope of the paper. First, this reason is not convincing. Second, I believe that including all the significant works for the task at hand is more relevant. How would these methods work compared to the proposed networks?\\n\\n- Unless I miss something, the adapted OOD versions degrade the semantic segmentation performance of the original networks. Why not use the original versions instead?\\n\\n- Results are very unclear. Which dataset represents the OOD evaluation? Further, the authors talk about results at image-level and pixel-level. Nevertheless, this is not detailed in the experimental section. Fig. 3 reports results of the different models at pixel-level. Where are the results of OOD at image-level? In addition, results are barely interpreted, and the authors basically describe the values reported in the figures. 
I would appreciate a deeper interpretation of the results.\", \"I am curious to know why the confidence OOD approach has much better performance with DeeplabV3+ than with PSPNet. While PSPNet is among the worst performing models with confidence (sometimes the worst), Deeplab + confidence is typically top-ranked. Do the authors have any insight on this?\", \"There is no evidence that the proposed metric better models the OOD pixel-level performance than other standard metrics. Which are the results on this task achieved by mIOU instead?\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper evaluates a variety of existing pixel-wise out-of-distribution detection methods in the task of semantic segmentation of road scenes. To do so, the paper introduces an evaluation protocol and applies it to two datasets (SUN and IDD) and two models (PSPNet and DeepLabV3+).\", \"strengths\": [\"The paper is well written with high quality visuals and plots\", \"The paper studies an important problem\"], \"weaknesses\": \"- The contribution seems to be rather incremental (evaluating existing methods on 2 dataset) and some related work might be missing\\n- Although the analysis is well executed, it is not clear what the community learns from the paper\\n\\n\\nAlthough I enjoyed reading the paper, I'd lean towards rejection of the paper. My main concern are as follows:\\n\\nIt is not clear what the community learns after reading this paper. Is the difference between different approaches significant? Are the class boundary pixels the biggest problem? The paper would benefit strongly from providing some indications of future steps, e. g. how to improve OOD in semantic segmentation.\\n\\nI'm missing a discussion between OOD and and the prior work on uncertainty estimation in semantic segmentation (e. g. https://arxiv.org/pdf/1703.04977.pdf and the follow up works, or https://arxiv.org/pdf/1807.00502.pdf). It seems that uncertainty estimates for semantic segmentation could be applied to the tested scenarios off-the-shelf without the need for additional modifications. Is there any reason the paper did not include approaches for uncertainty estimation in semantic segmentation? In general, it would be useful to connect the tested OOD scenarios to other topics already studied in semantic segmentation such as: uncertainty estimation, outlier detection, and distribution shift in semantic segmentation. A paragraph drawing connections and highlighting the differences would make the paper stronger.\", \"other_comments\": \"Section 3.2, \\\"Therefore only the car class as ID...\\\" I'm not sure I understand why car class is the only one considered ID for IDD. From Fig. 2, it seems that other classes such as bus, traffic light, pole, terrain, etc. could be also considered. Could the authors comment on this?\\n\\n\\\"The random normal noise is usually very easily detected by all methods, therefore Perlin noise images are used\\\". However, when looking at the results Fig 3 and Fig 4 it feels like Normal random noise is harder than Perlin noise. Could the authors comment on this?\\n\\n\\\"All OOD datasets used are mixed with Cityscapes evaluation sets\\\". Why it is important to add Cityscapes images to evaluation set? 
Wouldn't it be enough to use the OOD datasets alone?\\n\\nOne suggestion of a plot that could jointly display the information from RQ1 and RQ2 would be to plot both of them in one scatter plot (e.g. with ID IoU on the x-axis and AUROC on the other).\\n\\nFigures 3 and 4 show results for 6 methods, while Table 2 only displays 3 scenarios for 2 models. Table 2 would benefit from including all 6 models and using the same labels as in the figures. Moreover, it would be interesting to expand Table 2 by including the performance of the segmentation on in-distribution classes from the IDD and SUN datasets in addition to the Cityscapes results.\"}" ] }
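For concreteness, here is a hedged sketch of the simplest per-pixel adaptation of an image-level OOD score of the kind the reviews above discuss: taking one minus the maximum softmax probability at each pixel. The class count, input shape and threshold below are illustrative assumptions, not values from the paper.

    import numpy as np

    def pixel_ood_scores(logits):
        # logits: (C, H, W) segmentation logits; returns an (H, W) score map
        # where higher values mean "more out-of-distribution".
        z = logits - logits.max(axis=0, keepdims=True)       # stabilised softmax
        p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
        return 1.0 - p.max(axis=0)

    scores = pixel_ood_scores(np.random.randn(19, 64, 128))  # 19 classes, toy size
    ood_mask = scores > 0.5                                  # binary OOD mask

Thresholding the score map localises the unusual objects, which is the task the metrics discussed above (AUROC per pixel, MaxIoU) are meant to evaluate.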
SJg1lxrYwS
PatchFormer: A neural architecture for self-supervised representation learning on images
[ "Aravind Srinivas", "Pieter Abbeel" ]
Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning. Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images. In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer which learns to model spatial dependencies across patches in a raw image. Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches. We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low data-regime classification tasks. Specifically, we benchmark our model on semi-supervised ImageNet classification which has become a popular benchmark recently for semi-supervised and self-supervised learning methods. Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained using only 1% and 10% of the labels on ImageNet, showing the promise of generative pre-training methods.
[ "Unsupervised Learning", "Representation Learning", "Transformers" ]
Reject
https://openreview.net/pdf?id=SJg1lxrYwS
https://openreview.net/forum?id=SJg1lxrYwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "RG1EvURgtL", "ryxGPZfzqS", "S1gbpSTe9B", "B1xXZpgaFB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740084, 1572114777808, 1572029880714, 1571781883257 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2084/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2084/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2084/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper presents a generative approach to learn an image representation along a self-supervised scheme.\", \"the_reviews_state_that_the_paper_is_premature_for_publication_at_iclr_2020_for_the_following_reasons\": [\"the paper is unfinished (Rev#3); in particular the description of the approach is hardly reproducible (Rev#1);\", \"the evaluation is limited to ImageNet and needs be strenghtened (all reviewers)\", \"the novelty needs be better explained (Rev#1).\", \"It might be interesting to discuss the approach w.r.t. \\\"Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles\\\", Noroozi and Favaro.\", \"I recommend the authors to rewrite and better structure the paper (claim, state of the art, high level overview of the approach, experimental setting, discussion of the results, discussion about the novelty and limitations of the approach).\"], \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Contributions:\\nThe paper aims to develop generative pre-training method for learning representations of images. Although representation learning for images has been widely investigated, the present work distinguishes itself by a combination of the following: \\na) building on the use of transformers as a series of layers after initial convolutional layers; \\nb) using self attention for aggregating context; \\nc) learning spatial dependencies across patches;\\nd) training on the task of predicting two bit gray scale version of randomly masked patches in an image\", \"results\": \"Limited experiments aim to compare against the CPC (Hefnaf et al 2019) and Selfie (Trinh et al 2019) algorithms both of which are contrastive unlike the generative approach adopted in the paper. After pre-training on unlabeled imageNet datasets the proposed approach is competitive with these algorithms with roughly similar results. \\n\\nEvaluation/Suggestions:\\nOverall the paper combines ideas from several previous works in ways that are not sufficiently novel in the opinion of this reviewer and the experiments are very limited to the imageNet dataset with 1%, 10% and 20% of labels provided to downstream classification modeling, and evaluated on top-1 and top-5 accuracies. The paper could improve on its experimental evaluation bycomparing on multiple datasets, showing error bars when averaging across multiple samplings (eg for getting the 1% label set from the entire imageNet dataset) and also comparing with other approaches even when they dont directly aim to learn representation from unlabeled data (eg Image Transformers by Parmar et al). In addition the description is very high level and does not provide enough details for experimental reproducibility. 
For example the reviewer had to actually guess at some of specifics of the overall end-to-end architecture since it was not fully described precisely eg in a diagram. It would be relatively easy (but important) to provide suffcient detail for reproducibility\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper attempts unsupervised representation learning, via a patch prediction task on ImageNet. The paper is sparse on details, but the method appears to be: (1) split the image into non-overlapping visible and masked patches, (2) from features extracted from the visible patches, predict the masked patches. Rather than predict RGB, they choose to predict 2-bit grayscale images. Also, rather than use the full patches, they use random crops of the input ones, and a center crop of the output ones.\\n\\nThe paper seems to be an early draft of something bigger, submitted with the hope of getting some feedback. The method description is mostly composed of tiny details, such as the number and sizes of the patches; I recommend rewriting this to focus on the big idea first, and pack the details into another sub section like \\\"Implementation Details\\\". The paper barely includes any evaluation. Also, the method does not appear to be very novel: I recommend the authors look at and compare against \\\"Unsupervised Visual Representation Learning by Context Prediction\\\" (ICCV 2015), which is conceptually very similar.\\n\\nThe evaluation right now is not good. \\\"Unknown\\\" is not a valid point of comparison. I understand the code for CPC++ might not be released yet, but the authors could at least implement their best approximation of it, and also find older works (which CPC compared against in their paper), to fill out the results and make a convincing argument.\\n\\nIn Table 2, the proposed model performs worse than CPC++, yet its values are bolded anyway. Please only put the best-performing result in bold.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The motivation of this paper is to use the idea of Transformer-based NLP models in image data, which is appreciated. However, this seems to be a far unfinished paper. The introduction part is well written. But, the method is not well described. It is very unclear how exactly the model is built. Moreover, the network structure in Figure 2 is not explained. The experimental part is very brief, and unconvincing. Much more investigations and comparisons are needed.\", \"minors\": \"deicisons? \\nmodel the only the most significant few bits -> model only the most significant few bits\\nposition position embedding -> position embedding\"}" ] }
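To make the patch-prediction setup that Reviews #1 and #2 describe concrete, here is a hedged numpy sketch of splitting an image into non-overlapping patches, masking a random subset, and quantizing the masked patches to a 2-bit (4-level) grayscale target. The patch size, mask ratio, rounding scheme, and all names are assumptions of this sketch, since the paper leaves these details underspecified.

```python
import numpy as np

def make_patch_targets(image, patch=16, mask_ratio=0.25, rng=None):
    """image: (H, W, 3) uint8 array. Returns the visible patches, the indices
    of the masked patches, and 2-bit grayscale targets for the masked patches.
    All concrete choices here are illustrative, not the paper's."""
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    gh, gw = h // patch, w // patch
    # Non-overlapping patch grid: (gh*gw, patch, patch, 3)
    patches = image[:gh * patch, :gw * patch].reshape(
        gh, patch, gw, patch, 3).transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, 3)
    n_mask = int(mask_ratio * len(patches))
    masked_idx = rng.choice(len(patches), size=n_mask, replace=False)
    # Grayscale, then quantize to 2 bits (4 levels) as the prediction target.
    gray = patches[masked_idx].mean(axis=-1)               # (n_mask, patch, patch)
    targets = np.clip(gray // 64, 0, 3).astype(np.int64)   # values in {0, 1, 2, 3}
    visible = np.delete(patches, masked_idx, axis=0)
    return visible, masked_idx, targets
```

Quantizing the target drastically shrinks the output space (4 classes per pixel instead of a full RGB distribution), which is presumably why the 2-bit grayscale choice was made.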
SJlJegHFvH
Address2vec: Generating vector embeddings for blockchain analytics
[ "Ali Hussein", "Samiiha Nalwooga" ]
Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority. All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security its original structure does not allow for direct analysis of address transactions. Existing analysis methods of the Bitcoin blockchain can be complicated, computationally expensive or inaccurate. We propose a computationally efficient model to analyze bitcoin blockchain addresses and allow for their use with existing machine learning algorithms. We compare our approach against Multi Level Sequence Learners (MLSLs), one of the best-performing models on bitcoin address data.
[ "crypto-currency", "bitcoin", "blockchain", "2vec" ]
Reject
https://openreview.net/pdf?id=SJlJegHFvH
https://openreview.net/forum?id=SJlJegHFvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "mNvt_xyu-", "Ske6w1375S", "H1ey-GxEFH", "BJgjf53jOH" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740056, 1572220773509, 1571189238613, 1570650642952 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2083/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2083/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2083/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper propose to analyze bitcoin addresses using graph embeddings. The reviewers found that the paper was too incomplete for publication. Important information such as a description of datasets and metrics was omitted.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Authors propose to apply the existing machine learning model to analyze bitcoin blockchain addresses. It uses autoencoder to extract the feature of the transaction and construct a transaction graph. Then the node2vec algorithm is used to generate node embeddings for the given graph. The task is to predict the behavior of addresses. The experiments are conducted against Multi Level Sequence Learners (MLSLs), one of the best performing models on bitcoin address data.\", \"pros\": \"This work studies an interesting and challenging problem.\\n\\nCons\\n1. This is an unfinished work. The proposed method lack of detail description.\\n2. The performance is much lower than the MLSLs baseline methods.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose a new method for generating vector embeddings. The studied problem is important and the topic is related to area of Bitcoin.\\n\\nEmpirical studies on some dataset (no description about the dataset) show some results on some evaluation metrics (no clear description about the metrics). For the methods 1-MLSL, 2-MLSL and 3-MLSL, it is seems that they are better than the proposed one (i.e., A2V) on some metrics, which is then inconsistent with the claims in the paper.\\n\\nMy major concern is that the paper is a bit too short and is lack of some necessary information, for example:\\n\\n1 The authors are encouraged to provide sufficient background introduction, so that the reader can have a big picture of the problem and area.\\n\\n2 The authors are encouraged to provide a detailed discussion and justification about the motivation, as well as the challenges and intuitions about the proposed method in the Introduction Section.\\n\\n3 The authors are encouraged to show a detailed derivation about the technical details, in particular its difference compared with the major baseline, i.e., MLSL.\\n\\n4 The authors are encouraged to follow the typical writing about the experiments in a paper, e.g., description about the datasets, evaluation metrics, baseline methods, parameter configurations and results analysis, etc. 
\\n\\nBased on the above comments, I have to make a reject recommendation.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to use an autoencoder, networkX, and node2Vec in succession to convert a Bitcoin transaction to a vector. This is then used to predict whether a Bitcoin address will become empty after a year. The results are better than flipping a coin, but worse than an existing baseline.\\n\\nGiven the apparent lack of any technical contribution to machine learning theory or practice, the inconclusive empirical results, and the generally unpolished writing (e.g., long run-on sentence in the conclusion, vague problem definition), I do not believe this paper is suitable for publication.\"}" ] }
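Review #3 outlines the pipeline as: build a transaction graph, run node2vec to embed its nodes, then feed the embeddings to a downstream classifier. A minimal sketch of that idea is given below, using uniform random walks (node2vec with p = q = 1, i.e. DeepWalk) fed to gensim's Word2Vec. Every identifier and the toy graph are illustrative assumptions, not the authors' code; networkx and gensim >= 4 are assumed.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, num_walks=10, walk_length=20, seed=0):
    """Uniform random walks over the graph -- node2vec with p = q = 1."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_length:
                nbrs = list(graph.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

# Toy transaction graph: nodes are addresses, edges are transactions.
g = nx.Graph()
g.add_edges_from([("addr_a", "addr_b"), ("addr_b", "addr_c"), ("addr_a", "addr_c")])

walks = random_walks(g)
model = Word2Vec(walks, vector_size=64, window=5, min_count=1, sg=1, epochs=5)
embedding = model.wv["addr_a"]  # feed this vector to any downstream classifier
```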
HkgAJxrYwr
Attack-Resistant Federated Learning with Residual-based Reweighting
[ "Shuhao Fu", "Chulin Xie", "Bo Li", "Qifeng Chen" ]
Federated learning has a variety of applications in multiple domains by utilizing private training data stored on different devices. However, the aggregation process in federated learning is highly vulnerable to adversarial attacks so that the global model may behave abnormally under attacks. To tackle this challenge, we present a novel aggregation algorithm with residual-based reweighting to defend federated learning. Our aggregation algorithm combines repeated median regression with the reweighting scheme in iteratively reweighted least squares. Our experiments show that our aggression algorithm outperforms other alternative algorithms in the presence of label-flipping, backdoor, and Gaussian noise attacks. We also provide theoretical guarantees for our aggregation algorithm.
[ "robust federated learning", "backdoor attacks" ]
Reject
https://openreview.net/pdf?id=HkgAJxrYwr
https://openreview.net/forum?id=HkgAJxrYwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "mG8yUNB1AS", "Bklb6D92sB", "HkxDJvg1cS", "HJxmkyoaFr", "Skxw--ijYB" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798740028, 1573853112990, 1571911391190, 1571823323122, 1571692799277 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2082/Authors" ], [ "ICLR.cc/2020/Conference/Paper2082/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2082/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2082/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes an aggregation algorithm for federated learning that is robust against label-flipping, backdoor, and Gaussian noise attacks. The reviewers agree that the paper presents an interesting and novel method, however the reviewers also agree that the theory was difficult to understand and that the success of the methodology may be highly dependent on design choices and difficult-to-tune hyperparameters.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary of Revisions\", \"comment\": \"We want to thank the reviewers for their suggestions and comments! We have posted a revised version of the paper with several improvements based on the suggestions from the reviewers.\\n\\n1. We revised our proof in A.1 to include the whole passage as well as the exact reference to the previous work. \\n2. We added more discussion and analysis in the experiment part 4.2 and 4.3 to help readers understand the intuition and design choices.\\n3. The revision also includes additional experiments on \\n (1) hyperparameter selections in appendix A.3.\\n (2) alternative linear estimator and weighting schemes in appendix A.4.\\n (3) effects of underrepresented data in appendix A.5.\\n (4) another potential attack in appendix A.7.\\n4. We added in-depth analysis in appendix A.6.1 to explain a few phenomena raised in the backdoor attack.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an approach to robust federated learning that uses robust regression to weigh all the model parameter coefficients in order to achieve the robustness.\\n\\nSpecifically, for each coefficient in the model, a repeated median estimator is used to compute a linear regression fit, and the residual of each individual model's coefficient is normalized and used to compute a confidence score for that coefficient. Coefficients which have too large a confidence have their confidence reset to zero, to avoid the influence of outliers. The local model's coefficients are now aggregated using a weighted average with weights given by these confidence scores.\", \"they_compare_their_algorithm_experimentally_to_reasonable_baselines_of_model_aggregation_algorithms\": \"the FedAvg algorithm, 3 recent robust FL algorithm, and an approach based on a standard robust regression estimator, using experiments on 4 different datasets and 4 different neural net architectures. 
They test the robustness of these algorithms to label flipping (MNIST, CIFAR-10), backdoor attacks, and multiplicative gaussian noise corruption of the model coefficients.\\n\\nOverall the paper presents an interesting and novel approach to robustness in FL, using a robust regression estimator to aggregate the model coefficients. The motivation of the algorithmic design is for the most part clear, but the rationale behind the particular choice of the parameter confidence score is unclear, and should be clarified. The theory in support of the method seems reasonablish, but key definitions and steps in the proof are not explained in detail, referring instead of an earlier paper. In particular, it is not clear how the smoothness of the loss function and subexponentiality of its derivatives enter into the analysis of the method, nor do these parameters enter into the final error bounds. Also, how is mu defined in the error bound: what does it mean that it is the expected global model --- does this mean this would be the model if all the participating workers were non-corrupted, honest, and had iid data? This theory is hard to parse: more effort should be spent in clarifying the assumptions and definitions and showing how the claimed result follows. The specific result referenced from earlier work should be stated unambiguously as a proposition so the reader sees how it applies where it is used.\\n\\nThe experimental results show that the algorithm performs slightly better than the considered baselines in most situations considered, but the important question of the impact of hyperparameter selection for the method (e.g. the clipping at which the weights of \\\"outlier\\\" parameters are set to zero) and the competing methods (e.g. the clipping in the trimmed mean estimator) is not addressed-- the authors indicate that the method is robust to some choices and fixes them in the appendix. This makes it difficult to tell whether the method performs better due to careful or lucky hyperparameter selection. \\n\\nAlthough the method is interesting and novel, and seems principled, the theoretical claims are unclear, and the experimental evaluation is not sufficiently informative about the impact of hyperparameter selection to draw conclusions about the effectiveness of this method of model aggregation as opposed to the baselines considered. In particular because of the latter issue, I'm leaning towards reject, but would be willing to change my score if this were addressed.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an aggregation algorithm, based on repeated median regression and residual-based weighting to defend federated learning from adversarial attacks. Experiments are shown to demonstrate the method robustness against label-flipping, backdoor and Gaussian noise attacks.\\n\\nThe paper is interesting, the topic recent and the methodology quite new, but a number of comments arise: \\n\\n1) the methodology seems to rely on a number of ad-hoc steps, hyper parameter-dependent, that might hinder reproducibility and generalization: i) residual normalization via IRLS; ii) confidence assignment; iii) extreme value correction. 
Could be interesting to show analysis how varying such hyper-parameters affects the results, or otherwise to add further explanation at the comments in Appendix A.3 as to why the model seems to be insensitive to \\\\lambda, or why \\\\delta is significantly affected by data distribution.\\n\\n2) could the repeated median estimator still be affected by the \\u201cfederated size\\u201d i.e. the number of models involved in the federated learning? Is there any bound on the number of participants, below which the estimator would perform poorly?\\n\\n3) proof of eq (14) could be more readable if\\n\\t\\u2022\\tfull passages were shown (for instance for the reviewer the first passage was not immediate and took adding and subtracting \\\\sum_i z^(i)\\\\mu/\\\\sum_i z^(i), and further simplification to be addressed), and\\n\\t\\u2022\\tii) reference to the exact point in which previous results are used were made explicit (i.e. where in (Yin et al 2018) the bounds are proven).\\n\\n4) proof of eq (14), when the attacker a\\\\in B is fixed, then |\\\\hat{y}^(i) - \\\\tilde{y}| should be replaced by |\\\\hat{y}^(a) - \\\\tilde{y}|\\n\\n5) A last question concerns the aspect of \\\"fairness\\\" of this learning strategy. By removing aberrant updates there is still a chance of excluding from the learning process nodes that are intrinsically different form the average ones. In this sense, it is not clear from the paper how the reweighing strategy can mitigate this aspect, as there is no certainty that underrepresented data samples would not be rejected with the proposed scheme. Still aspect could have been better investigated in controlled scenarios.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes an algorithm for mitigating poisoning attacks in federated learning settings and compares it , on four different datasets, against state-of-the-art baselines.\\n\\nExcept for some minor issues (see the list below), the paper is well-written and -organized. The description of the proposed algorithm (in pseudo code and using the illustration in Figure 2) is very clear. Overall, the experiments are carefully described.\\n\\nMy main concern is that many choices in the design of the proposed algorithm lack context/discussion and thus appear rather ad-hoc. For instance,\\n- Why is the repeated median estimator used for estimating the linear regression? Could other robust estimators have been used?\\n- Similarly, could alternative weighting schemes be used in equations (3)-(5)?\\nI think it's important to provide more context and discuss possible alternatives. An important element is the exact threat model that the authors are considering. E.g., in the last paragraph on page 4, the authors mention specific attack strategies like altering only 10% of the model parameters. It appears that the design of the model weighting scheme aims at defending against these specific types of attacks. It will be good to either discuss or evaluate empirically how this scheme performs against other strategies.\\n\\nThe theoretical guarantee in Section 3.2 is a bit sketchy in my opinion. In what sense is $\\\\mu$ the \\\"expected value of the global model\\\"? I.e. what is the expectation over? Consequently, I could not follow the statement in equation (14). 
Some explanation in plain text is needed here, too: in what sense does this equation provide a guarantee?\\n\\nIn the experiments, several aspects deserve further discussion: (1) the poor performance of FoolsGold almost across the entire board (except for the Gaussian noise attacks), which may indicate that this method was applied outside the threat model it was designed for; (2) the failure of all the baselines on CIFAR-10 for the naive attacking approach, while they perform fairly well on MNIST; (3) why does the attack success rate starts increasing in Figure 4 for the baseline methods only after ~25 iterations? (4) why do the baseline methods perform so poorly against label-flipping against on MNIST (Figure 3) while performing fairly well on CIFAR-10 and Amazon reviews (Table 1/2)? - I think that answering those questions may shed insights into the type of attacks that the different defences can / cannot withstand. I'd also like to challenge the authors to address whether they expect their defence to match or outperform the baselines on *any* attack strategy, or whether they can come up with scenarios where some of the baselines perform better? I would expect that the latter should be possible; it would not diminish the value of the proposed defence but shed more clarity on its possible limitations.\", \"list_of_minor_issues\": [\"in the abstract: \\\"aggression\\\" -> \\\"aggregation\\\"\", \"p.1: I would omit the statement in brackets \\\"less than 100 lines\\\".\", \"p.2: some of the related work discussion repeats content from the introduction\", \"p.3: \\\"summaries\\\" -> \\\"summarizes\\\"\", \"p.3: what does that mean: \\\"has a high breakdown point of 50%\\\"? Please explain/clarify.\", \"p.4: \\\"is the k-the diagonal of matrix in Hn\\\" -> \\\"is the k-th diagonal element of the matrix Hn\\\"\", \"p.4: my pdf reader couldn't render the binary operator on the right hand side of equation (8)\", \"p.5: \\\"the details of the proof is presented\\\" -> \\\"are\\\"\", \"p.8: \\\"that of which\\\" -> \\\"whose\\\"\", \"p.8: upper case \\\"We\\\" after comma\", \"p.8: first column \\\"Acc\\\" in Table 3: FedAvg has the highest accuracy. Generally, bold numbers in tables do not always mark the best-performing method. Sometimes, bold numbers are entirely missing. In cases where the difference is insignificant (which often appears to be the case) I would mark multiple numbers in bold, as appropriate.\"]}" ] }
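Since Review #1 above gives the clearest summary of the aggregation rule (a repeated median line fit per coefficient, normalized residuals turned into confidence weights, apparent outliers zeroed, then a weighted average), here is a minimal numpy sketch of that scheme as the review describes it. The scale estimate, the confidence function, and the threshold tau are illustrative stand-ins for the paper's hyperparameters, not the authors' exact choices.

```python
import numpy as np

def repeated_median_line(y):
    """Siegel's repeated median regression of y against x = 0, 1, ..., n-1."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slopes = [np.median([(y[j] - y[i]) / (x[j] - x[i]) for j in range(n) if j != i])
              for i in range(n)]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

def reweighted_mean(coeffs, tau=2.0):
    """coeffs: one model parameter collected across all workers. Fits a
    repeated median line to the sorted values, normalizes the residuals,
    turns them into IRLS-style confidences, zeroes apparent outliers, and
    returns a weighted average. tau and the scale are illustrative knobs."""
    coeffs = np.asarray(coeffs, dtype=float)
    order = np.argsort(coeffs)
    ys = coeffs[order]
    slope, intercept = repeated_median_line(ys)
    resid = ys - (intercept + slope * np.arange(len(ys)))
    scale = np.median(np.abs(resid)) + 1e-12     # crude residual normalization
    z = resid / scale
    conf = 1.0 / (1.0 + z ** 2)                  # IRLS-style confidence score
    conf[np.abs(z) > tau] = 0.0                  # zero-out extreme values
    w = np.empty_like(conf)
    w[order] = conf                              # map back to worker order
    return float(np.dot(w, coeffs) / (w.sum() + 1e-12))
```

Fitting against the sorted values makes the linear model a crude quantile fit, so residuals measure how far each worker's coefficient deviates from the bulk; a poisoned update should receive a large residual and hence near-zero weight.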
rJxRJeStvB
Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding
[ "Hyunwook Kang", "Aydar Mynbay", "James R. Morrison", "Jinkyoo Park" ]
Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning? In addition to the challenge of achieving near-optimal performance in large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications. In this paper, we suggest a method that achieves the first success in both challenges for robot/machine scheduling problems. Our method comprises three components. First, we show any robot scheduling problem can be expressed as a random probabilistic graphical model (PGM). We develop a mean-field inference method for random PGM and use it for Q-function inference. Second, we show that transferability can be achieved by carefully designing two-step sequential encoding of problem state. Third, we resolve the computational scalability issue of fitted Q-iteration by suggesting a heuristic auction-based Q-iteration fitting method enabled by transferability we achieved. We apply our method to discrete-time, discrete space problems (Multi-Robot Reward Collection (MRRC)) and scalably achieve 97% optimality with transferability. This optimality is maintained under stochastic contexts. By extending our method to continuous time, continuous space formulation, we claim to be the first learning-based method with scalable performance in any type of multi-machine scheduling problems; our method scalably achieves comparable performance to popular metaheuristics in Identical parallel machine scheduling (IPMS) problems.
[ "reinforcement learning", "multi-robot/machine", "scheduling", "planning", "scalability", "transferability", "mean-field inference", "graph embedding" ]
Reject
https://openreview.net/pdf?id=rJxRJeStvB
https://openreview.net/forum?id=rJxRJeStvB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "TdX0BPMmUw", "SJx0hqNhjH", "rkxYp8N2oH", "rkgTSLN3jr", "Hkg7ErgLcr", "BkxFKINy5S", "rkgBtXVhFr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739997, 1573829301664, 1573828289077, 1573828165212, 1572369707431, 1571927680519, 1571730301298 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2081/Authors" ], [ "ICLR.cc/2020/Conference/Paper2081/Authors" ], [ "ICLR.cc/2020/Conference/Paper2081/Authors" ], [ "ICLR.cc/2020/Conference/Paper2081/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2081/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2081/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Unfortunately, the reviewers of the paper are all not certain about their review, none of them being RL experts. Assessing the paper myself\\u2014not being an RL expert but having experience\\u2014the authors have addressed all points of the reviewers thoroughly.\", \"title\": \"Paper Decision\"}", "{\"title\": \"We addressed all your concerns. Please check!\", \"comment\": \"We were able to improve our paper a lot thanks to your precious comments. We hope you will be able to increase your reviewer score if our revised paper and comments below addresses your concerns.\\n\\n1. Response to concern 1: It is great that you pointed out ride-sharing and package delivery since journal version of this paper were supposed to include such applications. Those problems can actually be formulated as a MRRC problem with no extra cost if we are given the set of user requests to serve. This feature was supposed to be added in the journal version of this paper, but we decided to add a paragraph about this in the last paragraph of the conclusion section. (Of course, in reality, we are not given the whole set of user requests to serve. There can be some scheduled customer arrival with random arrival time, or all customers may arrive randomly with some arrival rate (e.g. following Poisson processes). Our next working paper on stochastic customer arrival deals with such situations by adding 'vehicle location' as a task to which we can assign a robot.)\\n\\n 2. Response to concern 2: Since scalable performance means that 'the optimality gap does not significantly decrease as problem size increases', the optimal solution must be computed for every problem we test. Therefore, the test size had to be bounded not because our method is not scalable but because exponentially increasing computation time required to get the \\u201coptimal solution\\u201d baseline. However, this experiment result is enough to say our method is 'scalable' since near-optimal scheduling with the size of 8-robots and 50-tasks with time-dependent rewards is certainly an unprecedented triumph (see Rossi et al. 2018 and Li et al. 2017 in our reference list). The word 'scalable' does not usually mean that it scales infinitely large in any multi-robot planning literature; for example, see Omidshafiei et al. 2017, 'Scalable Accelerated Decentralized Multi-Robot Policy Search in Continuous Observation Spaces'. \\nWhile it was not our problem scope to deal with 1000 robots with 1 million robots, your comment was really interesting and made us consider whether the proposed robot scheduling method can be used for such large problems. 
We certainly believe the answer is `yes, ours will do best among all possible methods' and here is why. Any heuristic which can deal with 1000 robots, million tasks must be based on a local-search based policy (which is supposed to near-optimally solve small local problems with few numbers of robots and tasks). In our paper, we showed that at least 8 robots/50 tasks size problem can be near-optimally solved with polynomial computational complexity, which is unprecedented triumph. This means that partitioning 1000 robots/million tasks to a lot of 8 robots/50 tasks is likely to give us a solution better than any of existing heuristics. In addition, even without partitioning our proposed method can compute assignment of 1000 robots for million tasks with the fairly fast amount of time; to be precise, our auction-based joint assignment choice rule has a computational complexity of O(number of robots x number of tasks), resulting in O(10^9). That is, at each time-step, any desktop computer can compute a joint assignment within 1 second. 1 second is very small time for a system of 1000 robots traveling to serve a million tasks. Of course, we cannot guarantee the near-optimal performance we achieved for 8/50 tasks. But the way our auction-based joint assignment is chosen was designed to achieve optimality at least locally around robots, where local here means at least 8 robots/50 tasks scale. \\n\\n3. Response to concern 3: Since we propose the first learning-based to solve multi-robot/machine planning with time-dependent rewards, we can\\u2019t provide any learning-based baseline. It is true, however, in the previous version we did not include the most-up-to-date baseline for the MRRC problem. We added certainly the most up-to-date baseline that can solve MRRC with deterministic task completion time and linearly decaying rewards. \\n\\n4. Response to concern 4: We newly included an indeed comprehensive test and analysis on transferability on page 9. \\n\\n5. Response to concern 5: We admit that previous version had Typos and English issues. Those were resolved in the new version. \\n\\nWe appreciate for helping us improve our paper to a great degree.\"}", "{\"title\": \"We addressed all your concerns. please check!\", \"comment\": \"We appreciate how much your comments improved the completeness of our paper. We hope that our new version of the paper addresses the two following concerns.\\n1.\\tAbout lack of exposition of PGM and structure2vec: It was great that you pointed this out. Thanks to your pointing this out, we included paragraphs introducing all the necessary backgrounds required to understand our paper (for PGM, see page 4, section 3.1, first and second paragraph; for structure2vec, see page 4, section 3.1, third paragraph)\\n\\n2.\\tAbout universal PPL: We included a footnote giving credit to papers in universal PPL, with citation of the paper you recommended us (see page 5 footnote)\\n\\n3. We believe this version has improved its readability to a great degree. We are sure you will feel this version is polished enough to be accepted. Thank you.\"}", "{\"title\": \"We addressed all your concerns. Please check!\", \"comment\": \"We are really glad that you are giving us some room to make your rating higher. Thanks to your comments, we found that the previous description of our RL problem reads how you described it. We wrote an entirely new introduction. We appreciate how much your helpful comment increased the readability of our paper. 
Now I will explain what a new version makes clear about, addressing your concerns that came from our poor writing in the previous version.\\n\\nFirst, our paper's problems are indeed different from typical reinforcement learning problems that try to learn from end to end. However, Dai et al. 2017's TSP learning problem is also different from such typical reinforcement learning problems in the exact same way. In Dai 2017 paper's method for TSP, they don't learn the travel distances among tasks. That information is assumed to be given as prior information like our paper assumes that task completion time distribution is given as prior information (In maze for MRRC experiment, we don't extract any information from the problem. Task completion time distribution is assumed to be given thanks to Dijkstra\\u2019s algorithm for deterministic environments or thanks to dynamic programming for environments with stochastic environments). The types of problems addressed by Dai and our paper are called \\\"planning\\\" problems. TSP is about deterministic planning, while MRRC in our paper is about both deterministic and stochastic planning. In planning problems, some information is assumed to be given. Solving planning problems using reinforcement learning is, as you described, not like typical RL problems that try to learn from end to end. We are sorry that all those points above were not certain in our previous version. They are clear in the current version.\\n\\nSecond, it is true that the key insights in Dai et al. 2017 were to highlight the fact that non-learning based methods for combinatorial problems are not exploiting distributions on problem instances to learn from. This key insight you pointed out actually exactly applies to non-learning methods for our problems (MRRC and IPMS) which are typical combinatorial optimization problems (In the same way non-learning methods for TSP has such issue, any non-learning methods for MRRC and IPMS have the same issue) One of the key contributions of Dai et al. 2017 is highlighting that for any optimization problems, learning methods enable us to exploit the distribution of problem instances to learn from. Our method, as described in the experiment section of the current version, was trained using problem instances that were randomly sampled from a certain probability distribution and was tested using problem instances that were randomly sampled from the same probability distribution we used for training. This was both for MRRC and IPMS. This is exactly what Dai et al. 2017 did for TSP. \\n\\nThird, we agree that our previous version lacked explanation and so it could make anyone feel that we are \\\"using RL to solve a subset of a combinatorial problem that was studied by RL before\\\". How much is MRRC different from TSP and how much it is more difficult? How much is IPMS different form TSP? As a simple example, suppose that you are given 100 tasks in 10x10 grid and 10 robots that are all located at one corner. With TSP, using how many robots can best solve this problem efficiently? Only one. More than two robots cause inefficient movement costs. In contrast, one can see that for MRRC and IPMS the number of robots we want is \\\"the more the better\\\". Other than the fact that we solve multi-robot problems, decaying rewards makes our problem much more complicated than TSP.\\n\\nFourth, it is true that we lacked baselines for MRRC. In the current version, we added the most up-to-date heuristic for MRRC with deterministic/linearly decaying rewards. 
In terms of learning-based baselines, we can't provide any baseline because we are the first learning-based method that solves any type of multi-robot combinatorial optimization problem. \\n\\nWe again appreciate your comments, and please rate our paper higher if our new version of the paper addresses all your concerns! Thank you.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors study a combinatorial multi-robot scheduling problem (in fact the robot part is a bit inflated, since the experiments only involve agents in a simulated discrete state-space maze) using a method that builds upon recent advances from [Dai et al. (2017)]. The main contribution is to consider each of the steps taken by Dai et al. to solve combinatorial problems on graphs, and adapt them to the considered scheduling problem.\\n\\nNot being an expert in RL, my assessment should be discounted. However, I am not sure I follow properly the main idea of the paper. The point of Dai et al. was to use RL to solve a wide family of combinatorial problems. Now, the authors claim to build upon these ideas to solve... what looks essentially like a far more standard RL problem, and not necessarily a combinatorial optimization problem. The main insight by Dai et al. was to highlight the fact that combinatorial problems are usually solved (or approximated) without \\\"warm starts\\\", i.e. they do not consider distributions on problem instances to learn from. The problem considered by the authors is, quite on the contrary, a typical RL problem where information is extracted from the problem's structure (here a maze). Therefore, I feel there is something of a fundamental contradiction going on at a fairly high-level, in the sense that the paper \\\"uses RL to solve a subset of combinatorial problems that were studied by RL before\\\". The absence of other baselines in experiments make this even more suspicious. Therefore I believe the paper's presentation could be greatly improved if it were better \\\"located\\\" within the RL literature (which is almost non-existent in the very brief bibliographic section) and that the authors were able to show that their proposals are original, within an RL context.\", \"minor_points\": [\"the comment \\\"While learning-based methods are generally believed to suffer exponentially increasing training requirements as problem size (number of robots and tasks) increases, our method\\u2019s training requirement is empirically shown not to scale while maintaining near-optimal performance\\\" --> this is too loose a statement. Provide more evidence or references.\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Paper addresses the problem of centralized multi-machine task assignment in an RL setting (\\\"multi-robot reward collection\\\"). Claim is that this has not been successfully done in a RL setting before, so a new problem is proposed (multi-agent pac-man) and results are presented on this problem. 
Approach proposed extends prior work from Dai 2017 and 2016 (which I am a priori unfamiliar with), and it seems to me that the exposition of this method leans a bit too heavily on presumed familiarity with those works. An auction-consensus approach is proposed whereby each machine makes a bid for each unclaimed task, then the coordinator picks the highest bid and assigns that task-machine pairing, after which the remaining machines make bids for the remaining tasks, and so forth.\\n\\nAs it stands, part of me leans toward rejecting for a couple reasons.\\n1. The exposition of the method needs to be improved to assume less background knowledge of the heuristic PGM and structure2vec methods, investing some text introducing them. Appendix C seems to do part of this, and probably should be integrated into the body of the paper.\\n2. Another view of \\\"random graphical models\\\" is the sampling trace of a universal PPL. This is studied in, e.g. https://cocolab.stanford.edu/papers/daipptr.pdf so it seems like this deserves at least a brief additional literature review as opposed to simply diving into MFI. Appendix D looks OK: since the action space is discrete, then a fixed point approach becomes feasible.\\n\\nOn the other hand, the experiments are good, the auction approach is a nice idea/novel. The ablation experiment is good, and the comparison against OR tools is also good to have. Insofar as the structure2vec is representation-oriented, it seems like a decent fit to the venue.\\n\\nOn balance, I think the paper needs too much polish and revision to accept at this time.\", \"minor_nits\": \"The word \\\"seminar\\\" is used a couple times, where from context I think \\\"seminal\\\" is intended.\\nSome figure refs are broken.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors propose a reinforcement learning method for multi-robot scheduling problems. They state the method's scalable performance and transferability. My major concerns are as follows.\\n\\n1. The paper is not easy to read. In my understanding, multi-robot scheduling is a very important problem and is very similar to many scheduling problems in complex platforms such as the dispatch system for ride sharing and package delivery. However, I did see any real application in this paper. It is very difficult to understand how this proposed method works and what is the benefit under non trivial environment.\\n\\n2. The experiments (2~8 robots, 20~50 tasks) cannot support the scalable performance or large problems very well. How about thousands and millions of robots/tasks, e.g. routing planning or dispatching for vehicles in a ride sharing platform? \\n\\n3. It is not convincing without comparison with necessary baseline methods.\\n\\n4. There is no in-depth analyses for the transferability.\\n\\n5. There are many typos, such as the missing figure citation with Figure ??.\"}" ] }
HyxTJxrtvr
Learning a Spatio-Temporal Embedding for Video Instance Segmentation
[ "Anthony Hu", "Alex Kendall", "Roberto Cipolla" ]
Understanding object motion is one of the core problems in computer vision. It requires segmenting and tracking objects over time. Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time. We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation. Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time. Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding. Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset.
[ "computer", "vision", "video", "instance", "segmentation", "metric", "learning" ]
Reject
https://openreview.net/pdf?id=HyxTJxrtvr
https://openreview.net/forum?id=HyxTJxrtvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "azQBA1vbaM", "B1lvv-6Yor", "Hyexx-TFoH", "HkezDxpFiB", "SJe3pkTFjS", "rkl524DAKH", "SJlSrPVAKB", "rkxg1rP5KH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739968, 1573667167006, 1573667047837, 1573666905625, 1573666755688, 1571873970476, 1571862332663, 1571611864420 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2080/Authors" ], [ "ICLR.cc/2020/Conference/Paper2080/Authors" ], [ "ICLR.cc/2020/Conference/Paper2080/Authors" ], [ "ICLR.cc/2020/Conference/Paper2080/Authors" ], [ "ICLR.cc/2020/Conference/Paper2080/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2080/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2080/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a spatio-temporal embedding loss for video instance segmentation. The proposed model (1) learns a per-pixel embedding such that the embeddings of pixels from the same instance are closer than embeddings of pixels from other instances, and (2) learns depth in a self-supervised way using a photometric reconstruction loss which operates under the assumption of a moving camera and a static scene. The resulting loss is a weighted sum of these attraction, repulsion, regularisation and geometric view synthesis losses.\\nThe reviewers agree that the paper is well written and that the problem is well motivated. In particular, there is consensus that the 3D geometry and 2D instance representation should be considered jointly. However, due to the lack of technical novelty, the complexity of the final model, and the issues with the empirical validation of the proposed approach, we feel that the work is slightly below the acceptance bar.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"Many thanks for your careful review and helpful comments. Here\\u2019s our answer to the concerns you have raised:\\n\\n1a. \\\"I think the major argument I have is this method is lack of technical novelty, since it is straight forward to adopt the loss of Brabandere et.al 2017 to video cases for including pixels in the same group under ground truth tracking, and the self-supervised loss is exactly the same as previous methods.\\\"\\n\\nAlthough it might be straightforward to extend the loss of Brabandere et al. [1] to time, it is challenging to design an architecture that jointly integrates context from motion (3D Causal convolutions) and geometry (self-supervised depth estimation) to learn a spatio-temporal embedding that can consistently segment instances over time.\\n\\n1b. \\\"The fusion between depth and segments are relatively weak since it just ask the embedding to also decode depth, is there any further analysis of visual effect of explaining where the depth helps segments?\\\"\\n\\nIncorporating depth context greatly improves the quality of the embedding as shown in the three examples added in the Appendix (section A.3).\\nWe compare the outputs of the model trained with and without self-supervised depth estimation. 
For each figure, we have from left to right: RGB image, ground truth segmentation, predicted segmentation, embedding visualised in 2D, embedding visualised in RGB by projecting the three main components in the image space, and depth map.\\n\\n(i) Without depth, the car circled in red is wrongly tracked in frame 5 and 9, while our model correctly tracks it as the network has learned a consistent embedding based not only on appearance, but also on 3D geometry. Also, the RGB projection of the embedding from our model is considerably better and much more structured.\\n(ii) Without depth, the circled car merges into the red-segmented car, while our model does not as there is a significant difference in depth between the two cars.\\n(iii) The model without depth is not able to handle complete occlusion, while ours can.\\n\\n2. \\\"In the experiments, the baseline for comparison over MOTS is fairly old, and I think it makes sense to include the number of MOTS paper, which is currently hard to align with that shown in the paper. In Tab.2, the author only highlight the improved motion metric, while in per-frame AP the results are actually lower than the baselines. It also needs to be well explained.\\\"\\n\\nPlease refer to the general response (point 1 and 3).\\n\\n3. \\\"The paper claims `\\\"it generates temporal consistent segmentation \\\" (which is not guaranteed, maybe just statistically better but not exact).\\\"\\n\\nQuantitatively, we show that our model improves the baselines with IoU correspondence, and qualitatively we can see that the segmentation is temporally consistent as shown in the accompanying video: https://youtu.be/pqRPXRUlQ2I\", \"and_on_more_qualitative_video_examples_here\": \"https://drive.google.com/open?id=1u-kGxQEWIoC6FguUiXFHyxUcOiG2iIIf\\n\\n\\nReferences\\n[1] \\u201cSemantic Instance Segmentation with a Discriminative Loss Function\\u201d Bert De Brabandere, Davy Neven, Luc Van Gool.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your helpful comments and suggestions. Here\\u2019s our answer to the concerns you have raised:\\n\\n1. \\\"Comparisons to other video instance segmentation methods [1,2,3] are missing. The only comparison is done with a single-frame instance embedding method.\\\"\\n\\nWe have updated the paper to include the results of Track R-CNN [4], more details in the general response (point 1). The approach of Yang and Fan [1] is almost identical to Track R-CNN, as they also modify Mask R-CNN to include a Tracking head which assigns an identity vector to each detection. The implementation of Hu et al. [2] is not described in the paper, and there is no publicly available code associated. I have contacted the authors to ask for clarifications in the implementation of their model but they unfortunately haven\\u2019t responded yet. Hu et al. [3] introduced a new synthetic dataset for Video Object Segmentation, but did not propose a new model as they give benchmark results using Mask R-CNN. We have however added a comparison with Mask R-CNN and IoU correspondence to track instances.\\n\\nWe also updated our Related Work section to include [2] and [3].\\n\\n2. \\\"The authors propose to predict depth as an auxiliary task. However, they do not use the predicted depth at test time. This is a missed opportunity. Difference in depth might help in identifying instances. Also, it might be worthwhile to investigate if instance segmentation is helping depth prediction.\\\"\\n\\nWe have actually ran this experiment. 
In order to more explicitly use depth information, we concatenated the predicted depth map to the segmentation embedding and learned a new embedding from these features. However, the results did not differ from our model, as the shared representation (after the 3D Causal convolutions) between embedding and depth already encodes information from 3D geometry.\\n\\nWe agree that depth prediction and instance segmentation might benefit from each other, and it is one of our future research directions.\\n\\nReferences\\n[1] \\\"Video Instance Segmentation\\\" Linjie Yang and Yuchen Fan.\\n[2] \\\"MaskRNN: Instance Level Video Object Segmentation\\\" Yuan-Ting Hu, Jia-Bin Huang, and Alexander G. Schwing.\\n[3] \\\"SAIL-VOS: Semantic Amodal Instance Level Video Object Segmentation \\u2013 A Synthetic Dataset and Baselines\\\" Yuan-Ting Hu, Hong-Shuo Chen, Kexin Hui, Jia-Bin Huang, Alexander Schwing.\\n[4] \\u201cMOTS: Multi-Object Tracking and Segmentation\\u201d Paul Voigtlaender, Michael Krause, Aljosa Osep, Jonathon Luiten, Berin Balachandar Gnana Sekar, Andreas Geiger, Bastian Leibe.\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"Many thanks for the feedback. Here\\u2019s our answer to the concerns you have raised:\\n\\n1. \\u201cThe authors mention that scenes are assumed to be mostly rigid, and appearance change is mostly due to the camera motion. I would like to see more argument about this, as there are cases if this is obviously not true; for instance, human changes pose significantly. If we limit the range of discussion to some narrow domain, such as self-driving, this might be more valid, but we may want to see some discussion about validity of this assumption.\\u201d\\n\\nOur model learns depth in a self-supervised way using a photometric reconstruction loss, which operates under the assumption of a moving camera and a static scene. When this hypothesis does not hold true, for example when the camera is stationary or some scene objects are in motion, performance can rapidly deteriorate. During test time, objects that are typically seen in motion during training are then assigned an infinite depth value. To overcome this problem, we simply mask during training the pixels that do not change appearance from frame to frame (details in Appendix A.1). Since these pixels are often associated with objects moving at the same velocity as the camera, or to scenarios when the camera stops moving, this masking approach effectively removes the pixels that violates the rigid scene assumption.\\n\\nDuring inference however, our model is able to correctly predict the depth maps of moving objects: see some examples here https://drive.google.com/open?id=1u-kGxQEWIoC6FguUiXFHyxUcOiG2iIIf \\n\\n2. \\\"Some modules are not full explained in detail. For example, what is the background mask network? Which model was used, and how was it trained?\\\"\\n\\nThe background mask network is described in Appendix A.1: it is a ResNet network with a U-net structure and was trained on KITTI.\\n\\n3. \\\"In experiment, the proposed method shows nice score on MOTSA and sMOTSA, but all other metrics, it is on the worse side. 
The authors are encouraged to discuss more about the metrics and experimental results with the other metrics as well.\\\"\\n\\nPlease refer to the general response (point 3).\"}", "{\"title\": \"General response to the reviewers\", \"comment\": \"We would like to thank the reviewers for their feedback and helpful comments.\\n\\nWe wanted to emphasise that our work presents the first spatio-temporal embedding approach for Video Instance Segmentation. All the other existing methods (Hu et al. [1], Voigtlaender et al. [2], Yang and Fan [3]) follow the region proposal approach, i.e. region of interest detection followed by mask refinement and identification vector assignment to track objects. We believe this is a clear demonstration of novelty that is interesting to the ICLR community.\\n\\nWe propose a different paradigm, more grounded in the real world:\\n(i) Our method learns a spatio-temporal embedding integrating cues from appearance, motion and 3D geometry, which can naturally track instances over time, without any complex postprocessing.\\n(ii) Our network runs in real-time and online as our architecture is entirely causal \\u2012 we do not incorporate information from future frames as opposed to previous methods.\\n\\n\\nIn addition to addressing individual reviewer concerns, we have revised the paper and would like to highlight these improvements:\\n\\n1. We report the results of the approach used by Hu et al. [1], and Voigtlaender et al. (Track R-CNN [2]) using the implementation of the authors. We observe that our model is competitive even though a direct comparison with Track R-CNN is not possible as:\\n(i) Their model was pretrained on Cityscapes and Mapillary Vistas while our model was solely trained on KITTI.\\n(ii) Track R-CNN operates on future frames to predict the current segmentation, while our model is causal and only uses past and present frames. \\n\\nIt is possible to further improve our model by using a more powerful mask network, as the quality of the mask has a great influence on performance: for example when using the ground truth mask, our MOTSA metric goes from 0.612 to 0.804.\\n\\n2. We added qualitative examples that illustrate cases where learning 3D geometry is essential to disambiguate between objects, especially in complex scenarios such as partial or total occlusion (see the figures in Appendix A.3). We also observe that the embedding is much more structured when incorporating depth information.\\n\\n3. The static detection metrics (average precision, recall, precision) are evaluated image by image without taking into account the temporal consistency of instance segmentations. As the compared models (Without temporal model, Without depth, Ours) are all using the same mask network, they show similar performance in terms of detection. \\n\\nHowever, when evaluating performance on metrics that measure temporal consistency (MOTSA and sMOTSA), our best model shows significant improvement over the baselines.\\n\\n4. We updated our Related Work section to include the work of [1] and [4] on Video Object Segmentation.\\n\\n5.
We clarified how our model learns depth by specifying that during training we mask pixels that violate the rigid scene assumption in the photometric reconstruction loss (more details in Appendix A.1).\\n\\nAll modifications are in green in the paper.\\n\\n\\nReferences\\n[1] \\\"SAIL-VOS: Semantic Amodal Instance Level Video Object Segmentation \\u2013 A Synthetic Dataset and Baselines\\\" Yuan-Ting Hu, Hong-Shuo Chen, Kexin Hui, Jia-Bin Huang, Alexander Schwing.\\n[2] \\u201cMOTS: Multi-Object Tracking and Segmentation\\u201d Paul Voigtlaender, Michael Krause, Aljosa Osep, Jonathon Luiten, Berin Balachandar Gnana Sekar, Andreas Geiger, Bastian Leibe.\\n[3] \\\"Video Instance Segmentation\\\" Linjie Yang and Yuchen Fan.\\n[4] \\\"MaskRNN: Instance Level Video Object Segmentation\\\" Yuan-Ting Hu, Jia-Bin Huang, and Alexander G. Schwing.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents learning a spatio-temporal embedding for video instance segmentation. With spatio-temporal embedding loss, it is claimed to generate temporally consistent video instance segmentation. The authors show that the proposed method performs nicely on tracking and segmentation task, even when there are occlusions.\\n\\nOverall, this paper is well-written. Section 3 clearly explains the loss functions. The main idea is not very complex, but generally makes sense. The authors mention that scenes are assumed to be mostly rigid, and appearance change is mostly due to the camera motion. I would like to see more argument about this, as there are cases if this is obviously not true; for instance, human changes pose significantly. If we limit the range of discussion to some narrow domain, such as self-driving, this might be more valid, but we may want to see some discussion about validity of this assumption.\\n\\nSome modules are not full explained in detail. For example, what is the background mask network? Which model was used, and how was it trained?\\n\\nIn experiment, the proposed method shows nice score on MOTSA and sMOTSA, but all other metrics, it is on the worse side. The authors are encouraged to discuss more about the metrics and experimental results with the other metrics as well. Other than these, the experiment was well-designed and conducted.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper propose a video instance embedding loss for jointly tackling the instance tracking and depth estimation from self-supervised learning.\", \"pros\": \"\", \"1\": \"I think the major argument I have is this method is lack of technical novelty, since it is straight forward to adopt the loss of Brabandere et.al 2017 to video cases for including pixels in the same group under ground truth tracking, and the self-supervised loss is exactly the same as previous methods. 
The fusion between depth and segments is relatively weak since it just asks the embedding to also decode depth; is there any further analysis of the visual effect explaining where the depth helps segments?\", \"2\": \"In the experiments, the baseline for comparison over MOTS is fairly old, and I think it makes sense to include the numbers of the MOTS paper, which are currently hard to align with those shown in the paper. In Tab. 2, the authors only highlight the improved motion metric, while in per-frame AP the results are actually lower than the baselines. It also needs to be well explained.\", \"cons\": \"\", \"3\": \"The paper claims \\\"it generates temporal consistent segmentation\\\" (which is not guaranteed, maybe just statistically better but not exact).\\n\\nOverall, in my opinion I suggest it to be a workshop paper, but the contribution is somehow not significant for a major publication.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\nThe paper presents a method to learn an embedding space for each pixel in a video that indicates the instance id of the objects. They also propose an auxiliary loss based on depth prediction to improve performance. This can be used to both segment and track objects in videos.\\n\\nStrengths\\n1) The proposed approach is simple and general and handles the problem of occluding objects in videos. \\n2) Their causal convolution architecture will be useful for other problems in videos.\\n3) The authors perform a number of ablations to investigate how much each part of their solution contributes to the final performance.\\n4) The paper is well-written and well-motivated.\\n\\nWeaknesses\\n1) Comparisons to other video instance segmentation methods [1,2,3] are missing. The only comparison is done with a single-frame instance embedding method. \\n2) The authors propose to predict depth as an auxiliary task. However, they do not use the predicted depth at test time. This is a missed opportunity. Difference in depth might help in identifying instances. Also, it might be worthwhile to investigate if instance segmentation is helping depth prediction.\\n3) Experiments have been conducted on only one dataset. \\n\\n\\nReferences\\n[1] \\\"Video Instance Segmentation\\\" Linjie Yang and Yuchen Fan.\\n[2] \\\"MaskRNN: Instance Level Video Object Segmentation\\\" Yuan-Ting Hu, Jia-Bin Huang, and Alexander G. Schwing.\\n[3] \\\"SAIL-VOS: Semantic Amodal Instance Level Video Object Segmentation \\u2013 A Synthetic Dataset and Baselines\\\" Yuan-Ting Hu, Hong-Shuo Chen, Kexin Hui, Jia-Bin Huang, Alexander Schwing.\"}" ] }
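Both reviews above and the rebuttal turn on extending the discriminative embedding loss of De Brabandere et al. (2017) from single frames to tracked instances across frames. Below is a minimal sketch of what such a spatio-temporal pull/push loss could look like, assuming a PyTorch setting with ground-truth track ids; the function name, the margins `d_var`/`d_dist`, and the equal weighting of the two terms are illustrative assumptions, not taken from the paper:

```python
import torch

def spatio_temporal_embedding_loss(emb, track_ids, d_var=0.5, d_dist=1.5):
    """Sketch of a De Brabandere-style loss pooled over time.

    emb:       (N, D) pixel embeddings, stacked over all frames of a clip.
    track_ids: (N,)   ground-truth instance/track ids; 0 marks background.
    Assumes at least two foreground tracks are present in the clip.
    """
    means, pull_terms = [], []
    for k in track_ids.unique():
        if k == 0:  # skip background pixels
            continue
        e_k = emb[track_ids == k]      # pixels of track k from *any* frame
        mu_k = e_k.mean(dim=0)
        # Pull: pixels of one track (across frames) toward a single mean.
        margin = (e_k - mu_k).norm(dim=1) - d_var
        pull_terms.append(torch.clamp(margin, min=0).pow(2).mean())
        means.append(mu_k)
    pull = torch.stack(pull_terms).mean()
    mus = torch.stack(means)           # (K, D): one mean per track
    # Push: keep means of different tracks at least 2 * d_dist apart.
    dist = torch.cdist(mus, mus)
    off_diag = dist[~torch.eye(len(mus), dtype=torch.bool, device=mus.device)]
    push = torch.clamp(2 * d_dist - off_diag, min=0).pow(2).mean()
    return pull + push
```

Pooling a track's pixels from all frames into one cluster is what makes the embedding temporally consistent by construction, which is exactly the property debated in this thread.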
Hkla1eHFvS
Efficient Exploration via State Marginal Matching
[ "Lisa Lee", "Benjain Eysenbach", "Emilio Parisotto", "Erix Xing", "Sergey Levine", "Ruslan Salakhutdinov" ]
Reinforcement learning agents need to explore their unknown environments to solve the tasks given to them. The Bayes optimal solution to exploration is intractable for complex environments, and while several exploration methods have been proposed as approximations, it remains unclear what underlying objective is being optimized by existing exploration methods, or how they can be altered to incorporate prior knowledge about the task. Moreover, it is unclear how to acquire a single exploration strategy that will be useful for solving multiple downstream tasks. We address these shortcomings by learning a single exploration policy that can quickly solve a suite of downstream tasks in a multi-task setting, amortizing the cost of learning to explore. We recast exploration as a problem of State Marginal Matching (SMM), where we aim to learn a policy for which the state marginal distribution matches a given target state distribution, which can incorporate prior knowledge about the task. We optimize the objective by reducing it to a two-player, zero-sum game between a state density model and a parametric policy. Our theoretical analysis of this approach suggests that prior exploration methods do not learn a policy that does distribution matching, but acquire a replay buffer that performs distribution matching, an observation that potentially explains these prior methods' success in single-task settings. On both simulated and real-world tasks, we demonstrate that our algorithm explores faster and adapts more quickly than prior methods.
[ "reinforcement learning", "exploration", "distribution matching", "robotics" ]
Reject
https://openreview.net/pdf?id=Hkla1eHFvS
https://openreview.net/forum?id=Hkla1eHFvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "RY6KFk3nBC", "SJey_N93oB", "r1gUcfZnsH", "HkeC5QqjjH", "rJl4Cf9soB", "ryxOfbqioB", "Sye85e9jsS", "BklTj6E6tB", "Ske-6qohFB", "SJgrEg0sKr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739939, 1573852262829, 1573814925580, 1573786518117, 1573786316434, 1573785871696, 1573785742433, 1571798437355, 1571760825038, 1571704876814 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2079/Authors" ], [ "ICLR.cc/2020/Conference/Paper2079/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2079/Authors" ], [ "ICLR.cc/2020/Conference/Paper2079/Authors" ], [ "ICLR.cc/2020/Conference/Paper2079/Authors" ], [ "ICLR.cc/2020/Conference/Paper2079/Authors" ], [ "ICLR.cc/2020/Conference/Paper2079/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2079/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2079/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper provides a nice approach to optimizing marginals to improve exploration for RL agents. The reviewers agree that its improvements w.r.t. the state of the art do not merit a publication at ICLR. Furthermore, additional experimentation is needed for the paper to be complete.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for reading the response!\\n\\n1) *Soundness of fictitious play*: Our convergence result assumes that the optimization for each player be solved exactly at each step. In our implementation, we approximated this by taking one gradient step at each time step. Note that this is the same approximation made in the GAN literature. For example, the analysis in Section 4 of [Goodfellow 2014] assumes that the discriminator is optimal.\\n\\n2) *Comparison with IRL*: State Marginal Matching does not require that there exist a policy that can exactly match the target distribution. We do assume (Assumption 1) that the _density_ model is fit exactly to the policy. We emphasize that this is the same assumption made in GAIL and AIRL. In our experiments, we used 1e4 expert states to train GAIL (we also updated Table 3 and Appendix D.5).\\n\\nMoreover, we want to reiterate that our main contribution is not a new algorithm: the idea of matching state marginals was discussed in prior work (e.g., [1]), and (as we show in Section 4) was done implicitly in exploration methods based on predictive error. Our paper contributes an understanding of precisely what these previous exploration methods are doing, and makes the connection between exploration and distribution matching explicit. In some sense, our experiments actually were a false flag, suggesting that we were proposing a new method. The main aim of our experiments was to show that, since all these prior works are approximately optimizing the same objective, all should perform comparably. Our experiments show that State Marginal Matching is a reasonable exploration objective on both simulated and real-world control tasks. \\n\\n3) *State-action distributions*: Yes, there are tasks where exploration in action space is important (e.g., tasks with very large action spaces). 
Yes, State Marginal Matching is trivial to extend to state-action distributions: we simply modify the policy update (Eq 5) to use $\\log p^*(s, a)$ instead of $\\log p^*(s)$.\\n\\n[1] \\\"Provably Efficient Maximum Entropy Exploration\\\", Hazan et al., ICML 2019. http://proceedings.mlr.press/v97/hazan19a/hazan19a.pdf\"}", "{\"title\": \"Thanks for the revision\", \"comment\": \"Thank you for your reply and the additional experiments.\\n\\n1) Unfortunately you did not comment on my notes regarding the soundness of the derivation based on fictitious play.\\n\\n2) Regarding the relation between some imitation learning approaches and state distribution matching: I do understand that imitation learning assumes access to samples from a distribution whereas SMM assumes access to a desired distribution (potentially in the form of a reward function). However, apart from this, I do not see any major difference compared to the setting of some imitation learning algorithms.\\nNote that these methods (GAIL, AIRL, etc.) are derived for minimizing a divergence and do not necessarily assume that a divergence of zero is achievable. AIRL even minimizes the (approximately) same divergence as SMM. I believe that evaluating these methods on the SMM scenario is straightforward by (approximately) sampling from the desired distribution. I suggested a similar experiment to the one that you added. However, I would have used a sampler to obtain samples from p* so that both methods approximately optimize the same target distribution (or reward function). Of course, using an off-the-shelf sampler to obtain samples would lead to additional computational overhead. However, depending on the reward function and the required sample accuracy, the overhead might be negligible, and you do not evaluate w.r.t. computational time anyway. Your additional experiments also do not seem to provide the number of demonstrations that you provided to GAIL (when using synthetic demonstrations we can and should use a large number of samples to avoid overfitting).\\n\\n3) Regarding 4: I understand that you can maximize state-action entropy. My question was:\\n\\\"SMM only considers matching state distributions. If I understand correctly, the approach could be easily generalized to state-action distributions, correct? Wouldn't it make sense for some tasks to also explore in the action space?\\\"\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your detailed response. Per your suggestion, we have added a comparison to state-of-the-art exploration baselines on the real-world manipulation task (Fig 4b). We have also added a comparison with GAIL on both the simulated and real-world manipulation experiments (Fig 3c, 4b). GAIL makes slightly different assumptions than State Marginal Matching, and we explain how we compare the methods on a level footing in Appendix D.2. In both cases, State Marginal Matching outperforms all baselines, including GAIL.\\n\\nWe address your questions below:\\n\\n1. There might be a misunderstanding here: the exploration baselines have access to exactly the same information as SMM. The only supervision given to both is the environment reward (see Appendix D.1). SMM interprets this reward as the log probability of a target distribution.\\n\\n2. MaxEnt RL is maximizing entropy in action space. As shown in Figures 2 and 4, exploration in state space, as done by SMM, is substantially more effective.
While SMM is taking a target state distribution and returning a policy, inverse RL is doing the opposite, taking an expert policy and returning a distribution over trajectories. \\n\\n3. You are correct that, in the navigation setting, it is more realistic to consider the setting where goals are observed by the agent. The aim of the navigation setting we constructed was not to be as realistic as possible, but rather to create a testbed for exploration where we can parametrically vary the difficulty of exploration. The key property of hard exploration tasks is that states with high reward (i.e., the goal) is not known apriori, so the agent must explore to find these high-reward states. The navigation environment is designed to mimic these dynamics.\\n\\n4. The navigation task used a purely sparse reward, while the simulated and real-world manipulation environments used a combination of dense and sparse rewards (see Appendix D.1). All of these tasks used in our experiments are challenging precisely because of the sparse components of their rewards.\\n\\n5. We will clarify when in the training process each of the figures was created. Fig 2c was created by just considering the goals visited during the initial exploration phase. Fig 3b shows performance during the task-specific adaptation phase, after all task-agnostic exploration had taken place. Fig 3c and 3d show performance after the task-agnostic exploration phase, but before any task-specific adaptation. Fig 4b shows performance after the task-agnostic exploration phase, but before any task-specific adaptation. Fig 4c shows performance during the task-agnostic exploration phase.\\n\\n6. We likewise expect SAC to get worse as the number of hallways increases. Since all of the SAC numbers are within 2 standard deviations, we attribute the slight rise to random fluctuations. \\n\\n7. We estimated the density of data x as $p(x) \\\\approx decoder(\\\\hat{x} = x | z=encoder(x))$. That is, we encoded x to z, reconstruction $\\\\hat{x}$ from z, and then took the likelihood of the true data x under a Gaussian distribution centered at the reconstructed $\\\\hat{x}$. We have clarified this in Appendix D.2.\\n\\n8. In Appendix D.3 (\\\"Historical Averaging\\\") we discussed how the results change if we sampled checkpoints uniformly vs. if we sampled later checkpoints more frequently. We found that uniform sampling worked less well, possibly due to the policies at early iterations not being trained enough.\\n\\n9. As shown in Fig 4a, in the initial state, the fingers were not placed in the middle of the knob spaces, but rather were close to the knob on one side than the knob on the other side. We attribute the difference in variance to this asymmetry in the initial position.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for the detailed review and suggestions for improvement. We believe that we have already included the comparison to state-of-the-art exploration methods for simulated manipulation (Figure 3). We have also incorporated your suggestion of including the exploration baselines on another task (real-world manipulation in Figure 4b), and including GAIL comparisons to both simulated and real-world manipulation results (Figures 3c, 4b), as noted in the \\\"Revisions Summary\\\" comment above. 
We also think there may be a misunderstanding about the connection with imitation learning, which we will discuss below.\\n\\nImitation learning: We think there might be a misunderstanding regarding the relationship between State Marginal Matching and imitation learning. You are correct in noting that both State Marginal Matching and many imitation learning methods (e.g., GAIL, AIRL) maximize the same objective (a KL divergence between state marginals). However, these methods have different requirements: State Marginal Matching (and other exploration methods) require a reward function, while imitation learning methods require expert trajectories. It is unclear to us how a fair comparison could be done, though we would welcome any suggestions. We have added GAIL results for Manipulation (Fig 3b, 3c) and D'Claw (Fig 4b), and also included an ablation study of GAIL (Fig 6) to explain the different variations of GAIL that we've tried.\\n\\nMoreover, this connection between exploration and distribution matching is, in fact, a significant part of our contribution: prior exploration methods do perform approximate distribution matching. More precisely, we show in Section 4 that a large class of exploration bonuses (those based on prediction error) are all maximizing the same objective: marginal state entropy. Our work makes this connection explicit, and explains how state distribution matching can be performed properly. This observation is useful precisely because many of the underlying ingredients, such as adversarial games and density estimation, have seen recent progress and therefore might be adopted to improve exploration methods.\\n\\nOther Clarifications to the Reviewer's Questions:\\n\\n1. VAEs: We estimated the density of data x as $p(x) \\\\approx decoder(\\\\hat{x} = x | z=encoder(x))$. That is, we encoded x to z, reconstructed $\\\\hat{x}$ from z, and then took the likelihood of the true data x under a Gaussian distribution centered at the reconstructed $\\\\hat{x}$. We have clarified this in Appendix D.2.\\n\\n2. Extrinsic Reward: Yes, all exploration methods have access to exactly the same information and the same extrinsic reward (see Appendix D.1 for details). SMM interprets this reward as the log probability of a target distribution.\\n\\n3. Yes, density estimation is challenging, though it continues to serve as a foundation for many exploration methods (e.g., pseudo-counts). By showing that exploration is really a problem of density estimation, our work allows progress on density estimation to be made useful for exploration.\\n\\n4. State-Action Entropy: In fact, our approach does maximize the state-action entropy. The state-action entropy factors as H[a|s] + H[s]. The first term, H[a|s], is maximized by MaxEnt RL methods (e.g., SAC), while the latter is maximized with the fictitious play procedure.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your detailed response. We aim to convince you that, while our paper shares some elements with [Hazan et al], our paper makes a substantial contribution on top of this prior work. We have also added exploration baselines on the real-world manipulation task (Fig. 4b), and observe that our method outperforms these baselines. (See Revisions comment below.)\\n\\nSimilarities: Similar to [Hazan et al], our paper suggests that a KL divergence between some target state distribution and the policy state marginal distribution be used as an objective for exploration. The algorithms we introduce are quite similar.
Consider the special case of [Hazan et al] where the objective R(d) is a KL divergence, KL(d || Q). The reward function suggested by [Hazan et al], $\\nabla KL(d || Q)$, is $\\log Q(s) - \\log d_\\pi(s) + 1$, which is the same reward that we use (the additive constant 1 does not affect the behavior of the optimal policy).\\n\\nDifferences: Our main contribution comes in situating State Marginal Matching in relation to prior work on exploration. More precisely, we show in Section 4 that a large class of exploration bonuses (those based on prediction error) are all maximizing the same objective: marginal state entropy. Said another way, the objective suggested by [Hazan et al] (Section 3.1) was already being (approximately) optimized by existing methods! Nonetheless, our experiments show that an algorithm designed to explicitly maximize this objective (as introduced by [Hazan et al] and ourselves) performs slightly better at maximizing this objective.\\n\\nOur main contribution is not a new algorithm: the idea of matching state marginals was discussed in prior work (e.g., [Hazan et al]), and (as we show) was done implicitly in exploration methods based on predictive error. Our paper contributes an understanding of precisely what these prior exploration methods are doing. In some sense, our experiments actually were a false flag, suggesting that we were proposing a new method. The main aim of our experiments was to show that, since all these prior works are approximately optimizing the same objective, all should perform comparably. Our experiments show that State Marginal Matching is a reasonable exploration objective on both simulated and real-world control tasks. To the best of our knowledge, our paper is the first to successfully apply an entropy-based exploration algorithm to a real-world robot.\\n\\nRevisions: We have updated the discussion of Hazan in Section 2 (paragraph 2) and have added a note on the similarities in Section 3 (paragraph 4). We believe these revisions address the concern that Hazan was not reviewed thoroughly enough.\"}", "{\"title\": \"Author Response: Revisions Summary\", \"comment\": \"We thank the reviewers for their constructive feedback. Following some of the reviewers' suggestions, we have added the following experimental results:\\n\\n1) We added exploration baseline results (ICM, Pseudocounts, Count) on the real-world manipulation task (Fig 4b). We note that SMM visits a wider range of states than other exploration baselines. We summarize the hyperparameter sweeps for each exploration method in Table 3. \\n\\n2) We added a GAIL comparison for simulated (Fig 3c) and real-world (Fig 4b) manipulation experiments. GAIL makes slightly different assumptions than State Marginal Matching (e.g., GAIL requires expert trajectories), and we explain how we compare the methods on a level footing in Appendix D.2. In particular, we used states sampled from p*(s) to train GAIL. We also tried restricting the GAIL discriminator input to particular state dimensions (e.g., object position), and also tried different state sampling distributions.
Out of these, we used the best GAIL model to compare against the exploration baselines in Figure 3c and 4b.\", \"other_changes\": [\"3) We fixed typos regarding the state dimensions of Manipulation & D'Claw tasks (Appendix D.1, Table 1):\", \"ManipulationEnv has state dimension 25, not 10 as previously stated.\", \"D'Claw has state dimension 12 and action dimension 9, not 2 for both.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"### Summary\\n\\nThis paper proposes to optimize the state marginal distribution to match a target distribution for the purposes of exploration. This target distribution could be uniform or could encode prior knowledge about downstream tasks. This matching can be done by iteratively fitting a density model on the historical data from the replay buffer, and training a policy to maximize the log density ratio between the target distribution and the learned density model. Experiments are performed on two domains: a simulated manipulation task and a real robotic control task. Overall, the paper is well-written.\\n\\n\\n### Review\", \"recommendation\": \"weak reject for the reasons below. The main reason is that this paper ignores very similar prior work which is not properly credited.\\n \\n\\nThe algorithm proposed here is very similar to the algorithm proposed in [1]:\\n- the objective proposed in equations (1) and (2) is the same as the second objective in Section 3.1 of [1]. \\n- Algorithm 1 here is almost identical to Algorithm 1 in [1]\\n\\nThe work of [1] is only briefly mentioned in the related work section, and from the description there seems to be a fundamental misunderstanding of it. \\nIt says \\\"their proposed algorithm requires an oracle planner and an oracle density model, assumptions that our method will not require\\\".\\n\\nMaking oracle assumptions is a tool for proving theoretical results, not a feature of an algorithm. An oracle can be any subroutine that one has reason to believe works reasonably well, and how well it works or not is captured in its accuracy parameter (usually \\\\epsilon).\\nThey are used to break down a more complex algorithm into simpler subroutines (called oracles), and deriving a guarantee on the complex algorithm in terms of the quality of the oracles. \\nFor example, [1] assumes a density estimation oracle, which could be instantiated as a kernel density estimator, a VAE, count-based density estimation in the tabular case, etc. \\nIt also assumes a planning oracle, which could be instantiated using any method for learning a policy (PPO, SAC, policy iteration, etc), or some search method if the environment is deterministic. \\nThe accuracy of the oracles are reflected in the \\\\epsilon_0 and \\\\epsilon_1 parameters, which then show up the guarantee for theorem 1. \\n\\nTheorem 1 of [1] also shows that the entropy of the policy mixture (i.e. replay buffer) matches the maximum entropy over the policies in the policy class, which is one of the main theoretical claims of the work here. \\n\\nGiven this, I don't see this paper as making any new algorithmic or theoretical contributions. On the other hand, [1] had a very limited empirical evaluation and it would be valuable to have a more thorough empirical investigation of this type of method in the literature. 
This paper partially does that in the sense that they apply more modern methods (VAEs rather than counts/kernel density estimators) on more complex tasks (a simulated manipulation task and a real robot), and their experiments seem well-executed with proper comparisons. However, since the primary contribution of this paper seems to be empirical, I don't think the current experiments on two domains are enough. \\n\\nI think this paper could be fine for publication with a fairly significant rewrite placing it in the context of prior work, and expanding the experimental section. \\nMy suggestions are to add experiments on several other continuous control tasks (Mujoco/Roboschool) as well as hard exploration Atari games (Montezuma, Freeway, Pitfall etc), to see how well the density estimation works in pixel domains (and the effect of historical averaging). I would be willing to raise my score if these changes can be made within the rebuttal period. \\n\\n\\n[1] \\\"Provably Efficient Maximum Entropy Exploration\\\", Hazan et al., ICML 2019. http://proceedings.mlr.press/v97/hazan19a/hazan19a.pdf\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThe paper presents a method to learn an embedding space for each pixel in a video that indicates... [sic] The paper proposes to frame exploration in reinforcement learning as a distribution matching problem. More specifically, the proposed method (SMM) aims to minimize the reverse KL between the state distribution induced by the policy and a desired state distribution. The desired state distribution can be used to guide exploration (e.g. by penalizing bad states) or can be chosen uniformly (resulting in a policy that maximizes state entropy).\\nSMM iteratively optimizes the policy and a model for its induced state distribution. The latter is used for approximating the policy's state entropy. The algorithm is framed as a two-player zero-sum game between a \\\"policy player\\\" and a \\\"density player\\\". In order to justify that one of the players is assumed fixed while optimizing the other player, SMM optimizes the players against a historical average of the opponent. Such an approach is known as fictitious play in game theory and ensures convergence to a Nash equilibrium.\\nThe density player maximizes the likelihood of the states encountered during all previous roll-outs using a VAE. The policy update is framed as a standard reinforcement learning problem where the log-pdf of the target distribution serves as reward and the log-pdf of the model acts as cost. For practical reasons, the policy update does not consider an historical average but only uses the most recent density model. As the learned exploratory policy should correspond to an historical average, exploration for the downstream task is achieved by sampling one of the learned policies at the beginning of each roll-out. The historical averaging also benefits prior exploration methods.\\nNotably, the appendix also describes a modification that learns a mixture of policies in each iteration, where a discriminator approximates the responsibilities for each component, which is used as additional reward to divide the search space among the components.
SMM is compared to prior exploration methods, ICM and Pseudo-Counts, and standard SAC, on a simulated robotic box-pushing task, and against SAC on a simulated point-mass problem and a real-robot valve turning task.\", \"significance\": \"Efficient exploration is arguably the main challenge of reinforcement learning, as providing shaped reward functions is difficult even for experts and may lead to undesired behavior. I think that maximizing the state entropy (or matching target state distributions if available) is a sound objective for learning exploratory policies, and the paper could, thus, be of relatively broad interest.\", \"novelty\": \"Maximizing the state entropy for exploration has already been used by (Hazan et al., 2018, reference from manuscript). However, in contrast to this prior work, SMM does not need to learn oracles that predict optimal state-distributions/policies for any given policies/reward functions. While distribution matching is a common approach to imitation learning, it has been little employed for manually specified distributions. Still, a similar objective has been used to replace reward functions by desired distributions in an RL-like setting [1] (not for exploration). However, their approach is quite restricted by assuming Gaussian target distributions.\", \"soundness\": \"If I understand correctly, fictitious play assumes optimal responses in order to provably converge to a Nash equilibrium. The paper fails to provide stopping criteria for the optimization steps of the individual players; however, I assume that only a few gradient steps are used for practical reasons. Hence, I am not sure whether the actual algorithm can be justified by the theory.\\n\\nThe paper mentions that VAEs are used to model the state distribution. Given that VAEs are not likelihood-based, I do not understand how the reward term log(q(s)) can be computed.\", \"clarity\": \"The paper is well-written and the structure seems fine. \\nI think that the density estimation should be better discussed. Models of the state distribution of the policy are often highly desirable, not only for optimizing its entropy, but also, for example, for importance sampling. However, modeling the state distribution is also inherently difficult--especially for large state spaces.\", \"experiments\": \"I like that the paper uses a real robot experiment, and the ablations with respect to the historical averaging are interesting. Unfortunately, the paper only compares to standard SAC on the hallway task and on the real robot task. I would not consider the entropy regularization of SAC a proper baseline for a proposed exploration technique.\", \"questions\": \"Is the same extrinsic reward, i.e. log(p*), also provided to the other exploration methods?\\n\\nSome imitation learning methods such as GAIL are also able to match desired state-distributions. I think that these methods could in principle also be applied to the setting considered in the manuscript by using samples from the desired distribution as demonstrations. The paper briefly mentions that IRL methods are not applicable because they require access to trajectories; however, the discriminator of GAIL is only trained on individual state samples. I also do not see a problem with providing unachievable demonstrations to such imitation learning methods because, just like SMM, they would try to minimize the divergence.
I think that GAIL would actually be an important baseline for the proposed method.\\n\\nHow does the method scale with the dimensions of the state? SMM has only been evaluated on relatively low-dimensional problems (compared to some rllab/mujoco tasks with >100 states). I would assume that obtaining meaningful density estimates in such settings might be problematic. May imitation learning methods based on discriminators actually be more promising?\\n\\nSMM only considers matching state distributions. If I understand correctly, the approach could be easily generalized to state-action distributions, correct? Wouldn't it make sense for some tasks to also explore in the action space?\", \"decision\": \"I like the paper in general. The optimization problem seems well-motivated and the algorithm seems reasonably fine. I also like that the paper includes a real robot experiment. However, I am not convinced by the derivation of SMM based on fictitious play, and I think that it should be better evaluated with respect to existing exploration methods and also with respect to imitation learning methods. I am therefore slightly leaning towards rejection, currently.\", \"typo\": \"\\\"Note[sic] only does SMM explore a wider range of angles than SAC, but its ability to explore increasing[sic] throughout training, suggesting that the SMM objective is correlated with real-world metrics of exploration.\\\"\\n\\n[1] Arenz, Oleg, Hany Abdulsamad, and Gerhard Neumann. \\\"Optimal control and inverse optimal control by distribution matching.\\\" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Update: I thank the authors for their response and I think the added baselines and more in-depth discussion of prior work have improved the paper. However, given the limited novelty of the technical contribution, I believe the experimental section should be further extended (i.e. add a wider variety of domains and make thorough comparisons to relevant methods) in order to draw more general and robust conclusions.\", \"summary\": \"This paper proposes to tackle the problem of exploration in RL, with a focus on learning an exploration policy that can be used for a variety of different tasks. They introduce a formal exploration objective that promotes efficient exploration and provides a mechanism for injecting prior knowledge about the task. They also design a practical algorithm to maximize the exploration objective. Finally, they empirically show that their method can outperform other SOTA exploration methods on challenging exploration tasks for locomotion and manipulation, both in simulation and in the real world.\", \"main_comments\": \"I\\u2019ve found the mathematical formulation to be sound and the empirical evaluation convincing. Overall, the paper is clearly written and the authors are quite transparent about the assumptions made. In addition, the problem of learning exploration strategies that are task-agnostic and force the agent to effectively explore within each episode (since the goal or task is not observed) is an important problem for RL and perhaps a more realistic setting than the single fixed-task one.
However, I believe some important methodological details are missing from the paper and the empirical evaluation can be improved. In particular, the paper would be more convincing if it contained comparisons against SOTA exploration (e.g. curiosity-driven, pseudo-counts, noisy-networks etc.) and inverse reinforcement learning (e.g. GAIL) methods for all the environments. Such baselines are completely missing in the Navigation and Real-World tasks. \\n\\nHowever, as the authors note, most baselines used for comparison have been designed specifically to learn from sparse rewards in single task settings and do not have any direct mechanisms for including priors or learning to explore well for any task from some distribution. So I wonder if it\\u2019d make sense to include baselines that do make use of prior knowledge such as IRL (i.e. GAIL) or some other state-matching approach. Those could be more appropriate and powerful baselines. \\n\\nAnother potential disadvantage of this method seems to be the need for a prior, which may be difficult to design or even lead to suboptimal policies if it is not well designed. However, as the authors note, it is still a weaker requirement than having access to demonstrations for example and the prior could potentially be learned from human preferences / restricted feedback. \\n\\nOther Comments / Questions:\\n\\n1. SMM uses prior information about the task in the form of the target distribution. Given this, I am worried that the baselines have a clear disadvantage. Did you do anything to provide the baselines with the same type of prior knowledge about the task? It would be useful to see how they would compare if they had access to the task prior (in some way) as well. \\n\\n2. Can you provide more insights into how this differs from variants of MaxEntRL and InvRL? Both analytically and in practice. I believe a more extended discussion of this would be valuable for readers and would alleviate some of the concerns regarding the novelty of this contribution and how its place in the broader literature.\\n\\n3. In the Navigation environment, how would the results change if the goal were visible (i.e. part of the agent\\u2019s observation)? I believe that most baselines would consider that scenario and it would be interesting to see whether the qualitative conclusions hold or not in that case. I would expect other exploration methods to be faster in that case.\\n\\n4. I also wonder if perhaps the reward is actually not that sparse in some of these tasks but because it is not visible, it makes the problem much harder for the baselines, which were designed to deal with very sparse reward. Can you comment on the reward sparsity in these tasks?\\n\\n5. At the top of page 2, you mention that there is a training phase in which the agents learn to optimize the exploration objective and at test time, it is trained with extrinsic reward. Can you please clarify on how these stages reflect in the results and what is the regime used for the other baselines? Are they also pretrained on a variety of tasks with only their exploration bonus / intrinsic reward and then fine-tuned with extrinsic reward?\\n\\n6. In Figure 2 (c), why is it that the gap between SMM and SAC decreases as the number of halls increases? This seems counterintuitive and I would\\u2019ve expected to increase since I do not see why SAC would get better and SMM would get worse. \\n\\n7. How do you learn the density model? 
You mention the use of a VAE but the details of how this is trained are not specified.\\n\\n8. On page 5 before section 4, you mention that you approximate the historical average of the density model with the most recent iterate. Can you include ablations on how good this approximation is and how the results change if you were using the historical average instead?\\n\\n9. In Figure 4 (b), SMM\\u2019s variance of the positive value of the angle differs significantly from the negative one. This strikes me as counterintuitive. Do you have any intuition on why that is?\"}" ] }
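The exchanges in this record pin down the two computational pieces of SMM precisely enough to sketch them: the policy reward $\log p^*(s) - \log q(s)$ (the KL-gradient reward also credited to Hazan et al. above), and a VAE used as a rough density surrogate by scoring a state under a Gaussian centered at its reconstruction. A minimal PyTorch sketch following those descriptions is given below; the class and function names and the noise scale `sigma` are assumptions for illustration, not the authors' code:

```python
import torch

class VAEDensitySurrogate:
    """Crude log-density estimate as described in the thread:
    encode s -> z, decode z -> s_hat, then score s under N(s_hat, sigma^2 I).
    encoder/decoder are assumed deterministic modules; a full ELBO-based
    estimate would be a natural alternative."""
    def __init__(self, encoder, decoder, sigma=1.0):
        self.encoder, self.decoder, self.sigma = encoder, decoder, sigma

    def log_prob(self, s):                       # s: (batch, state_dim)
        s_hat = self.decoder(self.encoder(s))
        normal = torch.distributions.Normal(s_hat, self.sigma)
        return normal.log_prob(s).sum(dim=-1)    # log q(s), up to constants

def smm_reward(s, log_p_star, density):
    """Exploration reward r(s) = log p*(s) - log q(s). With a uniform
    target p*, this reduces to a state-entropy bonus of -log q(s)."""
    with torch.no_grad():
        return log_p_star(s) - density.log_prob(s)
```

Per the discussion of historical averaging above, q would be refit on all states in the replay buffer at each iteration, and the exploration policy deployed downstream is a mixture over past policy snapshots.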
ryl3ygHYDB
Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
[ "Sejun Park*", "Jaeho Lee*", "Sangwoo Mo", "Jinwoo Shin" ]
Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental results demonstrate that the proposed method consistently outperforms magnitude-based pruning on various networks, including VGG and ResNet, particularly in the high-sparsity regime. See https://github.com/alinlab/lookahead_pruning for codes.
[ "network magnitude-based pruning" ]
Accept (Poster)
https://openreview.net/pdf?id=ryl3ygHYDB
https://openreview.net/forum?id=ryl3ygHYDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Hg7ELoo9Oy", "SJx4Fh7nsB", "S1xq-3m2sB", "rklT2imhoB", "H1gzqsX2iH", "SJe7b2Hjir", "BkgBN_OqsH", "rkg5cECtsH", "HklMrm5tiH", "HygqMm9tsr", "BklvCf5YsH", "SylR_f9tir", "Syl2AZ5KjH", "BkgrU-5tiS", "HJlG50bnqr", "Bkgyu_mq5r", "S1eyoo0F9B", "SJla8mLA_r" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739908, 1573825659900, 1573825537776, 1573825461292, 1573825417653, 1573768186754, 1573713964789, 1573672082336, 1573655353988, 1573655314258, 1573655246560, 1573655157889, 1573654996441, 1573654861436, 1572769417761, 1572644966878, 1572625303074, 1570820948635 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/Authors" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2077/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces a pruning criterion which is similar to magnitude-based pruning, but which accounts for the interactions between layers. The reviewers have gone through the paper carefully, and after back-and-forth with the authors, they are all satisfied with the paper and support acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary of revisions.\", \"comment\": \"Dear reviewers,\\n\\nWe express our deepest gratitude for your constructive feedback and incisive comments on our manuscript.\\n\\nIn response to the questions and concerns you raised, we have carefully revised and enhanced the manuscript with the following additional experiments and discussions.\\n\\n- Optimal brain damage and data-dependent LAP variants (Appendix F),\\n- Computational costs of LAP (Appendix G),\\n- Channel pruning with lookahead cost (Appendix H),\\n- Tiny-ImageNet dataset with VGG, ResNet, and WRN (Section 3.3, Appendix C),\\n- Global Pruning (Appendix I),\\n- Whole-network-LAP (Appendix J),\\n- Performance comparisons with MobileNet (Appendix K).\\n\\nThe revisions made are marked with \\u201cblue\\u201d in the revised manuscript.\\n\\nWe also appreciate your continued effort to provide further feedback until the very end of response/discussion phase. 
We will make sure to reflect the comments in the final version.\\n\\nThanks,\\nAuthors.\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"Dear reviewer,\\n\\nWe are truly grateful for taking the time to provide additional recommendations and acknowledge our efforts until the very last day of the response/discussion phase.\\n\\nThe comments on the benefits of data-agnostic pruning are completely to-the-point.\\nWe will continue our efforts to enhance the final version with additional experiments on mask transfer and OBD.\\n\\nThanks,\\nAuthors.\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"Dear reviewer,\\n\\nWe are happy to hear that our response was satisfactory for you. We also appreciate your valuable time and efforts to help us improve our manuscript.\\n\\nThanks,\\nAuthors.\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"Dear reviewer,\\n\\nThank you for your continuing effort to provide constructive feedback until the end of the response/discussion phase.\\n\\nTo best respond to your feedback, we will continue our effort to include non-vision experimental results in our final manuscript, with the same level of rigor (5x duplication) as done in the vision experiments.\\n\\nThanks,\\nAuthors.\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"I really appreciate the authors' effort during the rebuttal, and most of my concerns are addressed well.\\n\\nHowever, I still have some comments about the responses:\\n\\n- Such data-agnostic approaches have the following clear advantages.\\n(1) I don't think 'ease of computation' will be a good reason for using MP/LAP rather than OBD in practice, because, for pruning, we only care about how it speeds up inference time and the accuracy of the pruned network. If OBD can achieve much better performance than MP/LAP, there is no reason to use MP/LAP rather than OBD just because OBD is expensive to compute.\\n(2) I think the data-agnostic argument is valid, but it needs more experiments to support it. Currently, all the experiments are conducted in the data-dependent setting. By data-dependent, I mean all the results are obtained by pruning a pre-trained network by either MP or LAP, and then fine-tuning the network on the *original dataset*, which means we still need access to the original training data. I would suggest the authors complete the experiments as done in Morcos et al., to show whether LAP can still be performant in the data-agnostic setting. This will provide strong evidence for your claimed advantages.\\n\\n- Evaluation with OBD\\nThanks for completing the OBD experiments; I am pretty satisfied with your related responses. I would suggest the authors move the results into the main paragraph. This will be a stronger baseline than MP for people working on pruning. (Clearly, OBD is much better than MP and LAP, but seems worse than LAP-act.)\\n\\n- LFP vs. LBP\\nIt makes sense, thanks.\\n\\n- Further experiments.\\nThanks for including the experiments on Tiny-ImageNet; it makes the results more convincing.\\n\\nThanks so much for your response, and the great efforts to address my concerns. I am pretty satisfied with it. I will increase my rating to 6.
\\nIn the meantime, I strongly recommend the authors include the data-agnostic experiments in the paper to support your argument, and also include OBD in each table instead of just for FCN on MNIST and Conv-6 on CIFAR-10.\"}", "{\"title\": \"Thanks for addressing my comments\", \"comment\": \"Dear authors,\\n\\nThanks for the efforts in providing additional results and discussions to address my comments. \\n\\nI appreciate the efforts in demonstrating the efficiency of the method and the comparison to more efficient baselines.\\n\\nI will keep the rating for now. If the authors have the time to present convincing results on non-vision tasks (e.g. NLP), I will raise the score to 7.\\n\\nThanks.\"}", "{\"title\": \"Thank you for addressing my concerns.\", \"comment\": \"I've read your response and the updated paper. I'm quite satisfied with your response.\\n\\nI recommend accepting the paper, though I keep my original rating (if there were an option of 7, I would definitely increase my rating).\"}", "{\"title\": \"Response to R3\", \"comment\": \"We sincerely appreciate your valuable comments, efforts and time. We are grateful for all positive comments: easy to implement (by R2 and R5), good empirical performance (by R3 and R5), good write-up (by you, R2 and R4) and novelty (by R4). In the revised manuscript, we have updated or newly added material (Section 2, Section 3, Appendix C, E, F, G, H, I, J, K) according to the reviewers\\u2019 comments and colored it blue. We address each comment in detail, one by one, below.\\n\\n# Major Suggestions\\n(Q1) Computational complexity of LAP.---------------------------------------------------------------------\\n\\n(A1) We thank the reviewer for pointing this out; this is indeed an important issue to be addressed. As the reviewer has correctly indicated, most computationally heavy terms can be reused; indeed, the computation of the lookahead distortion can be done tensor-wise instead of being computed separately for each connection.\\n\\nTo persuade the readers further, we have recorded the run-time of pruning operations for models appearing in the submission. Pruning VGG-19 with an Intel [email protected] processor, MP takes approximately 0.9 seconds and LAP takes approximately 1.2 seconds, which is negligible compared to the retraining time. More generally, we observe that the computational overhead from computing the lookahead distortion is less than 10% of the computational cost of MP. More computations are required to handle batch normalization layers, but the added computing time did not exceed 50%.\\n\\nThe results, with a more detailed explanation, have been added to Appendix G of the revised manuscript.\\n\\n(Q2) Experiments on other domains.-------------------------------------------------------------------------\\n\\n(A2) Due to requests from reviewers, most of the added experiments focus on a more rigorous assessment of LAP on larger datasets and models for computer vision tasks. We will continue to add more experiments on different tasks, as time permits.\\n\\n(Q3) Narrower/Shallow net baseline.-------------------------------------------------------------------------\\n\\n(A3) We compared the CIFAR-10 classification performance of VGG-16, VGG-19, and ResNet-18 pruned by MP/LAP with the performance of MobileNetV2 [1], which is a small-sized network with only 2.2M parameters. We observe that the pruned models exhibit better performance compared to MobileNetV2 under a smaller number of parameters.
The detailed results are summarized in Appendix K of the revised manuscript.\\n\\n(Q4) Takeaway message from the tables.-------------------------------------------------------------------\\n\\n(A4) Thank you for pointing this out. The main messages that we intended to deliver are as follows:\\n1) Looking ahead a layer helps improve the accuracy of the pruned models over MP.\\n2) Sequential methods help stabilize the performance of LAP, which helps improve the performance at extreme sparsity levels. For larger models with batch-norm, however, this advantage is not always present.\\nFollowing the reviewer\\u2019s suggestion, we have updated the text of Section 3 to make these points clearer.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n# Minor suggestions.\\nWe thank the reviewer for making various suggestions to improve our manuscript. We have updated the manuscript accordingly.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n[1] Sandler et al., \\u201cMobileNetV2: Inverted Residuals and Linear Bottlenecks\\u201d, CVPR 2018\"}", "{\"title\": \"Response to R4\", \"comment\": \"We sincerely appreciate your valuable comments, efforts and time. We are grateful for all positive comments: easy to implement (by R2 and R5), good empirical performance (by R3 and R5), good write-up (by you, R2 and R3) and novelty (by you). In the revised manuscript, we have updated or newly added material (Section 2, Section 3, Appendix C, E, F, G, H, I, J, K) according to the reviewers\\u2019 comments and colored it blue. We address each comment in detail, one by one, below.\\n\\n(Q1) Comparison of computational cost-----------------------------------------------------------------\\n\\n(A1) We thank the reviewer for pointing this out. As the reviewer noted, LAP requires more computation than MP. To confirm our initial claim that the overhead is not prohibitively significant, we ran and timed 100 trials of MP and LAP on some neural network models appearing in the manuscript. It turns out that the computation of lookahead distortion introduced less than 10% overhead over magnitude pruning, with the help of popular tensor-handling frameworks. Handling batch normalization layers required additional computation, but the overhead did not exceed 50% in this case as well.\\n\\nThe results, with a more detailed explanation, have been added to Appendix G of the revised manuscript.\\n\\n(Q2) OBD / Lookahead for OBD-----------------------------------------------------------------------------\\n\\n(A2) Thank you for suggesting a comparison to OBD. In our revised manuscript, we now\\n1) distinguish the data-agnostic pruning schemes (MP and LAP) from data-dependent schemes (e.g. OBD) more explicitly (in the revised introduction and Section 2),\\n2) provide two data-dependent variants of LAP: one based on your suggestion of looking ahead with Hessian-based scores (coined OBD+LAP) and another based on the \\u201cactivation probability\\u201d described in the previous manuscript (in a newly added Appendix G), and\\n3) empirically evaluate the two variants and compare their performance/computation with optimal brain damage, implemented with the recently released \\u201cBackPACK\\u201d package [1] (detailed in Appendix G).\\n\\nAs the reviewer expected, we observed that the OBD+LAP variant performs better than OBD.
Indeed, we also report that another variant based on activation probability works even better, without having to calculate Hessian diagonals via back-prop.\\n\\n(Q3) LAP in non-linear networks.---------------------------------------------------------------------------\\n\\n(A3) We agree with the reviewer\\u2019s concern that the fact that we use the same algorithm for deep linear networks and deep nonlinear networks, was not crystal clear in our previous version of the manuscript. Following the reviewer\\u2019s suggestion, we have added more detailed explanations in Section 2.1 (Section 2.2. has been merged to Section 2.1, with sending some less relevant materials to appendices).\\n\\n(Q4) More experiments on different datasets.----------------------------------------------------------\\n\\n(A4) The scalability of our algorithm is indeed an important issue. We have conducted additional experiments on Tiny-ImageNet (also averaged over 5 trials), and report the results in Section 3.3.\\n\\nWe would continue to add more experimental results, as time permits.\\n\\n-------------------------------------------------------------------------------------------------------------------------\\n\\n[1] Anonymous, \\u201cBackPACK: packing more into backprop,\\u201d under review for ICLR 2020.\"}", "{\"title\": \"Response to R2 (2/2)\", \"comment\": \"(Q2) Global ranking----------------------------------------------------------------------------------------------\\n\\n(A2) In the initial manuscript, the main reason for using the fixed pruning ratio (identical to the setup in Frankle and Carbin [2] except for FCN) was to ensure fairness in comparing MP with LAP. However, to resolve the reviewer\\u2019s concern, we additionally implemented and tested global pruning algorithms based on LAP. The experimental results suggest that LAP could indeed be extended to global pruning (Appendix I of the revised manuscript). Our global pruning schemes outperform MP and optimal brain damage (OBD) [1] baselines, respectively, while OBD score is already computed non-locally. In addition, we note that further tuning of layerwise pruning ratio could improve the performance of LAP and its variants.\\n\\n(Q3) Structured pruning----------------------------------------------------------------------------------------\\n\\n(A3) We thank the reviewer for pointing out the possibility of using LAP criterion for structured pruning. As the reviewer noted, structured pruning is known to be an effective strategy to provide a speedup in network inference.\\n\\nFollowing the reviewer\\u2019s suggestion, we have conducted channel pruning experiments using LAP criterion to replace the magnitude criterion. It turns out that LAP-based channel pruning also outperforms the na\\u00efve magnitude baseline, coherent to our findings in unstructured pruning. The detailed results can be found in Appendix H of the revised manuscript. \\n\\n(Q5) ImageNet-----------------------------------------------------------------------------------------------------\\n\\n(A5) Scalability to bigger datasets is certainly an important issue to be addressed. As the reviewer suggested, we have conducted additional experiments (with five independent trials) with Tiny-ImageNet dataset to confirm the benefits of LAP over MP once again. 
The results are added to Section 3.3.\\n\\nWe would continue to add more experiments under various setups, as time permits.\\n\\n---------------------------------------------------------------------------------------------------------------------------\\n+) Thank you for making constructive suggestions as a note, which helped us improve the manuscript.\\n\\n[1] LeCun et al., \\u201cOptimal brain damage,\\u201d NIPS 1989.\\n[2] Frankle and Carbin, \\u201cThe lottery ticket hypothesis: finding sparse, trainable neural networks,\\u201d ICLR 2019.\"}", "{\"title\": \"Response to R2 (1/2)\", \"comment\": \"We sincerely appreciate your valuable comments, efforts and time. We are grateful for all positive comments: easy to implement (by you and R5), good empirical performance (by R3 and R5), good write-up (by you, R3 and R4) and novelty (by R4). In the revised manuscript, we have updated or newly added (Section 2, Section 3, Appendix C, E, F, G, H, I, J, K) according to the reviewers\\u2019 comments and colored them blue. We address each comment in detail, one by one as below.\\n\\nBelow, we respond to some of the questions/concerns raised by the reviewer.\\n\\n(Q1) Residual networks----------------------------------------------------------------------------------------\\n\\n(A1) We deeply respect your concern about residual connections. On the other hand, we would like to emphasize that LAP still makes over 15% relative accuracy gain over MP on ResNet-18 (Table 4, at 0.36% sparsity level), even without adding complicated mechanisms to account for residual connections.\\n\\nAlso, we note that more experimental results with ResNet and WRN trained on Tiny-ImageNet data have been added in Table 6-7, 11-12 of the revised manuscript, where LAP consistently outperforms MP.\\n\\n(Q6) Activations after non-linearities----------------------------------------------------------------------\\n\\n(A6) To better respond to this question, we have followed the reviewer\\u2019s suggestion to implement the extension of LAP suggested in Section 2.2. based on the activation probability of the neurons. As an estimation of the activation probability requires an additional use of training dataset, we also implemented optimal brain damage [1] for a fairer comparison.\\nThe algorithm and related experiments are explained in Appendix F of the revised manuscript.\\n\\nWe observe that accounting for nonlinearities of ReLU dramatically improves the performance of LAP methods to provide a better accuracy than the OBD baseline. On the other hand, we note that \\u201cestimating the nonlinearities\\u201d require additional knowledge (and computations) about the data domain beside the trained model, as OBD does. Conversely, assuming linearity can be thought of as accounting for a lack of knowledge about the data. To make this point clear, we revised Section 2.2.\\n\\n(Q4) Baselines-----------------------------------------------------------------------------------------------------\\n\\n(A4) As the reviewer indicated correctly, our primary focus was to provide a better understanding on the magnitude-based pruning methods (which is known to show performance comparable to state-of-the-art methods), by taking a principled perspective toward MP to deduce a better alternative. 
We have revised introduction and section 2 to make this point clearer.\\n\\nIn addition, we have added Appendix F to the revised manuscript devoted to a discussion of LAP under the setup where training data is available (as described in the response of 6), and made explicit experimental comparisons with optimal brain damage [1].\\n\\n---------------------------------------------------------------------------------------------------------------------------------\\n\\n[1] LeCun et al., \\u201cOptimal brain damage,\\u201d NIPS 1989.\"}", "{\"title\": \"Response to R5 (2/2)\", \"comment\": \"(Q3) Credit to Dong et al. (2017)-----------------------------------------------------------------------------\\n\\n(A3) We thank the reviewer for noting the related work of Dong et al. [6]. We fully agree that the work should be highly accredited for being the first (up to our knowledge) to highlight the role of Frobenius norm of Jacobian matrices for distortion analysis. Still, we remark that our approach steps further by explicitly considering inter-layer dependencies to derive a novel scoring method. We have added a reference to [6] in Section 2.\\n\\n(Q4) Layerwise pruning ratio.--------------------------------------------------------------------------------\\n\\n(A4) The main reason for using the fixed pruning ratio (identical to the setup in Frankle and Carbin [7] except for FCN) was to ensure fairness in comparing MP with LAP. As the reviewer points out, the performance-compression tradeoff of LAP could be improved if more careful sensitivity analyses have taken place, as LAP focuses on local structures only.\\n\\nHowever, we report following experimental results suggesting that such tuning is not always required if one\\u2019s aim is to improve over baselines. In Appendix G of the revised manuscript, we extend LAP and its variant using training data (as introduced in response to 1&2), to a global pruning scheme. In our experiment, LAPs with global pruning outperform corresponding MP and OBD baselines. Nevertheless, further tuning of the layerwise pruning ratio could improve the performance of LAP and its variants.\\n\\n\\n(Q5) LFP vs. LBP.-------------------------------------------------------------------------------------------------\\n\\n(A5) We are glad that the reviewer pointed this out; it is an intriguing observation indeed. We interpret the phenomenon as follows: Whenever the sparsity level is low, the importance of carefully curating the input signal is not significant due to a high redundancy in natural image signals and the corresponding features from over-parametrized models. On the other hand, the prediction based on given features is more directly affected by the layers closer to the output. This causes a relatively smaller gain by looking backward than forward in the low-sparsity regime. When the sparsity level is high, the input signal is scarce, and the relative importance of preserving the input signal is greater. We added notes on this intuition in Section 3.2.\\n\\n(Q6) Entire network as an operator block.---------------------------------------------------------------\\n\\n(A6) We thank the reviewer for proposing an interesting variant of LAP. We have implemented and tested this version under MNIST + 5-layer MLP (5 trials total). To our surprise, the whole-networks-as-a-block variant performed slightly worse than LAP. The experimental results are added in Appendix J of the revised manuscript. 
We suspect this is because we are ignoring the effect of all nonlinearities and bias terms, which may accumulate over a deep stack of layers.\\n\\n(Q7) Further experiments.-------------------------------------------------------------------------------------\\n\\n(A7) We have performed additional experiments on Tiny-ImageNet dataset on VGG-19, ResNet-50 and WRN-16-8, and have included the results in Section 3.3. Coherent to the previous observations on smaller datasets, the advantages of LAP over MP have been confirmed for Tiny-ImageNet dataset as well.\\n\\nWe would continue to add more datasets and network architectures as time permits.\\n\\n-------------------------------------------------------------------------------------------------------------------------\\n\\n[6] Dong et al., \\u201cLearning to prune deep neural networks via layer-wise optimal brain surgeon,\\u201d NeurIPS 2017.\\n[7] Frankle and Carbin, \\u201cThe lottery ticket hypothesis: finding sparse, trainable neural networks,\\u201d ICLR 2019.\"}", "{\"title\": \"Response to R5 (1/2)\", \"comment\": \"We sincerely appreciate your valuable comments, efforts and time. We are grateful for all positive comments: easy to implement (by you and R2), good empirical performance (by you and R3), good write-up (by R2, R3 and R4) and novelty (by R4). In the revised manuscript, we have updated or newly added (Section 2, Section 3, Appendix C, E, F, G, H, I, J, K) according to the reviewers\\u2019 comments and colored them blue. We address each comment in detail, one by one as below.\\n\\n(Q1) Magnitude-based methods vs. Hessian-based methods.----------------------------------------------\\n\\n(A1) We agree with the reviewer\\u2019s point: Hessian-based methods provide a more direct analysis of loss, while our approach based on Frobenius distortion could only provide an upper bound on the loss by distortion.\\n\\nOn the other hand, a distinguishing property of MP/LAP is its data-agnostic nature. Indeed, distortion-based approach could be interpreted minimizing a worst-case loss without any knowledge on the training data.\\n\\nSuch data-agnostic approaches have the following clear advantages.\\n- Ease of computation: While OBD is known to be efficiently computable via back-propagation, magnitude-based methods are much faster in general. We have implemented OBD and recorded its runtime in the newly added Table 15. Comparing to the test time of MP/LAP (presented in the newly added Table 16), OBD requires more than x200 computation time, even with using a recently introduced \\u201cBackPACK\\u201d package [1] designed for an efficient computation of Hessian diagonal (and using additional GPU).\\n- Flexibility: Data-agnostic approaches can be flexibly used in the problem setups where we do not have additional access to the data that was used to train the model. For instance, Morcos et al. [2] recently studied a \\u201ctransfer\\u201d of subnetwork discovered by MP to a relevant dataset, followed by training in the target domain. 
Given a trained model from a source domain only, data-agnostic methods (including LAP) can be applied for such tasks without having to access a source domain dataset, unlike their Hessian-based counterparts.\\n\\n(Q2) Evaluation with OBD--------------------------------------------------------------------------------------------\\n\\n(A2) We strongly agree with the reviewer\\u2019s concern that recent papers claiming the near-optimality of magnitude-based methods (usually via dynamic reconnection methods of Zhu and Gupta [3]) often lack a direct comparison to OBD.\\n\\nTo this end, we have implemented and tested OBD on FCN and Conv-6 in Appendix F of the revised manuscript. From the experiments, we make the following observations.\\n- As LeCun et al. [4] claimed, na\\u00efve magnitude-based pruning underperforms OBD. \\n- Somewhat surprisingly, LAP can be refined to outperform OBD by using the training dataset. Drawing inspiration from the comments of Reviewer#2, we designed an LAP variant taking into account the activation probabilities (a.k.a. APoZ [5]) of each neuron, as previously described in Section 2.2 of the initial manuscript. This variant provides better test accuracies than OBD, especially in the high-sparsity regime. The details are presented in Appendix F of the revised manuscript.\\n\\nFinally, we emphasize that our study on LAP is aimed toward a better understanding of why magnitude-based, data-agnostic pruning methods perform sufficiently well, instead of claiming that it is state-of-the-art. To make this point clearer, we have toned down the descriptions appearing in the introduction.\\n\\n\\n------------------------------------------------------------------------------------------------------------------------------------\\n\\n\\n[1] Anonymous, \\u201cBackPACK: packing more into backprop,\\u201d under review for ICLR 2020.\\n[2] Morcos et al., \\u201cOne ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers,\\u201d to appear in NeurIPS 2019.\\n[3] Zhu and Gupta, \\u201cTo prune or not to prune: exploring the efficacy of pruning for model compression,\\u201d ICLR 2018 workshop.\\n[4] LeCun et al., \\u201cOptimal brain damage,\\u201d NIPS 1989.\\n[5] Hu et al. \\u201cNetwork trimming: a data-driven neuron pruning approach towards efficient deep architectures,\\u201d arXiv 2016.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"[Summary]:\\nThis paper interprets the underlying objective of magnitude pruning(MP) as minimizing the Frobenius distortion of a single layer. Then the authors provide a motivating example to show that MP may cause a large Frobenius distortion due to ignoring the inter-layer interactions. 
Based on this observation, the authors propose a simple modification to MP by explicitly enforcing to minimize the Frobenius distortion of an operator block consisting of multiple linear layers, and demonstrate better performance than MP on CIFAR10 and MNIST datasets.\\n\\n[Pros]:\\n- The proposed algorithm is simple and easy to implement.\\n- The empirical results show that the proposed method beat MP consistently, and in particular for high sparsities.\\n- The ablation study about LAP, LFP and LBP is interesting.\\n\\n[Cons & Questions]:\\n(1) Minimizing Frobenius distortion is not meaningful, and it only minimizes the change in the output to the first order. Moreover, I don\\u2019t think minimizing the change in the output is as meaningful as minimizing the increase in training error as is done in Hessian-based pruning methods, e.g., Optimal Brain Damage (OBD) ( LeCun et al. 1989). My reason is that it is possible that the output changes a lot, but the training error still remains low after pruning.\\n\\n(2) Can the authors elaborate what are the advantages of MP/LAP over Hessian-based pruning, such as OBD? OBD only needs the diagonal Hessian matrix and is also tractable, and MP is only a special case of OBD when the Hessian is an identity matrix. I am not quite convinced MP can achieve state-of-the-art performance, and also Gale et al. (2019) did not include any Hessian-based pruning algorithm into comparisons. Therefore, it would be great if the authors can provide more justifications for why MP/LAP is advantageous to Hessian-based pruning methods, e.g., OBD. Besides, I would be happy to see the authors can include OBD as a baseline in the experiments.\\n\\n(3) The interpretation of the objective of MP as minimizing the Frobenius distortion is well-known, and more general results are already presented in Dong et al. (2017). The authors should discuss it in the main paragraph and give the corresponding credits to Dong et al. (2017).\\n\\n(4) Why do you need to specify the pruning ratio for each layer manually? It makes MP or LAP hard to use in practice, and it usually needs expert knowledge to specify the pruning ratios for different layers. For Hessian-based methods, it reflects the change in loss and thus can be used to automatically determine the pruning ratio at each layer. \\n\\n(5) In table 1 and table 2, LFP is better than LBP when pruning ratio is low, while LBP becomes better for high pruning ratios. Is there any explanations?\\n\\n(6) My understanding of the LAP is that it tries to minimize the Frobenius distortion of the input-output Jacobian matrix of the operator block. In the paper, the operator block consists of 3 consecutive linear layers. I am curious about what is the performance if we treat the entire network as an operator block?\\n\\n(7) The experiments are only conducted on MNIST and CIFAR-10, which are overly simple. Further experiments on larger datasets will make the paper stronger and the results more convincing. Anyway, this is not a big issue, but I encourage the authors can test the proposed method on more challenging datasets and make fair comparisons.\\n\\nOverall, my rating is largely due to the concerns of (1)&(2).\\n\\n[References]:\\nY. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, 1989.\\nT. Gale, E. Elsen, and S. Hooker. The state of sparsity in deep neural networks. arXiv preprint 1902.09574, 2019.\\nDong, Xin, Shangyu Chen, and Sinno Pan. 
\\\"Learning to prune deep neural networks via layer-wise optimal brain surgeon.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"*Summary*\\nThe paper proposes a multi-layer alternative to magnitude-based pruning. The operations entailed in the previous, current, and subsequent layers are treated as linear operations (by omitting any nonlinearities), weights are selected for pruning to minimize the \\\"Frobenius distortion\\\", the Frobenius norm of the difference between products of the (i-1, i, i+1)-layer Jacobians with and without the selected weight. This simplifies to a cost-effective pruning criterion. In spite of the simplistic linear setting assumed for the derivation, results show the criterion prunes better than weight-based methods at unstructured pruning of a variety of modern architectures with CIFAR-10, particularly excelling at higher sparsity.\\n\\n*Rating*\\nThe paper has some clear positives, particularly:\\n + Clear writing and formatting\\n + Simple method\\n + Good structure for the experimental analysis (with 5x replications!)\\nHowever there are a few limitations, noted below; while none is fatal on its own, in total the limitations have led me to recommend \\\"weak reject\\\" currently.\", \"limitations_of_the_method\": \"(1) Residual networks: The lack of an explicit strategy for handling residual connections (and the accompanying worsened relative performance) is a notable limitation since residual/skip connections are nearly universal in state-of-the-art large networks. The performance was shown to still be *slightly* better than with magnitude pruning.\\n(2) Global ranking: Since connections are pruned layerwise, rather than taking the best-k neurons across the entire network at once, I assume that the LAP pruning criterion doesn't scale reasonably across layers. This implies that the method cannot be used to learn network structure. Instead the user must decide the desired number of neurons at each layer.\\n(3) Structured pruning: There is no mention of pruning entire convolutional kernels or \\\"neurons\\\" at once, so I assume that only individual weights were pruned. Since structured pruning is the simplest way to achieve speedup in network inference (as opposed to merely reduction in model size), how does the LAP criterion perform when adapted for structured pruning, e.g. removing filters/neurons with the best average LAP score?\", \"limitations_of_the_experiments\": \"(4) Baselines: While the paper is explicitly focused on an easy to compute replacement for magnitude-based pruning, there are a wide variety of alternative methods available. These vary in complexity, runtime, etc., but they deserve mention and either explicit comparison in the experiments or reasoning to justify the omission of such comparisons.\\n(5) ImageNet: (Insert obligatory statement about the ubiquity of ImageNet experiments, ...) While it is cliche to request ImageNet experiments and CIFAR-10 is a helpful stand-in, they would be really nice to have.\\n(6) Activations after non-linearities: While Fig. 
3 and the remaining experiments present a reasonable case that the presence of non-linearities doesn't prevent LAP from improving upon magnitude-based pruning, it doesn't resolve the issue either. Whether considering negative values clipped by with ReLU or large magnitude values that are squashed by sigmoid and tanh, the linear-only model is a poor approximation for some unknown fraction of neurons for probably most inputs. Does this mean that LAP is underperforming in those cases? Are those cases sufficiently rare or randomly distributed that they are merely noise? Is there another mechanism at play? In practical terms, how much does the activation rate (positive for ReLU, linear/unsquashed for sigmoid/tanh) vary by neuron? This seems like a reasonably simple thing to compute and incorporate into pruning.\\n\\n*Notes*\\nEq. (4): Is (4) simply the one-step/greedy approximation to the optimization in (3)? If so, it may be helpful to state this explicitly. Also, is $w = W_i[j,k]$? If so, this is useful to explicitly state.\\nSec 2.1: Consider noting that the linear-model setup is used to construct the method, but non-linearities are addressed subsequently\\nSec 2.2: Is the activation probability p_j used in practice, or is it merely an explanatory device?\", \"pg5\": \"note that residual connections are discussed in the experiments?\", \"tables_3_6\": \"note that these all use CIFAR-10\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new magnitude-based pruning method (and a few variants) by extending the single-layer distortion minimization problem to multi-layer cases so that the correlation between layers is taken into account. Particularly, the authors take into account the weight tensors of neighboring layers in addition to the original layer. The proposed algorithm looks promising and interesting. Empirically, the authors show that the proposed method consistently outperforms the standard magnitude-based pruning method.\\n\\nOverall, the paper is well-written and I think the algorithm is novel. Therefore, I've given the score of 6.\", \"comments\": \"(1) It seems obvious that the proposed method would increase the computation cost, but the authors didn't give any discussion or results on that. \\n(2) Although the main focus of the paper is magnitude-based pruning, I think the authors should include one baseline of Hessian-based pruning methods for comparison. As I know, the computation overhead of Hessian-based methods (e.g., OBD) is relatively small for the networks used in this paper. In particular, Hessian-based pruning methods can also be interpreted as a distortion minimization problem but in a different space/metric. So I wonder if the authors can extend LAP to Hessian-based pruning methods.\\n(3) The authors introduced LAP with deep linear networks. However, the details of LAP in non-linear network are missing. I encourage the authors to fill in the details in section 2.2 in the next revision.\\n(4) Currently, all experiments are done on CIFAR-10 dataset. I wonder if the author can include one more dataset. For example, comparison between MP and LAP on Tiny-ImageNet or even ImageNet. 
As I know, experiments of ImageNet can fit into a 4-GPU server for magnitude-based methods.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed Lookahead Pruning (LAP), a new method for model weight pruning to generate neural networks with sparse weights. The authors interpret the conventional magnitude pruning (MP) as pruning each layer individually while the proposed lookahead method considers neighboring layers' weights. Specifically, the proposed methods prune weights which introduce small distortion to the Jacobian matrix of 3 consecutively connected (linear) layers; and the conventional magnitude pruning can be viewed as a degenerated/special case of the LAP. The primary contributions of the paper are: 1) the authors propose the LAP method for fully connected layers; 2) they present empirical applications/extensions to models with non-linear-activations and batchnorms (which are two important components of modern neural networks); 3) The authors empirically show that LAP (and its variants such LAP-forward and LAP-forward-seq) can generate sparse models with better test accuracy than MP, across fully-connected network (FCN), and Conv models (such as VGG and ResNet) on MNIST and CIFAR dataset.\\n\\nI think the method in this paper is well motivated both mathematically (minimizing distortions of Jacobian) and intuitively (considering multiple layers holistically). The empirical benefits of the proposed method is properly validated against MP in various dataset and models. Also the paper is well written. Thus I give weak accept and I am willing to raise the score if convincing clarification on the following questions can be provided in author responses and in the future draft:\\n\\n1. As the LAP method introduces higher computation overhead in pruning weights, I was wondering how it compares to MP in terms of run-time-efficiency. As LAP requires computing a score for each single weight value (though some of the computationally heavy terms can be reused), it is important to discuss how long does LAP pruning take, comparing to the run-time of retraining (after pruning). This will help further evaluate the empirical efficiency of the LAP method.\\n\\n2. The experiment focus on CNNs while the authors advocating versatility of the methods. Thus I was wondering how LAP performs on models in other domains, such as transformer-based NLP models.\\n\\n3. (Relatively minor). The experiment purely focused on comparing pruning methods. To demonstrate the empirical merits of LAP, I think it will be more convincing to also compare with naive baselines of using narrow / shallower networks such as in Sohoni et al.[1]. This will demonstrate that pruning itself and LAP as an instantiation of pruning should be considered over these naive narrow / shallower baselines with the same amount of weight parameters as pruned models.\\n\\n4. The tables (table 1-5) are massive but the take-away message is not crystal clear in the text or captions of the table. Is the take-away message something like 1) one might want to use LAP-forward(backward) over LAP when you have very high sparsity, and 2) the sequential versions can further enhance the performance?\", \"minor_suggestions_to_improve_the_paper\": \"1. Line 5 and 6 in Algorithm 1 is confusing. 
I suppose the authors mean selecting the weights which yield a small value of L to zero out. To me, the current lines 5 and 6 do not directly reflect this.\\n\\n2. It might be good to provide a proof of Equation (5) in the appendix; it took me quite a few minutes to verify it. Providing a proof can help readers read more smoothly.\\n\\n3. The order of the content could be slightly reorganized, e.g. why talk about the adaptation of LAP to batchnorm after you discussed all the directional and sequential variants of LAP?\\n\\nReference\\n[1] Low-Memory Neural Network Training: A Technical Report. Sohoni et al.\"}" ] }
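The exchanges in the record above turn on the lookahead distortion score that generalizes magnitude pruning to a three-layer block. As a rough illustration of that idea, a minimal sketch follows; it assumes plain fully connected layers treated as linear (ignoring activations, biases, and batch norm, as the rebuttals note), uses NumPy, and the function names are our own rather than the authors' released code.

import numpy as np

def lookahead_scores(W_prev, W, W_next):
    # Score each weight of the middle layer W (shape: out x in). Magnitude
    # pruning ranks weights by |W[j, k]| alone; the lookahead variant also
    # weighs each weight by the norm of the connections feeding input unit k
    # (row k of W_prev) and leaving output unit j (column j of W_next),
    # approximating the Frobenius distortion of the linear block
    # W_next @ W @ W_prev when that single weight is zeroed out.
    in_norms = np.linalg.norm(W_prev, axis=1)    # one norm per input unit k
    out_norms = np.linalg.norm(W_next, axis=0)   # one norm per output unit j
    return np.abs(W) * np.outer(out_norms, in_norms)

def prune_lowest(W, scores, sparsity):
    # Zero out the fraction `sparsity` of weights with the lowest scores.
    k = min(W.size - 1, int(sparsity * W.size))
    threshold = np.partition(scores.ravel(), k)[k]
    return W * (scores >= threshold)

Setting both norm vectors to all-ones recovers plain magnitude pruning, and computing the scores tensor-wise like this keeps the overhead over MP small, consistent with the timings quoted in the responses.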
ryxnJlSKvr
SCELMo: Source Code Embeddings from Language Models
[ "Rafael - Michael Karampatsis", "Charles Sutton" ]
Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al. (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.
[ "Transfer Learning", "Pretraining", "Program Repair" ]
Reject
https://openreview.net/pdf?id=ryxnJlSKvr
https://openreview.net/forum?id=ryxnJlSKvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "_bSbiPBqP", "Ske3FZ93ir", "rkgtqN6ujS", "S1l5gN6Oor", "rJl-4yTOjH", "Hyly6N2g5r", "SJgzrYrRFH", "rke1ZgtatB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739879, 1573851524491, 1573602449242, 1573602290069, 1573601064592, 1572025526764, 1571866938486, 1571815415002 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2076/Authors" ], [ "ICLR.cc/2020/Conference/Paper2076/Authors" ], [ "ICLR.cc/2020/Conference/Paper2076/Authors" ], [ "ICLR.cc/2020/Conference/Paper2076/Authors" ], [ "ICLR.cc/2020/Conference/Paper2076/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2076/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2076/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper improves DeepBugs by borrowing the NLP method ELMo as new representations. The effectiveness of the embedding is investigated using the downstream task of bug detection.\", \"two_reviewers_reject_the_paper_for_two_main_concerns\": \"1 The novelty of the paper is not strong enough for ICLR as this paper mainly uses a standard context embedding technique from NLP.\\n2 The experimental results are not convincing enough and more comprehensive evaluation are needed. \\n\\nOverall, this novelty of this paper does not meet the standard of ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Paper Updated\", \"comment\": \"We would like to thank all reviewers for their feedback and insightful comments.\\n\\nWe would like to inform the reviewers that we have revised our submission to include a new section where we discuss whether the idea to add bug-introducing changes to a code dataset has practical usefulness for bug-finding. In the same section we also measure performance on a small dataset of real bugs, which we mined. We would be grateful if you could take a look at this and consider whether this improves your judgement about this submission.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for the feedback and your insightful comments.\\n\\nAlthough the results might be somewhat unsurprising we believe that this work can offer empirical evidence for the effectiveness of this kind of techniques in a new domain, for which there is neither empirical evidence nor pretrained models. We also offer insight in the paper why these techniques would be a good fit for it.\\n\\nThe method can still report bugs in the code even when it achieves 100% accuracy in the synthetic evaluation, because we can still rank code locations by the probability that they contain a bug type, thus obtaining a ranked list of the most suspicious locations in unseen code. Also, we know from other work that this particular bug type (Wrong Binary Operator) is actually fairly rare in practice (we keep this in the evaluation to compare to DeepBugs), so it is not that surprising that the classifier does not identify clear instances of the bugs.\\n\\n Furthermore, we cannot make the strong assumption that misclassifying more instances means that we\\u2019ll find more real bugs as there is no guarantee that the misclassified locations are indeed bugs. Especially, since for industrial tools such as Google\\u2019s Tricorder (Sadowski, 2015) a false positive rate of less than 10% is enforced. 
As a consequence, in an industrial setting it would be preferred to use bug detectors with high precision, as this will result in more trustworthy tools for developers due to less overhead.\\n\\nWe also think that showcasing the practical usefulness of the technique and exploring whether the idea of bug-introducing changes is effective in practice is a very good idea. We will look into this.\\n\\nWe will fix all minor issues.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for the feedback and your insightful comments.\\nRegarding the issues that you highlight:\\n1. The novelty in this work comes from exploring whether contextual embeddings would be effective in this new domain, to which similar techniques have not been applied in the literature.\\n2. This is a good suggestion. We will definitely take it into account for future work.\\n3. Although this is a reasonable thought, we are not aware of pre-trained BERT embeddings for code in the literature. Unfortunately, training a BERT model from scratch is currently infeasible in academia.\\n4. The training and validation data are available. We will release the test data through an institutional repository DOI. In order not to break anonymity, we did not include this in the current version of the paper. The code is already in a private GitHub repository; we\\u2019ll make it public upon acceptance.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for the feedback and your insightful comments.\\nEvaluating the performance of the method on real bugs is a great suggestion. We will look into this.\\nCompilers are indeed great at spotting syntactic errors; the proposed approach can go beyond that and detect semantic errors that a compiler would be unable to catch. We\\u2019ll make that clearer in the paper.\\nWe will fix the table indices and update Listing 2.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an embedding method for source code tokens, which is based on contextual word representations, in particular on the method of ELMo. The learned representation is evaluated on the task of bug detection, with promising performance.\", \"strengths\": \"The paper addresses an important and impactful problem. The solution designed for this problem seems very reasonable. Experiments are useful and reasonable, and the experimental results are promising and in favor of the paper.\\nThe paper is well written and clear.\", \"weaknesses\": [\"The data used (in particular the method of buggy code generation applied) seems very specific. It would be interesting to know the performance of the method on real bugs.\", \"The paper is a bit low in technicality.\"], \"decision\": \"Accept\\nI think this paper is overall good work and can open directions of research even beyond the scope of the paper, for example in combining learning and reasoning, or in source code generation with adversarial models.\", \"minor\": [\"Since compilers can spot errors in code completely, it would be useful to motivate the advantage of learning for bug detection\", \"The table references in the body of the paper contain wrong table numbers in Sections 6.1, 6.2, 6.3.\", \"The incorrect Binary Operator example in Listing 2 does not seem to be a well-justified bug.
It could be a correct piece of code for a different purpose.\", \"which use -> which we use\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper leverages recent advances of ELMo in contextual embedding and applies them to source code embedding. With the help of ELMo, source code embedding gains three benefits: (1) surrounding names provide indirect information about possible values the variable could take; (2) how a variable\\u2019s value evolves through program execution can be captured; (3) it opens a gate for the reuse of pre-trained models. To evaluate the effectiveness of the proposed approach, the authors conduct experiments on the downstream task of bug detection.\", \"pros\": \"1. This work studies an interesting problem, which is challenging to solve.\\n2. The application and combination of different techniques in this paper are smart.\\n3. The experimental results show better performance of the contextual-embedding-based method compared with non-contextual-embedding-based methods.\", \"cons\": \"1. It is a good application of known techniques, but the novelty is limited.\\n2. It is suggested to evaluate the effectiveness of the proposed approach on various source code analysis tasks such as variable misuse.\\n3. It is suggested to compare with other state-of-the-art baseline methods, e.g. BERT.\\n4. At the end of the introduction section, the authors claim that \\\"we release our implementation and representation...\\\". However, the implementation, representation and dataset are missing.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to use ELMo embeddings to improve the precision on the first step of the DeepBugs tasks defined by Pradel and Sen (2018). This first step is an artificial problem created by taking real programs (with and without bugs, but assuming almost all of them are correct) and introducing bugs of a certain type into the programs. Then, a classifier needs to distinguish between the real and the artificial set. This classifier is then to be used as a checker for anomalies in code, and the anomalies are reported as bugs; however, the paper skips this second step and only reports results on the first classification problem.\\n\\nTechnically, the paper improves this first step of DeepBugs by using a standard variant of ELMo. The evaluation is detailed, but the results are unsurprising. The paper simply tech-transfers the idea from NLP to code. If this work is accepted at the conference, I cannot imagine an interesting presentation or a poster that simply cites the changed numbers. Did we expect ELMo to be worse than more naive or random embeddings?\\n\\nThe work and its results heavily peg on DeepBugs and increase the precision of its first step by a significant margin, but do not show any further useful results. In fact, on one task (Wrong Binary Operator), SCELMo gets to 100% accuracy.
This means it will never report any bugs, whereas DeepBugs seems to be performing best on exactly this kind of report with its weaker model.\\n\\nI would recommend that the authors either work on showing the practical usefulness of the technique, showing something for the full bugfinding task (not merely the first, artificial part), or investigate if (or how) the idea of adding bug-introducing changes to a code dataset is conceptually flawed for bugfinding (as this idea is widely used by several other works like Allamanis et al. 2018b or https://arxiv.org/abs/1904.01720, which also don't get to practical tools). There seems to be some indication of this in the reported 100% accuracy, but right now this remains completely uninvestigated.\", \"minor_issues\": \"\", \"listing_3\": \"Opernad -> Operand\\nPage 5. There is no Table 6.1\"}" ] }
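For context on the mechanism these reviews evaluate: ELMo builds a contextual embedding of each token as a learned, softmax-weighted sum of the layers of a pretrained bidirectional language model. The sketch below illustrates only that mixing step, following Peters et al. (2018); the layer activations are assumed to come from some biLM trained on code, and all names, shapes, and the toy data are our own assumptions.

import numpy as np

def elmo_embedding(h, s_logits, gamma=1.0):
    # h: (L + 1, seq_len, dim) -- the token layer plus L biLSTM layers from
    # a pretrained bidirectional language model over source code tokens.
    # s_logits: (L + 1,) layer-mixing weights, learned jointly with the
    # downstream task (here, bug detection); gamma rescales the result.
    s = np.exp(s_logits - s_logits.max())
    s = s / s.sum()                              # softmax over layers
    return gamma * np.tensordot(s, h, axes=1)    # -> (seq_len, dim)

# Toy usage: token layer + 2 biLM layers, a 5-token snippet, 16-dim states.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 5, 16))
emb = elmo_embedding(h, s_logits=np.zeros(3))
assert emb.shape == (5, 16)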
B1esygHFwS
Detecting Change in Seasonal Pattern via Autoencoder and Temporal Regularization
[ "Raphael Fettaya", "Dor Bank", "Rachel Lemberg", "Linoy Barel" ]
The change-point detection problem consists of discovering abrupt property changes in the generation process of a time-series. Most state-of-the-art models optimize the power of a kernel two-sample test, with only a few assumptions on the distribution of the data. Unfortunately, because they presume the samples are distributed i.i.d., they are not able to use information about the seasonality of a time-series. In this paper, we present a novel approach, ATR-CSPD, allowing the detection of changes in the seasonal pattern of a time-series. Our method uses an autoencoder together with a temporal regularization to learn the pattern of each seasonal cycle. Using a low-dimensional representation of the seasonal patterns, it is possible to accurately and efficiently estimate the existence of a change point using a clustering algorithm. Through experiments on artificial and real-world data sets, we demonstrate the usefulness of the proposed method for several applications.
[ "Autoencoder", "Change Point Detection", "Timeseries" ]
Reject
https://openreview.net/pdf?id=B1esygHFwS
https://openreview.net/forum?id=B1esygHFwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "AA7icvy-ZA", "B1lt_1QroB", "SkeXgdrT9S", "Skgh0uOCKB", "HyerFBLVtr" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739851, 1573363569473, 1572849642953, 1571879124117, 1571214717046 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2075/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2075/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2075/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2075/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes ATR-CSPD, which learns a low-dimensional representation of seasonal pattern, for detecting changes with clustering-based approaches.\\n\\nWhile ATR-CSPD is simple and intuitive, it lacks novel contribution in methodology. It is unclear how it is different from existing approaches. The evaluation and the writing could be improved significantly. \\n\\nIn short, the paper is not ready for publication. We hope the reviews can help improve the paper for a strong submission in the future.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"I am quite disappointed with the presentation and technical quality of the paper.\\n\\nThere are numerous grammatical errors that make the reading unpleasant. The mathematical notations are also inconsistent throughout different places in the paper.\\n\\nThe extensive literature of modelling time series with seasonality trends, both in the statistics and the machine learning community, is severely under-represented in the motivations and related works. Models like SARIMA have no mention in the paper. \\n\\nThe temporal regularization imposed in section 3.2, coupled with an autoencoder, does not seem very different from the state-space models and their more complex and recent variants that use multi-layered networks (a google search will provide plenty references). \\n\\nThe experiments, with many of the useful baselines missing, are equally unimpressive.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper investigates the important problem of detecting changes in seasonal patterns, and proposes ATR-CSPD to learn a low-dimensional representation of the seasonal pattern and then detects changes with clustering-based approaches. ATR-CSPD achieves improved results on part of synthetic and real-world datasets.\", \"the_paper_may_not_have_enough_contribution_to_be_accepted_due_to_the_following_key_concerns\": \"- The proposed model is not quite novel, and the design needs more justification.\\n - The empirical results are not strong enough to show the effectiveness of ATR-CSPD. \\n\\n# Model design\\n\\nThe idea of using auto-encoder with temporal smoothing to learn a low-dimensional representation of time-series need more justification.\\n- What are the main intuitions of using an auto-encoder? e.g., removing anomaly or denoising. 
Why will it be easier for the model to detect the changes on the reconstructed time-series?\\n- The temporal smoothing makes adjacent periods similar to each other. However, this may have side effects like low recall. For example, in Figure 2(a), the pattern in Aug 17th (Sat) and that in Aug 18th(Sun) can possibly be different (i.e., a change in seasonal pattern), while the difference is smoothed out by the temporal regularization. Is the model sensitive to the regularization, e.g., $\\\\lambda$? Why L1 regularization instead of L2 is used? It will be helpful to provide more justification/intuition of the model design.\\n- Why only the smoothness regularization between adjacent seasons is used? Other potential regularization includes penalizing the difference between the same phase in different seasons. \\n\\n# Assumption and limitation\\nThe proposed method requires the seasonal period being provided, and also requires a large number of hyperparameters being specified, e.g., 1) the threshold of silhouette score, 2) the hidden representation dimension $q$, 3) the regularization coefficient, 4) $\\\\gamma, \\\\lambda$, 5) hyperparameters for constructing the encoder/decoder and 6) training the models. \\n\\nRegarding the threshold of the silhouette score in the clustering step, is setting this hyperparameter easier than the number of clusters, i.e., the number of changing points? Is ATR-CSPD sensitive to this parameter? How this hyperparameter is tuned? Having too many hyperparameters (that are potentially non-trivial to set/tune) may make the proposed method less robust. \\n\\n# Experimental results: \\nAccording to the results in Table 1, ATR-CSPD is mainly better at detecting Category C/D/E change points, which are mainly caused by changes of height/position of the spike. However, it performs either similarly or worse than the other baselines on other tasks. Besides, the lack of ground-truth data on NYC Taxi dataset and the Azure monitor dataset makes it hard to evaluate the effectiveness of the proposed algorithm. Moreover, only uni-variate time series tasks are investigated. These issues may limit the application domain of the proposed algorithm. \\n\\n# Minor notation and presentation issues\\n- In Definition 2, does CSPD assume F is the same in $G_k$ and $G'_k$? If not, CPD might be a subset of CSPD. \\n\\n- In Equation 1 and 2, $n$ is used to represent the number of observations, while in Definition 1, capital $N$ is used to represent the same concept.\\n\\n- In Figure 2, it might be easier to understand if all the weekdays are drawn using the same color (blue or green) and all the weekends are also drawn in the same color (yellow or red).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a new model for change point detection, using autoencoders with temporal regularization, in order to impose temporal smoothness in the latent codes. To motivate this new model, the authors also provided a toy example to show how the abnormality in a time series is removed in the reconstructed signal using this additional regularization term. 
Experimental results were provided to support the proposed new model.\\n\\nI have a few concerns about some technical details of the paper, as explained below:\\n1) The paper motivated the new model with difficulty in detecting change points in seasonal time series. However, the proposed model with the temporal regularization is not directly related to the seasonality or periodicity of the input data. It is more related to the smoothness of the latent code. Hence it seems to me that there is a slight disconnection between the motivation and the actually proposed model. It would be nice if the authors can provide more intuitive explanation on why the temporal regularization can handle well change point detection in seasonal temporal series.\\n\\n2) The temporal regularization proposed in this paper is very similar to the total variation penalty used extensively in statistics and image processing. It would be nice if the authors can make a connection between the two. For example:\\nHarchaoui, Z., & L\\u00e9vy-Leduc, C. (2010). Multiple change-point estimation with a total variation penalty. Journal of the American Statistical Association, 105(492), 1480-1493.\\nBeck, A., & Teboulle, M. (2009). Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE transactions on image processing, 18(11), 2419-2434.\\n\\n3) In the experimental section 4.3, the authors mentioned that \\\"In Table 2, ... we can see that our algorithm outperforms the baseline model\\\". However, in Table 2 the precision of the proposed model (67%) is lower than the baseline model (68%). Hence it is not obvious to the reader that the proposed model outperforms the baseline. \\n\\n4) In the appendix B, the authors explained the architecture of the autoencoder used in the paper. I wonder why the authors chose an asymmetric structure between the encoder and decoder, as most autoencoders have a symmetric structure.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: The paper raises an alarm that state-of-the art change-point detection methods in the ML literature do not handle important practical aspects arising in time-series modeling, namely seasonality. Indeed, methods designed to detect changing distribution under an i.i.d. setting can fail dramatically when the assumption is violated, when the change happens in the seasonal component. The paper proposes to use an auto-encoder to find the \\\"main pattern\\\" within each seasonal window, and to use total variation penalty (l1-norm on the change) of the hidden state in the auto-encoder to encourage a smooth state-sequence which allow breaks. They use k-means clustering to partition data-points, and detect a change-point if two consequent hidden states don't end up in the same cluster.\\n\\nWhile the proposal is sensible and the paper is reasonably readable, I find the paper lacking in several respects, and recommend to reject it. My main concerns are \\n(a) novelty: despite the claims in the paper -- the importance of seasonality is well known and appreciated in time-series literature, and the proposal to look for changes in seasonality is fairly obvious when dealing with practical time-series. I would suggest to do a comprehensive literature search and re-evaluate the novelty of the paper. 
\\nI believe that recent ML papers e.g. kernel two-sample tests and such, focus on the i.i.d. setting and ignore seasonality (and other messy aspects of practical TS) -- as it is the more challenging statistical problem.\\n(b) The paper considers a setting where the time-series consists of a seasonal component and an i.i.d. component (combined additively or multiplicatively). It doesn't attempt to model any kind of stochastic dynamics -- e.g. at least a simple auto-regressive model instead of iid, and non-stationarity (trends) in the time-series. So despite aiming to look at practical time-series, the paper still considers a simplified model.\\n(c) The paper's presentation is often sloppy in language use, assumptions, mathematical details, and simulations and needs to be significantly improved to be considered for ICLR (or related ML conferences).\", \"detailed_comments\": \"a) The references are severely lacking. There is an extensive literature in modeling time-series with seasonality and classical methods such as SARIMA (seasonal ARIMA), or exponential smoothing can track the evolution and changes in seasonal components. Various nonlinear DL-approaches to TS with seasonality have also started to appear. Once time-series is decomposed into trend, seasonal and stochastic part (using any linear or nonlinear or deep model), it is straightforward to apply anomaly detection algorithms to each component separately. Please take a look at e.g. https://anomaly.io/blog/index.html (from salesforce.com), to see practical change-point or anomaly detection in time-series in practice which does pay attention to seasonality. Also papers by Rob Hyndman pay close attention to seasonality, see e.g. https://otexts.com/fpp2/. \\n\\\"Changepoint Detection in Periodic and Autocorrelated Time Series\\\", https://journals.ametsoc.org/doi/full/10.1175/JCLI4291.1\", \"https\": \"//cran.r-project.org/web/packages/trend/vignettes/trend.pdf (which has a section on seasonal change-point detection)\\nHarvey, Koopman, Penzer, \\\"Messy Time Series: A Unified approach\\\", Adv. in Econometrics, Vol. 13, pp. 103-143., https://www.stat.berkeley.edu/~brill/Stat248/messyts.pdf\\nPerhaps there's relatively less focus on these practical details of change-point detection in recent ML literature and the focus is on the stochastic component, as it is the most challenging for prediction. The use of l1-norm of differences in time-series to detect changes is a natural idea, and has been suggested many papers e.g.in: http://eeweb.poly.edu/iselesni/lecture_notes/TV_filtering.pdf, \\n\\\"Time Series Clustering using the Total Variation Distance\\\", \\nStephen Boyd's trend filtering, https://web.stanford.edu/~boyd/papers/pdf/l1_trend_filter.pdf .\\n\\nWhile I am not aware of a specific prior work on auto-encoder with temporal smoothness for CPD, most of the main ideas are well known, and in my view the contribution is very limited in novelty.\\n\\nb) You're ignoring any memory or dynamics in the stochastic component of the time-series -- e.g. allowing something like a simple AR model rather than iid would be a good step. Detecting changes in the dynamics or correlation structure (temporal or cross-sectional) would make the paper more interesting. Something closer to switching linear dynamical systems, see for example https://arxiv.org/abs/1610.08466. \\n\\nc) The presentation has many issues in language / math / simulations and needs to be improved:\\n\\n1. 
The setting is not described clearly / formally -- are you trying to detect change-points online or offline, what assumptions are you making on the segments after removing seasonality -- are these just iid / stationary, can they include trends, outliers, etc.\\n2. Baseline methods for detecting seasonal patterns are naive -- clearly applying methods that are not aware of seasonality will fail when there are strong seasonal components. There is one basic attempt at removing the seasonal component by averaging, and applying iid kernel CPD methods -- where it does help. I believe doing something a bit more realistic (like doing a seasonal decomposition) will make the baselines much stronger. \\n3. Citation format is inconsistent with ICLR. \\n4. ATR-CSPD is undefined in the abstract. \\n5. Intro: i, j, k notation inconsistent -- you seem to use i both for i = j*p + k, and also to refer to the window id. \\n6. What is a \"generative function\" of a time-series? Do you mean the pdf / cdf? What do you mean by a product of generative functions (which is additive or multiplicative), do you mean adding / taking products of random variables coming from independent distributions? What do you mean when you say that you do not differentiate between additive / multiplicative? Do you claim to handle both within the same model?\\n7. Definition 2 -- do you look for x_j0,k ~ Gk', or x_j for j > j0 ~ Gk'? \\n8. You claim a multi-variate extension is easy -- but is it? How would you tackle e.g. changes in correlation structure?\\n9. \"Autoencoders attempt to copy input to output\" - isn't this trivial by using an identity function? You should mention some compression / bottleneck as well. \\n10. How do you optimize the total-variation (l1-norm) penalty in your formulation? Just throw it into SGD in Keras?\\n11. The discussion in 3.2 is confusing -- you talk about weekly series, but use daily seasonality; however, you then describe detecting weekdays vs. weekends? How can you associate separate weekends without a weekly seasonal model? \\n12. London electricity data-set -- why do you average all weeks within the time-series to find the average customer week? This was very surprising. Don't you lose most of the interesting anomaly data this way?\\n13. Figures are not explained well. While there's nice use of color -- it's often hard to understand what the description is pointing at.\", \"typos\": \"Person -> Pearson, Autencoder -> Autoencoder, and many others.\"}" ] }
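A note for readers of the row above: the temporal regularization the reviewers discuss is an l1 penalty on differences of consecutive latent codes, i.e. a total-variation penalty, and the optimization question in point 10 ("just throw it into SGD?") has a short answer: yes, since autograd handles the l1 subgradients directly. The sketch below is a minimal editorial illustration assembled from the reviews alone, not the paper's code; the window size, latent size, and `tv_weight` are hypothetical choices.

```python
# Minimal sketch (assumptions: per-window inputs, one latent code per seasonal
# window, hypothetical sizes). Not the paper's implementation.
import torch
import torch.nn as nn

class TVAutoencoder(nn.Module):
    def __init__(self, window=24, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 16), nn.ReLU(),
                                     nn.Linear(16, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(),
                                     nn.Linear(16, window))

    def forward(self, x):        # x: (num_windows, window)
        z = self.encoder(x)      # one latent code per seasonal window
        return self.decoder(z), z

def loss_fn(model, x, tv_weight=0.1):
    recon, z = model(x)
    recon_loss = ((recon - x) ** 2).mean()
    # Total-variation penalty: l1 norm of consecutive latent differences
    # encourages a piecewise-constant latent sequence that allows breaks.
    tv = (z[1:] - z[:-1]).abs().sum(dim=1).mean()
    return recon_loss + tv_weight * tv
```

Training proceeds with any stochastic optimizer over the windowed series; per the reviews, a change point would then be flagged where consecutive latent codes fall into different k-means clusters.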
Hkl9JlBYvr
VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
[ "Luisa Zintgraf", "Kyriacos Shiarlis", "Maximilian Igl", "Sebastian Schulze", "Yarin Gal", "Katja Hofmann", "Shimon Whiteson" ]
Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning. A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent’s uncertainty about the environment. Computing a Bayes-optimal policy is however intractable for all but the smallest tasks. In this paper, we introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection. In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty. We further evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher online return than existing methods.
[ "Meta-Learning", "Bayesian Reinforcement Learning", "BAMDPs", "Deep Reinforcement Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=Hkl9JlBYvr
https://openreview.net/forum?id=Hkl9JlBYvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "PwgaU_btCX", "Hkgsq_lhsS", "Hye3gDe2iH", "BylpcZxoor", "rkeFf-CcjH", "SyxKPjfroH", "HkeZujWmsB", "ryeWBjZXiH", "ByxG4q-XiS", "ByeWje_-jB", "HkgOjdCb5r", "SkeDSGonKS", "B1lTqBu_KB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739822, 1573812371053, 1573811955940, 1573745045226, 1573736721503, 1573362529080, 1573227369223, 1573227321334, 1573227049562, 1573122201154, 1572100255757, 1571758655076, 1571485077243 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2073/Authors" ], [ "ICLR.cc/2020/Conference/Paper2073/Authors" ], [ "ICLR.cc/2020/Conference/Paper2073/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2073/Authors" ], [ "ICLR.cc/2020/Conference/Paper2073/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2073/Authors" ], [ "ICLR.cc/2020/Conference/Paper2073/Authors" ], [ "ICLR.cc/2020/Conference/Paper2073/Authors" ], [ "~Frans_Oliehoek1" ], [ "ICLR.cc/2020/Conference/Paper2073/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2073/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2073/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers the problem of transfer learning among families of MDP, and proposes a variational Bayesian approach to learn a probabilistic model of a new problem drawn from the same distribution as previous tasks, which is then leveraged during action selection.\\n\\nAfter discussion, the three respondent reviewers converged to the opinion that the paper is novel and interesting, and well evaluated. (Reviewer 1 never responded to any questions the authors or me, so I have disregarded their review.) I am therefore recommending an accept.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author Reply\", \"comment\": \"Thank you for the additional pointers.\\n\\n1-2) We agree with your sentiment regarding generalisation to out-of-distribution task, and hope to see more research towards this in the future. We added a short discussion on future work to our conclusion to touch on these points with the reader.\\n\\n3) Thank you! Yes, the AntGoal environment seems like a good testbed for visualising the belief also for continuous state spaces. In theory, there is nothing that should prevent variBAD from learning with even larger state / action space, but we haven\\u2019t yet performed these experiments. We will look into scaling variBAD up even further in the future.\"}", "{\"title\": \"Author Reply II\", \"comment\": \"Dear reviewer,\\n\\nWe believe we have addressed all of your concerns in our response above and were wondering if you had the chance to look at it. We would appreciate it if you could reconsider your evaluation and score of our paper, or let us know if you have any other questions at this point.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the additional clarification. I feel as though my questions were satisfactorily answered for the most part. I am inclined to improve my initial score. 
Below are some follow-ups to the authors' responses.\\n\\nBefore I respond to these points, I do want to elevate one comment made in my initial review, that of showing ablations to the proposed variBAD approach so as to highlight how performance changes relative to the RL^2 baseline, as discussed in the paper.\\n\\n1-2) Your responses helped clarify and correct my misunderstanding of your use of the decoder. As the authors noticed, I mistakenly believed that the decoder was used for planning purposes. This eliminates my concerns about compounding model errors and stability. The additional clarification added to the paper was helpful as well.\\n\\nI agree that the contributions made to improve exploration when operating on in-distribution tasks are important. While I respect the advances made possible by meta-RL, I am disappointed that so much effort (computational and intellectual) is being spent fitting to a narrow distribution of tasks without discussing the limitations of such methods. I appreciate the authors' effort to highlight these limitations in this discussion; I would advocate that such points should be included in the paper. Moving to \\\"generalize\\\" to out-of-distribution tasks *is* an important and exciting area of research. There has been some work along these lines (Nagabandi+Clavera, et al [ICLR 2019]); I wonder if the variational inference framework presented in this paper, or a modified version of VIREL (Fellows+Mahajan, et al [NeurIPS 2019]), could help paper over some of the computational inefficiencies and hand-tuned model selection. \\n\\n3) I appreciate the additional efforts taken to provide some consistency in how the authors analyze variBAD between experimental domains. I understand that having a continuous state and action space does complicate a demonstration similar to the one used in the grid world experiments.\\n\\nAs a point of suggestion for the authors' proposed additional experiments, the PEARL paper constructed tasks in the Ant domain where the agent was expected to navigate to a particular location. Perhaps having a spatial component of the task could help visualize the agent's belief of the goal state in these continuous environments. I'm not sure whether the added complexity with an increased state and action space would complicate the use of variBAD.\\n\\nAdditionally, Nagabandi, et al [ICRA 2018] (among others, I'm sure) constructed specific trajectories for their agent to follow. Their success however depended heavily on MPC, which may not lend itself directly to the proposed approach.\\n\\n4) This was helpful, thank you.\"}", "{\"title\": \"Author Reply\", \"comment\": \"Thank you for your thoughtful review and questions; we very much appreciate the time you took to review our work. We reply to your points below.\\n\\n1) \\n\\nThe decoder is not used at test time; we only roll out the policy (via forward passes through the encoder and the policy network) without any explicit planning. Instead, the policy has learned to act approximately Bayes-optimally during meta-training. Using the decoder to plan is an interesting direction of future work, but is not trivial for the reasons you mentioned (amongst others). We added some clarification to the latest revision of the paper (end of Sec 3.2).\\n\\nThat being said, we indeed only consider meta-learning settings where the training and test distribution are the same, as is common in recent meta-RL literature. 
We believe, however, that generalising how to explore in new tasks, even from the same distribution, is already a significant accomplishment.\\n\\nTransferring to out-of-distribution tasks is even more challenging, and in particular for variBAD two problems are likely to arise: the inference procedure will be wrong (the prior and/or posterior update) and the policy will not be able to interpret a changed posterior. In this case, further training of both the encoder and decoder might be necessary, together with updates to the policy and/or explicit planning. While this is outside the scope of our paper at this point, this is an interesting direction for future research!\\n\\n2) \\n\\nWe\u2019re not entirely sure we understand your question correctly. As mentioned above, we do not use the reward/transition model at test time, and do not perform any explicit planning. We believe it is an advantage to not have to do model predictive control, since indeed we do not run into stability problems as you mention. The exploratory actions are deterministically chosen by the policy, and determined by what it has learned to be Bayes-optimal behaviour from meta-training. We hope this answers your question.\\n\\n3)\\n\\nWe added a visualisation of the latent space for two sample rollouts in the HalfCheetahDir (tasks: left/right) to Appendix C.3. Visualising the latent space gives us some insight into how fast the posterior concentrates, and in this example we can see this happening within just the first few environment steps. \\n\\nVisualising the belief in the reward/state space directly is indeed more difficult, since we now have continuous states and actions. What we could do instead is additionally train a model that predicts a ground-truth task description or ID (separate from the main objective and just for further analysis, since we do not want to use this privileged information for meta-training). This would give us a sense of how certain the agent is about the task (without artefacts such as increasing latent variance in the logspace as we observe in Appendix C.3). We added a note on this to the Appendix as well, and plan to include such visualisations in future revisions of our paper.\\n\\n4)\\n\\nThank you for pointing this out! We tried to clarify this in the paper (by renaming H in the BAMDP to H^+, and adding an explanation in Section 2.2 shortly after Eq. (3)). \\n\\nBayes-optimal behaviour critically depends on the number of time steps given to the agent. I.e., optimally trading off exploration and exploitation can lead to very different behaviour when the agent is given only 10 steps, vs. when it is given 100 steps. E.g. let\u2019s say the agent can learn something about the task at hand without getting any reward (purely information-seeking actions); then these might be worth it only if there is enough time to exploit that information. If, however, time\u2019s up after learning that information, a better strategy is to take a gamble and try to solve the most likely task directly.\\n\\nTherefore, when training variBAD, we need to pre-specify for which horizon we want the agent to act Bayes-optimally. In the gridworld, this was three episodes of the original MDP (so H^+=3*15, which is the new horizon in the BAMDP), and in MuJoCo this was one episode (so H^+=1*200). \\n\\nOther)\\n\\nThank you for pointing out the additional literature; this is very much appreciated. 
We will broaden our review of related work in the paper as we dive into these.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"*EDIT: Score increased after discussion with authors clarified many concerns raised below*\", \"summary\": \"This paper presents an algorithmic approach toward learning Bayes Optimal policies under the uncertainty of the environment. Leveraging meta-learning, the proposed variBAD approximate inference procedure is capable of adapting within the first episode at, what the authors term, meta-test time.\", \"comments\": \"It is my estimation that this paper is well positioned to further current state-of-the-art adaptive RL frameworks or methodologies, whether they are meta-learned, transferred or directly inferred through probabilistic mechanisms. The primary contribution of this paper is in how variBAD learns the variational inference procedure. As noted in the related work section, many contemporary policy learning approaches via variational inference are limited by their construction, selection of prior distributions, etc. The advantage of the proposed methodology is that it is capable of efficiently inferring the current environment and adapting the policy learning procedure accordingly. The experiments successfully compare with relevant baselines and prior approaches. The discussion is well framed in highlighting the benefits and limitations of the proposed methodology in relief to prior approaches. One possible weakness in the experimentation, given how closely RL^2 matches the performance of variBAD, is that the specific contributions of each architectural choice or optimization protocol are unclear. There are a few areas in the paper where the authors suggest that ablating their model in specific ways would recover the core approaches present in RL^2. It would be instructive to see how/if performance degrades or converges toward that of RL^2 as the variBAD methodology is ablated.\\n\\nThe paper is well grounded in the literature, albeit skewed perhaps a bit too far toward recent meta-learning results. This is understandable given the focus of this paper, however there are other approaches that might deserve a mention as they similarly parameterize variation over possible MDPs with some latent variables. Namely, I have in mind a few lines of research such as Contextual MDPs (Hallak, et al, 2015; Jiang, et al, 2017; Dann, et al, 2018; etc.), Successor Features (Barretto, et al, 2016,2017,2019; Lehnert, et al, 2017; etc.) and HiP-MDPs (Doshi-Velez and Konidaris, 2016; Killian, et al, 2017). In particular use of the HiP-MDP framework, Yao, et al (2018) also use the inferred latent variable used to identify the task to condition the policy. While it's always easy to dig into the rabbit holes of related research and distract from the overall objective of a paper, I thought that there was sufficient overlap with these other lines of research that the authors may find interesting. I do not claim that any one of these additional sources of prior work have been overlooked to the detriment of the current paper, they are offered as merely a suggestion to broaden the author's anchoring in the literature.\\n\\nNow, some more specific questions about the paper and proposed approach. 
Further clarity along any of these questions would greatly improve the presentation of the paper as well as further convince me of the paper's suitability for publication.\\n1) What is the advantage of decoding the entire trajectory? It is well understood that this is advantageous in training as that data is available and allows for better inference of the variational parameters. However, under test conditions where the framework may be operating in environments that lie outside the distribution of MDPs it was trained on, I can imagine that errors in trajectory prediction may compound and throw off the entire inference procedure. The experimental set-up did not allay these concerns as there was no mention for holding out-of-distribution tasks/environments aside and the variation in environments is pretty narrow. \\n2) How is the proposed trajectory decoding more stable than model predictive control? Is stability a large consideration for variBAD when exploring? How much can one trust the exploratory actions under variBAD? \\n3) The visualization and careful explanation in Section 5.1 of how variBAD executes inference and learning was greatly appreciated. However, are these intuitions valid when extending beyond discrete state and action spaces? Can one make the same claims about the overall approach or procedure in the MuJoCo domains? It was mildly disappointing that a similar explanatory effort was not made in more complex environments. Even an acknowledgement of this being unreasonable would help round out the discussion in Section 5.2.\\n4) It is not clear what the connection is between the horizon H and the number of rollouts used for evaluation/inference/training. I spent a bit more time than necessary going over and over these items in the paper to where I think that I may understand but I'm still not 100% confident about what is impacted by the number of rollouts used.\"}", "{\"title\": \"Author Reply\", \"comment\": \"We welcome constructive and fair feedback, positive or negative. Unfortunately, we cannot respond to any of the criticism in this review, since none of the related work which is supposedly missing is named.\\n\\nVariational inference methods are indeed used in various ways for transfer learning (we give a detailed overview of work in this space on page 7), but many of these settings don\\u2019t directly consider the problem of exploration in new tasks. The novelty of our work is that we use VI methods to meta-learn approximately Bayes-optimal policies. We feel like this review disregards large parts of our contribution, discussion of related work, and experimental comparison.\\n\\nWe will be requesting the AC to disregard this unconstructive review.\"}", "{\"title\": \"Author Reply\", \"comment\": \"Thank you for your review and valuable suggestions. We reply to your points below.\\n\\n[Novelty]\\n\\nOur method is one of the few that manages to scale up learning (approximate) Bayes-optimal behaviour to complex environments such as MuJoCo. Unlike existing works, we do not rely on privileged information (such as the task description or ID), and we do not use samples from the posterior (conditioning the policy on the entire posterior like in variBAD makes learning the agent\\u2019s strategy harder, since it has to implicitly do planning in belief space, but can ultimately lead to superior performance).\\n\\nThe current state of the art algorithm on the MuJoCo benchmark, PEARL, is akin to posterior sampling, which is not Bayes optimal. 
We believe that, by approximating Bayes-optimal exploration at meta-test time, variBAD takes a significant step forward. This is confirmed empirically by variBAD\u2019s better test-time exploration behaviour and higher performance in the first rollout. \\n\\nThough we build on concepts that have been explored extensively, variBAD was not straightforward to devise. The exact choice of objective (predicting the future and using the previous posterior as a prior) was crucial in order to get the approximately Bayes-optimal behaviour we want. In that sense, there\u2019s insight in the proposed approach that we think is useful.\\n\\n[Computational Complexity]\\n\\nThank you for pointing this out. We added a discussion of this to the paper (in Appendix C.2, and briefly in the related work section). Here\u2019s a summary:\\n\\nIndeed, we assume we can meta-train (both the inference procedure and policy) on a set of related tasks, and this is typically computationally expensive: the policy essentially has to learn many tasks at once (which causes meta-RL algorithms to generally take a long time to learn) and we have additional model complexity due to the encoder/decoder. This setup allows us to save sample costs at test time, which is a desirable property in many situations.\\n\\nOther existing approximate Bayesian RL methods often rely on sample-based planning (e.g. the work by Arthur Guez), which might include expensive planning steps, and require us to define a prior / belief update on the environment dynamics (which is, e.g., unclear how to do for domains like MuJoCo).\\n\\nWhen comparing existing meta-learning methods in terms of runtime, E-MAML and ProMP are fastest. They have the advantage that they do not have a recurrent part such as variBAD or RL^2. Forward and backward passes through recurrent networks can be slow, especially with large horizons. On the other hand, recurrent models allow us to do adaptation online, while interacting with the environment. \\n\\nEven though both variBAD and RL^2 use recurrent modules, we observed that variBAD is faster when training the policy with PPO. This is because we do not backpropagate the RL-loss through the recurrent part, which allows us to make the PPO mini-batch updates without having to re-compute the embeddings (so it saves us a lot of forward/backward passes through the recurrent model). This difference should be less pronounced with other RL methods that do not rely on this many forward/backward passes per policy update.\\n\\nCompared to PEARL, variBAD takes roughly twice as long to train, which is mostly due to variBAD being on-policy whereas PEARL is off-policy (see figures in the Appendix), but on-policy vs off-policy training is an issue orthogonal to our contribution. Doing posterior sampling using off-policy methods also requires PEARL to use a different encoder (to maintain order invariance of the sampled trajectories) which is non-recurrent (and hence faster to train) but restrictive since it assumes independence between individual transitions.\\n\\nPlease let us know if you have any other questions or concerns.\"}", "{\"title\": \"Author Reply\", \"comment\": \"Thank you for your review. We reply to your points below.\\n\\n[Motivation] \\n\\nThe scope of applications of our method is huge, since most of RL requires smart exploration. The only assumption that we make is that the agent has the chance to meta-train on a set of related tasks, an assumption made by all of meta-RL. 
This applies to many settings including, e.g., video games and sim2real transfer for robotics. Our method outperforms the current state-of-the-art meta-learning method (PEARL) on a popular MuJoCo benchmark, in terms of adapting within a single episode. Thus we believe it is a considerable step towards better exploration for RL algorithms via meta-learning.\\n\\nIn many real-world settings, we care not only about performance but we also want our agent to be robust, fair, and safe. Examples are high-stakes applications such as healthcare, where we care a lot about patient well-being, and education, where, e.g., automated tutoring systems should neither bore nor discourage students (for some examples see [1]-[5]). Applying RL to real-world applications like these requires efficient and safe data gathering, i.e., smart exploration.\\n\\nWe added a short motivating sentence to the introduction of the revised version of our paper.\\n\\n[Theoretical Analysis] \\n\\nOur method is derived from the problem formulation to approximate the Bayes-optimal solution, which gives it a strong theoretical foundation and motivation. The contribution of our paper is to find scalable approximations to BAMDP solutions, which unsurprisingly precludes theoretical guarantees. We do provide intuition about our objective function by discussing its properties in the paper, and we designed the gridworld experiment to showcase what behaviour we expect and achieve.\\n\\nPlease let us know if you have any other questions or concerns.\\n\\n[1] Erraqabi, Akram, et al. \\\"Rewards and errors in multi-arm bandits for interactive education.\\\" 2016.\\n[2] Liu, Yun-En, et al. \\\"Trading Off Scientific Knowledge and User Learning with Multi-Armed Bandits.\\\" EDM. 2014.\\n[3] Koedinger, Kenneth R., et al. \\\"New potentials for data-driven intelligent tutoring system development and optimization.\\\" AI Magazine 34.3 (2013): 27-41.\\n[4] Yauney, Gregory, and Pratik Shah. \\\"Reinforcement learning with action-derived rewards for chemotherapy and clinical trial dosing regimen selection.\\\" Machine Learning for Healthcare Conference. 2018.\\n[5] Hochberg, Irit, et al. \\\"A reinforcement learning system to encourage physical activity in diabetes patients.\\\" arXiv preprint arXiv:1605.04070 (2016).\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper considers a version of the reinforcement learning problem where an unknown prior distribution over Markov decision processes is assumed and the learner can sample from it. After sampling an MDP, standard reinforcement learning is performed. Then the paper investigates the Bayes-optimal strategy for such a meta-learning setting. The experiments are done for artificial maze-solving tasks.\\n\\nComments:\\nConsidering a Bayesian setting of reinforcement learning is sound and well-motivated in a mathematical or statistical sense. 
On the other hand, I wonder what kind of practical applications motivate such a formulation. Unfortunately, I don\u2019t have any examples in mind and the paper also shows only some artificial experiments. So, the formulation seems, so far, not to be convincing in a practical sense. \\n\\nAnother concern in my mind is that the proposed methods are not supported by any theoretical analyses. I think mathematical papers without practical applications are acceptable if they contain strong mathematical analyses. The present paper, however, does not contain such analyses. \\n\\nAs a summary, I feel that the paper is strong neither in theoretical analyses nor in practical usefulness, and thus further investigation on either side is necessary.\", \"comments_after_rebuttal\": \"I modified my score according to the authors' comments.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a new deep reinforcement learning method that can efficiently trade off exploration and exploitation. An optimal policy for this trade-off can be solved for under the Bayesian-adaptive MDP framework, but in practice, the computation is often intractable. To address this challenge and approximate a Bayes-optimal policy, the proposed method VariBAD combines meta-learning, variational inference, and Bayesian RL. Specifically, the algorithm learns latent representations of task embeddings and performs tractable approximate inference by optimizing a lower bound of the objective.\\n\\nThe paper is well-written and easy to follow. The combination of meta-learning, variational inference and BAMDP is a clear and neat way to approximate a Bayes-optimal policy. The idea also sounds practical for RL as it can approximately solve larger tasks with unknown priors. Experiments on Gridworld and Mujoco show the effectiveness of the proposed method. On Gridworld the performance of the proposed algorithm is close to the performance of the Bayes-optimal policy.\\n\\nOne concern for this paper is the level of novelty, as each major component of the proposed solution has been explored quite extensively in the existing literature (as mentioned in the related work section). \\n\\nIn addition, compared to many existing Bayesian RL methods, VariBAD meta-learns the inference procedure. This can add additional computation complexity to Bayesian RL, which is not explained or mentioned in either the method part or the experiments. I hope the authors can add some discussion on these aspects.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The proposed method represents a single MDP using a learned, low-dimensional stochastic latent variable. 
On these grounds, given a set of tasks sampled from a distribution, the method jointly trains: (1) a variational auto-encoder that can infer the posterior distribution over the postulated latent variable when it encounters a new task while interacting with the environment, and (2) a policy that conditions on this posterior distribution over the MDP embeddings, and thus learns how to trade off exploration and exploitation when selecting actions.\\n\\nSuch variational inference arguments for transfer learning in the context of MDPs are not new. The authors have not done a good job reviewing the related literature. Most importantly, their experimental evaluations lack substantial comparison to such related methods. This is totally disappointing.\"}" ] }
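The thread above pins down the variBAD interface as the authors describe it: a recurrent encoder maps the interaction history to a Gaussian posterior over a latent task variable; the policy conditions on the state together with the full posterior (rather than on a sample, as posterior-sampling methods such as PEARL do); and the decoder enters only the training loss, not test-time action selection. The following is a minimal sketch of that interface, assembled for this dump rather than taken from the authors' code; all layer sizes and names are hypothetical.

```python
# Minimal sketch of a posterior-conditioned policy (hypothetical sizes/names).
import torch
import torch.nn as nn

class BeliefEncoder(nn.Module):
    """Maps the history of (s, a, r) transitions to posterior params (mu, logvar)."""
    def __init__(self, transition_dim, latent_dim=5, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(transition_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, history):           # history: (batch, t, transition_dim)
        _, h = self.gru(history)          # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)

class BeliefConditionedPolicy(nn.Module):
    """Acts on the state plus the posterior itself, not on a sample from it."""
    def __init__(self, state_dim, latent_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, n_actions))

    def forward(self, state, mu, logvar):
        return self.net(torch.cat([state, mu, logvar], dim=-1))
```

At test time only these two modules are rolled out: the GRU state is updated after every transition and the refreshed (mu, logvar) is fed to the policy, which is what lets the agent trade off exploration and exploitation within a single episode.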
Bkl5kxrKDr
A Generalized Training Approach for Multiagent Learning
[ "Paul Muller", "Shayegan Omidshafiei", "Mark Rowland", "Karl Tuyls", "Julien Perolat", "Siqi Liu", "Daniel Hennes", "Luke Marris", "Marc Lanctot", "Edward Hughes", "Zhe Wang", "Guy Lever", "Nicolas Heess", "Thore Graepel", "Remi Munos" ]
This paper investigates a population-based training regime based on game-theoretic principles called Policy-Spaced Response Oracles (PSRO). PSRO is general in the sense that it (1) encompasses well-known algorithms such as fictitious play and double oracle as special cases, and (2) in principle applies to general-sum, many-player games. Despite this, prior studies of PSRO have been focused on two-player zero-sum games, a regime where in Nash equilibria are tractably computable. In moving from two-player zero-sum games to more general settings, computation of Nash equilibria quickly becomes infeasible. Here, we extend the theoretical underpinnings of PSRO by considering an alternative solution concept, α-Rank, which is unique (thus faces no equilibrium selection issues, unlike Nash) and applies readily to general-sum, many-player settings. We establish convergence guarantees in several games classes, and identify links between Nash equilibria and α-Rank. We demonstrate the competitive performance of α-Rank-based PSRO against an exact Nash solver-based PSRO in 2-player Kuhn and Leduc Poker. We then go beyond the reach of prior PSRO applications by considering 3- to 5-player poker games, yielding instances where α-Rank achieves faster convergence than approximate Nash solvers, thus establishing it as a favorable general games solver. We also carry out an initial empirical validation in MuJoCo soccer, illustrating the feasibility of the proposed approach in another complex domain.
[ "multiagent learning", "game theory", "training", "games" ]
Accept (Talk)
https://openreview.net/pdf?id=Bkl5kxrKDr
https://openreview.net/forum?id=Bkl5kxrKDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "XEgYShJzuK", "H1g_mNc2iB", "S1xF279njH", "Syl9uX92jB", "r1xXx753jS", "SJxAQ3QqiB", "BkeT857cir", "SkgOTU3Yjr", "BkgXrUhtsr", "rkx3X82tjr", "Byxvy8hKjS", "r1xBh9CaYS", "S1eXzEgTtS", "B1lMGBlhtH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739793, 1573852191705, 1573852081216, 1573852018293, 1573851883513, 1573694502430, 1573694036941, 1573664448263, 1573664314895, 1573664292146, 1573664222788, 1571838637035, 1571779595312, 1571714313954 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2072/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/Authors" ], [ "ICLR.cc/2020/Conference/Paper2072/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2072/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2072/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper analyzes and extends learning methods based on Policy-Spaced Response Oracles (PSRO) through the application of alpha-rank. In doing so, the paper explores connections with Nash equilibria, establishes convergence guarantees in multiple settings, and presents promising empirical results on (among other things) 3-to-5 player poker games.\\n\\nAlthough this paper originally received mixed scores, after the rebuttal period all reviewers converged to a consensus. A revised version also includes new experiments from the MuJoCo soccer domain, and new poker results as well. Overall, this paper provides a nice balance of theoretical support and practical relevance that should be of high impact to the RL community.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Follow-up Response to Reviewer 3: Additional experiments\", \"comment\": \"As promised, we have conducted new experiments on the MuJoCo soccer domain, which demonstrate the effectiveness of the alpha-PSRO training procedure against a self-play training procedure. Specifically, these give insights on performance improvements resulting from PSRO vs. standard population-based training regimes, in addition to the comparisons of PSRO meta-solvers evaluated in the original experiments. Please see Appendix F and Fig. F.11 for these updated results.\\n\\nAdditionally, we have now appended new Poker results to the latest revision. These include an evaluation of our training approach against the rectified Nash solver introduced in Balduzzi et al. (ICML, 2019) in two player games; we have additionally included rectified projected replicator dynamics as a comparison baseline to those experiments. Please see Fig. 3 for the updated experiments. We have likewise updated the text in Section 5 (\\u201cEvaluation\\u201d) to convey insights into these new results. 
Additionally, due to the rather counterintuitive nature of the rectified Nash experiments, we have appended a new section (Appendix C.5: \u201cExplanation of Rectified Nash Performance\u201d) with a walkthrough of the results.\"}", "{\"title\": \"Thanks !\", \"comment\": \"We appreciate your constructive feedback, which definitely helped to improve the paper\u2019s quality. Thanks also for your kind remarks and for updating the score (although it appears not to have changed on our end, though maybe it becomes visible after the review process? We wanted to kindly flag this just in case. Thanks again!)\\n\\nRegarding the new experiments, please see our response to your other comment for details.\\n\\nThanks also for the suggestion on the additional works to include, which have all been appended to the related works in the latest revision. Additionally, we have added the following recent related works for interested readers:\\n\\nHernandez-Leal, Pablo, Bilal Kartal, and Matthew E. Taylor. \\\"A survey and critique of multiagent deep reinforcement learning.\\\" Autonomous Agents and Multi-Agent Systems (2019): 1-48.\\n\\nKhadka, Shauharda, Somdeb Majumdar, and Kagan Tumer. \\\"Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination.\\\" arXiv preprint arXiv:1906.07315 (2019).\\n\\nPeng, Peng, et al. \\\"Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games.\\\" arXiv preprint arXiv:1703.10069 (2017).\"}", "{\"title\": \"Followup\", \"comment\": \"That\u2019s correct, the clones on each team have the exact same weights (i.e., homogeneous teams as in [1] (Gupta 2017)). We now make this difference with respect to the heterogeneous teams evaluated in [2] (Liu 2018) clear in the revised paper (Section F). As the primary objective of our paper was to analyze (empirically and theoretically) PSRO\u2019s performance with novel meta-solvers and in >2-player games, the MuJoCo experiments serve as a preliminary evaluation of the scalability of the approach to more complex domains. We completely agree that investigation of heterogeneous teams (as in [2]) is also interesting, particularly from a behavioral diversity perspective, though we leave this for future work.\\n\\nAs promised, we have conducted new experiments on the MuJoCo soccer domain, which demonstrate the effectiveness of the alpha-PSRO training procedure against a self-play training procedure. Specifically, these give insights into performance improvements resulting from PSRO vs. standard population-based training procedures, in addition to the comparisons of PSRO meta-solvers evaluated in the original experiments. Please see Appendix F and Fig. F.11 for these updated results. Indeed our earlier results were for 3v3 teams. We evaluate on 2v2 teams in these new experiments, to bear a closer similarity to [2]. \\n\\nAdditionally, we have now appended new Poker results to the latest revision. These include an evaluation of our training approach against the rectified Nash solver introduced in Balduzzi et al. (ICML, 2019) in two-player games; we have additionally included rectified projected replicator dynamics as a comparison baseline to those experiments. Please see Fig. 3 for the updated experiments. We have likewise updated the text in Section 5 (\u201cEvaluation\u201d) to convey insights into these new results. 
Additionally, due to the rather counterintuitive nature of the rectified Nash experiments, we have appended a new section (Appendix C.5: \\u201cExplanation of Rectified Nash Performance\\u201d) with a walkthrough of the results.\"}", "{\"title\": \"Followup Response to Reviewer 1\", \"comment\": \"As promised, we have conducted new experiments on the MuJoCo soccer domain, which demonstrate the effectiveness of the alpha-PSRO training procedure against a self-play training procedure. Specifically, these give insights on performance improvements resulting from PSRO vs. standard population-based training pipelines, in addition to the comparisons of PSRO meta-solvers evaluated in the original experiments. Please see Appendix F and Fig. F.11 for these updated results. Given the differences between our training procedure and that of Liu et al. (ICLR, 2019) and review period timelines, this was the closest we could come to comparing the differences between PSRO-based opponent sampling and the PBT-style opponent sampling method used in Liu et al. (ICLR, 2019), while keeping all other aspects of our method fixed.\\n\\nAdditionally, we have now appended new Poker results to the latest revision. These include an evaluation of our training approach against the rectified Nash solver introduced in Balduzzi et al. (ICML, 2019) in two player games; we have additionally included rectified projected replicator dynamics as a comparison baseline to those experiments. Please see Fig. 3 for the updated experiments. We have likewise updated the text in Section 5 (\\u201cEvaluation\\u201d) to convey insights into these new results. Additionally, due to the rather counterintuitive nature of the rectified Nash experiments, we have appended a new section (Appendix C.5: \\u201cExplanation of Rectified Nash Performance\\u201d) with a walkthrough of the results.\"}", "{\"title\": \"MuJoCo\", \"comment\": \"When you say team of identical agents (clones), you mean they share the exact same weights? This seems more like the setting in [1] with homogeneous agents?\\nMoreover, didn't [2] have two players in the team (2v2)? It seems here you have 3v3? If so I missed this detail from the paper.\\n\\n[1] Gupta, J. K., Egorov, M., & Kochenderfer, M. (2017, May). Cooperative multi-agent control using deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (pp. 66-83). Springer, Cham.\\n\\n[2] Liu, S., Lever, G., Merel, J., Tunyasuvunakool, S., Heess, N., & Graepel, T. (2018). Emergent Coordination Through Competition.\"}", "{\"title\": \"Re:\", \"comment\": \"I'm impressed with the response in the rebuttal and looking forward to the updates to your experiments. Meanwhile I have updated my score. Given the interest in applying to continuous action problems, would be useful to cite other related works in the literature like:\\n\\nIqbal, S., & Sha, F. (2019, May). Actor-Attention-Critic for Multi-Agent Reinforcement Learning. In International Conference on Machine Learning (pp. 2961-2970).\\n\\nGupta, J. K., Egorov, M., & Kochenderfer, M. (2017, May). Cooperative multi-agent control using deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (pp. 66-83). Springer, Cham.\\n\\nWei, E., Wicke, D., Freelan, D., & Luke, S. (2018, March). Multiagent soft q-learning. 
In 2018 AAAI Spring Symposium Series.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the positive and constructive feedback.\\n\\nThank you for the suggestion regarding the empirical results. We clarify the takeaways from the paper below, and have worked this commentary into the most recent version of the paper:\\nValidation of the feasibility of the PBR oracle in normal form games (NFGs): the asymmetric nature of these games, in combination with the number of players and strategies involved, makes them inherently, and perhaps surprisingly, large in scale. For example, our largest NFG involves 5 players with 30 strategies each, making for >24 million strategy profiles in total, which we note is well beyond the scale of canonical NFG domains. Overall, despite their stateless nature, we consider these NFG experiments as key empirical results, in contrast to the toy domains used in our counterexamples.\", \"alpha_psro_lowering_nashconv_in_2_player_poker_experiments\": \"while Alpha-Rank does not seek to find an approximation of Nash, it nonetheless reduces the NashConv yielding extremely competitive results in comparison to an exact-Nash solver in these instances. This result is both non-obvious and quite important, in the sense of establishing Alpha-PSRO as a convenient means of training agents in >2-player games (where Nash is not readily computable).\", \"mujoco_soccer_experiments\": \"Although noted as preliminary, a key observation can be made from these results: upon completion of training, when computing a new play distribution based on a pool of agents trained via the AlphaRank-based training approach vs. a uniform approach, the former attains essentially all of the \\u2018play probability\\u2019. This is evident in the colorbar on the far right of Appendix F, Fig. F.10, which visualizes the post-training meta-distribution over both training pipelines. Overall, we agree this insight should have been provided more clearly in the text.\\n\\nBased on your feedback, we have updated the revision to integrate changes related to the above discussions. Please let us know if further clarification of any of these points are needed.\\n\\nFinally, on note related to Reviewer 1 and 3\\u2019s feedback, we are investigating several additional experiments with the aim to include them in the revision before the author discussion period closes. (We will post an update as soon as applicable regarding any new results.)\"}", "{\"title\": \"Response to reviewer 2 [Part 2]\", \"comment\": \"MuJoCo soccer (true PSRO vs. cognitive hierarchy):\\n\\nThe training approach we used in the experiment comparing PSRO-Alpharank with PSRO-Uniform corresponds to PSRO, rather than DCH, with each \\u2018PSRO step\\u2019 consisting of 1 billion training steps in the underlying game. Specifically, in the MuJoCo setting evaluated, each team was composed of several clones of a unique RL agent. Meta-game evaluations were conducted by composing a team of identical agents, and facing them off against a team of other identical agents. E.g., in a 3-vs-3 game, a pool of 2 agents {A, B} would yield a 2x2 meta-payoff table with the following 4 entries: (AAA vs. AAA), (AAA vs. BBB), (BBB vs. AAA), (BBB vs. BBB). 
Effectively, the team-vs-team metagame is thus also the agent-vs-agent metagame, thereby enabling us to conduct our analysis on a matrix, instead of a tensor of rank (2 * team size).\\n\\nFor the poker results, per iteration of PSRO, we used 100 simulations per entry of the meta-payoff table. For the soccer experiments, the number of simulations per entry was adaptive, to alleviate the cost of simulating this significantly more complex domain. An average of 10 to 100 simulations were conducted per entry, with fewer simulations used for meta-payoffs with higher certainty. Payoff uncertainties were estimated by computing the standard deviation of a beta distribution with parameters (matches won, matches lost). For the final evaluation matrix reported in Appendix F (Fig. F.10), which was computed after the conclusion of PSRO-based training, 100 simulations were used per entry.\\n\\nWe have updated Sections C.1 and F of the revised paper appendix to include these details.\", \"counterexamples_in_appendix_b3\": \"[Please note that Appendix B.3 is now A.2, due to updates in the revised paper.]\\n\\nThis is a great question. Indeed, the notion of strategy defection underlies both alpharank and correlated equilibria, although in quite different ways; alpharank is motivated by evolutionary dynamics and is built off the notion of unilateral defection from individual strategy profiles, whereas correlated equilibria are defined in terms of defections from distributions over profiles. We expect that these differences (in the manner in which the two solution concepts use the notion of defection) could be used to pinpoint the precise relations between the two, although we leave this for future work.\"}", "{\"title\": \"Response to Reviewer 2 [Part 1]\", \"comment\": \"We thank the reviewer for the detailed feedback. We agree that clarifying these points is useful for reproducibility and also for building reader intuition on the results. Please find our point-by-point responses below, which have been integrated into the latest revision.\\n\\nTractability of PBR-Score and PCS-Score:\\nThis is an important and insightful question regarding the tractability of convergence measures such as PBR- and PCS-Scores. We developed these scores to assess the quality of convergence in our examples, in a manner analogous to NashConv. The computation of these scores is, however, not tractable in general games. Notably, this is also the case for NashConv (as it requires computation of player-wise best responses, which can be problematic even in moderately-sized games). Despite this, these scores remain a useful way to empirically verify the convergence characteristics in small games where they can be tractably computed. \\n\\nWe agree that this is a useful remark for readers interested in implementing these scores, and have revised the paper to do so in Section C.3. Additionally, we now include pseudocode, in the same section, detailing how to compute these scores.\\n\\nIntuition on lack of convergence without novelty-bound oracle:\\nAs the reviewer points out, the lack of convergence without a novelty-bound oracle is precisely related to game intransitivities, i.e. cycles in the game can trap the oracle without the novelty-bound constraint. We show an example of this occurring in the revised paper Appendix B.4 (Figure B.7). 
Specifically, SSCCs may be hidden by \\u201cintermediate\\u201d strategies that, while not receiving as high a payoff as current population-pool members, can actually lead to well-performing strategies outside the population. As these \\u201cintermediate\\u201d strategies are avoided, SSCCs are consequently not found. Note also that this is related to the common problem of action/equilibrium shadowing (See Matignon et al., 2012, \\u201cIndependent reinforcement learners in cooperative Markov games: a survey regarding coordination problems\\u201d). \\n\\nNote that per Reviewer 1 and 2\\u2019s feedback, we have made several improvements to the descriptions of the above example, specifically appending a paragraph following Proposition 4 to better explain this intuition, updating some of the proof text in Section B.4, and relabeling Fig B.7\\u2019s captions. We hope these changes make the intuition clearer.\\n\\nDependence on $\\\\alpha$ parameter:\\nThanks for pointing this out. Indeed, for all alpharank results, we run a sweep over alpha after each PSRO iteration (as recommended in the original alpharank paper). We have updated Section C.1 (Experimental Procedures) of the revised paper to clarify this. Overall, relative to the other modules of the training pipeline, we did not find this to be a computational constraint, especially for the larger (>2-player) games and when using a sparse representation and solver for computing the alpharank distribution.\\n\\nOn a related note, we have also added more details on the hyperparameters used for the projected replicator dynamics meta-solver to Section C.1.\", \"oracle_in_experiments\": \"The oracles used in the experiments were (exact) best response oracles, computed by traversing the game tree. Specifically, we used OpenSpiel (https://github.com/deepmind/open_spiel) as the backend for the experiments using the exact best response oracle. Specifics of the implementation can be found in https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/best_response.py). We\\u2019ve updated Section C.1 (Experimental Procedures) to provide these details. Please let us know if this clarifies things. Many thanks!\", \"br_pbr_compatibility\": \"We thank the reviewer for this question, as the concept of compatibility between objectives benefitted from a clarifying example. In general, BR and PBR optimize different objectives. However, in certain types of games (e.g., win-loss and monotonic games, defined respectively in Propositions 5 & 6), the strategy that maximizes value also maximizes the amount of other strategies beaten. In other words, this makes BR compatible with PBR, in the sense that the BR solution space is a subset of the PBR solution space. \\n\\nTo make these properties clearer for readers, we have added an example comparing BR and PBR in a monotonic game in Figure B.8 of the appendix. In the case of Win-Loss games, PBR and BR optimize exactly the same objective, and therefore have the same solutions.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the detailed feedback, which we address below. We are currently integrating this feedback into the revision.\", \"main_feedback\": \"We are currently running several additional experiments related to those suggested, with the aim to update the paper before the author discussion period closes. 
We will post an update as soon as new results are available.\\n\\nWe completely agree regarding the related works section, and have moved it back to the main body in the latest revision (Sec. 6).\", \"minor_comments\": \"Thanks for pointing out the issue with figure references, which have been corrected in the revision as you specified (please note that the referenced section and figure are now, respectively, Appendix B.4 and Fig. B.7, due to the related works section being moved out of the appendix). We\\u2019ve also updated the subfigure captions to make the correspondence to the counterexample steps clear. Indeed the strategy space in Step 4 should have included (1,1,2) \\u2014 thanks for catching this!\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Review Update (18/11/2019)\\nThank you for the detailed replies and significant updates to the paper in response to all reviewers. You have comfortably addressed all of my concerns and so I have updated my score. I think the paper has improved significantly through the rebuttal stage and therefore the update in my score is also significant to match the far larger contribution to the community that the paper now represents.\\n\\n--\\nThis paper considers alpha-rank as a solution concept for multi-agent reinforcement learning with a focus on its use as a meta-solver for PSRO. Based on theoretical findings showing shortcomings of using the typical best response oracle, the paper finds a necessity for a new response oracle and proposes preference-based best response.\\n\\nThe theoretical contributions help further the community's understanding of alpha-rank but the method remains somewhat disconnected from other recent related literature. Therefore, I think the paper's subsequent impact could be significantly improved by making more direct comparison to recent results. Specifically:\\n\\n1) In the 2-player games comparisons are currently made to PRD based on its use in Lanctot et al (NeurIPS, 2017) instead of the more recent PSRO Rectified Nash approach proposed by Balduzzi et al. (ICML, 2019). Please make this direct comparison or justify its exclusion.\\n\\n2) The preliminary MuJoCo soccer results in Appendix G significantly increase the relevance of this work to the ICLR community given the prior publication of this environment at ICLR 2019. However, the results are currently incomplete. In particular, to again strengthen the link to existing work, comparison of the method proposed in this paper to the agents trained by population based training in Liu et al. (ICLR, 2019) would be a more informative comparison than the preliminary results presented in comparison to the na\\u00efve uniform meta-solver.\\n\\n3) Appendix A includes a brief literature survey. This is important material to position the paper in relation to existing work, particularly for readers not familiar with the area that will rely on this to understand the paper as a self contained reference. Please move this section into the main body of the paper and expand to fully credit the work this paper builds upon.\", \"minor_comments\": \"In Appendix C.4 should the reference to Figure C.7 be to Figure C.7a specifically? and the reference to Figure C. 7a be to Figure C. 7b-f inclusive? 
If so, I believe the available joint strategies in step 4 is missing (1,1,2) as shown in Figure C. 7f.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper extends the original PSRO paper to use an $\\\\alpha$-Rank based metasolver instead of the projected replicator dynamics and Nash equilibria based metasolvers in the original. To this end, the paper modifies the original idea of Best-Response (BR) oracle since it can ignore some strategies in $\\\\alpha$-Rank defining SSCC to introduce the idea of _preference-based_ Best-Response (PBR) oracle. The need for a different oracle is well justified especially with the visualization in the Appendix. The main contributions that the paper seems to be going for is a theoretical analysis of $\\\\alpha$-Rank based PSRO compared to standard PSRO. From the PBR's description (especially in Sec 4.3) it seems the paper is intereseted in expanding the population with novel agents rather than finding the \\\"best\\\" single agent which is not well defined for complex games with intransitivities. Nevertheless, it seems that BR is mostly compatible with PBR for symmetric zero-sum two-player games.\\nThe paper performs empirical experiments on different versions of poker. First set of experiments compare BR and PBR with $\\\\alpha$-Rank based metasolver on random games and finds that PBR does better than BR at population expansion as defined. The second set of experiments compare the metasolvers. $\\\\alpha$-Rank performs similarly to Nash where applicable. Moreover it's faster than Uniform (fictitious self-play) on Kuhn. Then the paper tacks on the MuJoCo soccer experiment as a teaser for ICLR crowd.\\n\\nOverall the paper is quite interesting from the perspective of multiagent learning and I would lean towards accepting. However the paper needs to clarify a lot of details to have any chance of being reproducible.\\n\\n** Clarifications needed:\\n\\n- Tractability of PBR-Score and PCS-Score\\nIt's unclear how tractable these are. Moreover these were only reported for random games. What did these scores look like for the Poker games? Could you clarify how exactly these were computed?\\n\\n- It's somewhat unclear what the lack of convergence without novelty-bound oracle implies. Does this have to do with intransitivities in the game?\\n\\n- Dependence of $\\\\alpha$?\\nThe original $\\\\alpha$-Rank paper said a lot about the importance of choosing the right value for $\\\\alpha$. How were these chosen? Do you do the sweep after every iteration of PSRO?\\n\\n- Oracle in experiments?\\nThe paper fails to mention the details about the Oracles being used in the experiments. They weren't RL oracles but more details would be useful. \\n\\n- BR not compatible with PBR, albeit not the other way around, meaning one of the solutions you get from PBR might be BR, but can we say which one?\\n\\n- For MuJoCo soccer was it true PSRO or cognitive hierarchy. In general, the original PSRO paper was partly talking about the scalable approach via DCH. This paper doesn't mention that at all. So were the MuJoCo experiments with plain PSRO? What was the exact protocol there? From the appendix it's unclear how the team-vs-team meta game works with individual RL agents. 
Moreover how are the meta-game evaluation matrices computed in general? How many samples were needed for the Poker games and MuJoCo soccer?\\n\\n- The counterexamples in Appendix B3 are quite interesting. Do you have any hypotheses about the disjoint support from games' correlated equilibria?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies \\u03b1-Rank, a scalable alternative to Nash equilibrium, across a number of areas. Specifically the paper establishes connections between Nash and \\u03b1-Rank in specific instances, presents a novel construction of best response that guarantees convergence to the \\u03b1-Rank in several games, and demonstrates empirical results in poker and soccer games.\\n\\nThe paper is well-written and well-argued. Even without a deep understanding of the subject I was able to follow along across the examples and empirical results. In particular, it was good to see the authors clearly lay out where their novel approach would work and where it would not and to be able to identify why in both cases. \\n\\nMy only real concern stems from the empirical results compared to some of the claims made early in the paper. Given the strength of the claims comparing the authors approach and prior approaches, it seems that the empirical results are somewhat weak. The authors make sure to put these results into context, but given the clarity of the results in the toy domains I would have expected clearer takeaways from the empirical results as well.\", \"edit\": \"The authors greatly improved the paper, addressing all major reviewer concerns.\"}" ] }
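The BR/PBR distinction debated throughout the record above can be made concrete with a small sketch. The following is a hedged illustration, not the authors' implementation: the payoff matrix `M`, the meta-distribution `sigma`, and the helper names are assumptions chosen for exposition, and it only captures the simple case where PBR maximizes the sigma-mass of population members beaten.

```python
import numpy as np

# Toy symmetric zero-sum payoff matrix: M[i, j] > 0 means strategy i beats j.
# Values are illustrative only.
M = np.array([[ 0.0,  1.0, -1.0,  0.5],
              [-1.0,  0.0,  1.0, -0.5],
              [ 1.0, -1.0,  0.0, -0.5],
              [-0.5,  0.5,  0.5,  0.0]])

sigma = np.array([0.4, 0.3, 0.2, 0.1])  # meta-solver distribution over the population

def best_response(M, sigma):
    # Standard BR oracle: maximize expected payoff against sigma.
    return int(np.argmax(M @ sigma))

def preference_based_best_response(M, sigma):
    # PBR-style objective: maximize the sigma-weighted amount of
    # population members strictly beaten.
    return int(np.argmax((M > 0).astype(float) @ sigma))

print(best_response(M, sigma), preference_based_best_response(M, sigma))
```

For win-loss payoffs the rebuttal notes that BR and PBR optimize the same objective; the sketch lets one check on small matrices how the two argmax sets relate, and how they can diverge for general payoffs.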
ByeqyxBKvS
Quantum Semi-Supervised Kernel Learning
[ "Seyran Saeedi", "Aliakbar Panahi", "Tom Arodz" ]
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.
[ "quantum machine learning", "semi-supervised learning", "support vector machines" ]
Reject
https://openreview.net/pdf?id=ByeqyxBKvS
https://openreview.net/forum?id=ByeqyxBKvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "NT7cfKvpzjj", "kM1GckIAm5", "HkxAav4nor", "rkehDQNhjS", "SJeQlM42iS", "BJxnkesP5H", "rJedR_MxqS", "HJxO2en0tH" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1650637549175, 1576798739764, 1573828549801, 1573827427648, 1573827051274, 1572478947700, 1571985616322, 1571893424436 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper2071/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2071/Authors" ], [ "ICLR.cc/2020/Conference/Paper2071/Authors" ], [ "ICLR.cc/2020/Conference/Paper2071/Authors" ], [ "ICLR.cc/2020/Conference/Paper2071/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2071/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2071/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Published version\", \"comment\": \"S. Saeedi, A. Panahi T. Arodz. 2021. Quantum semi-supervised kernel learning.\\nQuantum Machine Intelligence, 3(2):1-11.\", \"https\": \"//link.springer.com/article/10.1007/s42484-021-00053-x\", \"doi\": \"10.1007/s42484-021-00053-x\"}", "{\"decision\": \"Reject\", \"comment\": \"Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal with one reviewer hesitating about the appropriateness of this submission to ML venues. The reviewers have raised a number of criticisms such as an incremental nature of the paper (HHL and LMR algorithms) and the main contributions lying more within the field of quantum computing than ML. The paper was discussed with reviewers, buddy AC and chairs. On balance, it was concluded that this paper is minimally below the acceptance threshold. We encourage authors to consider all criticism, improve the paper and resubmit to another venue as there is some merit to the proposed idea.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your insightful comment!\\n\\nIndeed, quantum machine learning papers traditionally emerged from the physics community. However, we can currently observe beginnings of a trend of publishing QML papers at traditional ML conferences (the ICML paper we mentioned, a NeurIPS\\u201919 paper on q-means by Kerenidis et al.), indicating there is an increasing emerging audience, providing potential for shifting the focus in QML to more advanced methods from the classical ML repertoire. We hope that more machine learning experts take note of recent developments in quantum computing that focus on continuous problems instead of discrete problems like search algorithms or factoring. The techniques we use in our paper are based on one such development, the introduction of quantum linear algebra tools involving density matrices. \\n\\nTo make our paper more accessible to the machine learning community, we have expanded \\\"Quantum Linear Systems of Equations\\\" paragraph to include explanation of basics of HHL using classical linear algebra notation and the \\\"LMR Technique for Density Operator Exponentiation\\\" paragraph to provide more details on this fundamental technique in quantum linear algebra. 
We have also expanded Section 3.2 to provide more intuition behind the proposed approach for solving the semi-supervised SVM using the generalized LMR technique.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions!\\n\\nWe have eliminated introductory material on RKHS, and instead expanded:\\n- Section 2.2 paragraph \\\"Quantum Linear Systems of Equations\\\" to offer more insight into the HHL algorithm that underpins the original quantum SVM and our semi-supervised quantum SVM\\n- Section 2.2 paragraph \\\"LMR Technique for Density Operator Exponentiation\\\" with the method that is used in HHL to work with density matrices such as the kernel matrix\\n- Section 3.2, the main contribution of the manuscript.\\nWe aimed to make these more accessible to the machine learning community.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the review and comments!\\n\\n>> Minor issues: Can any experimental study or applications be demonstrated, or only theoretical computational complexity can be compared? \\n\\nCurrently, in the absence of large-scale universal quantum computers, quantum speedups for quantum machine learning algorithms are distinguished using complexity theory measures. While there are simulators such as Cirq and Qiskit, we have not seen them being used in quantum machine learning papers.\\n\\nBased on our knowledge, we introduced the first quantum semi-supervised machine learning algorithm, offering computational complexity equivalent to the quantum LS-SVM algorithm. The quantum LS-SVM offers exponential speedup $O(\\\\log mp)$ over the classical time complexity for solving SVM as a quadratic problem, which requires time $O(\\\\log(\\\\epsilon^{-1})\\\\,\\\\mathrm{poly}(p,m))$, where $\\\\epsilon$ is the desired error.\\n\\n>> 2. In Section 1.1, L is defined as L = G_I G^T_I. In this definition, whether and how the edge weights are considered? Please clarify. \\n\\nCurrently, we have not considered edge weights. However, the approach we propose can be extended in a straightforward way by making the incidence matrix G_I contain nonnegative weights instead of 0/1 values, and calculating L in a similar way as calculating the kernel matrix over samples, using partial trace.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper develops a quantum algorithm for kernel-based support vector machines working in a semi-supervised learning setting. The motivation is to utilise the significant advantage of quantum computation to train machine learning models on large-scale datasets efficiently. This paper reviews the existing work on using quantum computing for least-squares SVM (via solving quantum linear systems of equations) and then extends it to deal with kernel SVM in a semi-supervised setting.\", \"strengths\": \"This is an interesting emerging research topic that has its significance. Also, this paper provides a nice tutorial on the key ideas of quantum machine learning and provides detailed derivations and analysis on the proposed algorithm.\", \"weaknesses\": \"The novelty of this work seems to be incremental. It largely extends the existing algorithms such as HHL and LMR.\", \"minor_issues\": \"1. 
Can any experimental study or applications be demonstrated, or only theoretical computational complexity can be compared? \n2. In Section 1.1, L is defined as L = G_I G^T_I. In this definition, whether and how the edge weights are considered? Please clarify.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a quantum computer-based algorithm for semi-supervised least-squares kernel SVM. This work builds upon LS-SVM of Rebentrost et al (2014b) which developed a quantum algorithm for the supervised version of the problem. While the main selling point of quantum LS-SVM is that it scales logarithmically with data size, supervised algorithms shall not fully enjoy logarithmic scaling unless the cost for collecting labeled data is also logarithmic, which is unlikely. Therefore, the semi-supervised setting is certainly appealing. Technically, there are two main contributions. The first is the method of providing the Laplacian as an input to the quantum computer. The second contribution, which is about the computation of the matrix inverse (K + KLK)^{-1}, is a bit more technical, and could be considered as the main contribution of the paper.\\n\\nMy main concern about the paper is its organization. The paper provides a very gentle introduction to both semi-supervised LS-SVM and quantum LS-SVM. While this helps readers to be equipped with relevant background, it is at the cost of having less space for the main contribution in Section 3.2. I would suggest removing the content on page 2; most results about kernel methods are not really relevant to this paper. For a machine learning conference paper, one can safely start with a half-page intro on page 3. Some background in quantum computing offered on pages 3, 4, 5 is quite nice, but for a conference paper, I think this is an overkill. I recommend providing the very minimal content needed to discuss Section 3.2, and then using more space to discuss the idea in 3.2 better. Specifically, the generalized LMR technique and Hermitian polynomials in Kimmel et al. (2017) could be discussed in more detail.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to extend a quantum-computing based solution of the least-squares support-vector-machine to include the use of unlabeled samples. The formulation is analogous to the classical-computing case, in which semi-supervised learning introduces an additional term in the system of equations, which the authors show how to compute in the quantum setting without degrading big-O complexity.\\n\\nI would lean toward rejecting this paper, primarily on account of how the contribution relates to the publication venue. The primary contribution lies in the procedure for preparing and propagating the quantum mechanical states needed to compute on the additional term. Although the application is machine learning, the technique itself is still rather far from this topic and would not appear to be of general benefit to conference-goers outside of quantum computing. 
The overwhelming majority of quantum machine learning references in this paper appear in physics journals (all but one, which was ICML 2019). Most of this paper is background material, which yet remains inadequate to convey insights into design decisions in the details of their main contribution, the derivations in section 3.2. (I have a background in physics but not quantum computing.) \\n\\nPerhaps a paper organization more amenable to this venue would be to shift some of the lengthier equations into an appendix and use the space of the paper to discuss a more conceptual and contextual understanding of why this technique is desirable, at each step, relative to other possible quantum techniques. For example, Figure 1 is not explained, and is not decipherable to someone outside the field, so doesn't itself add to the story.\\n\\nCould be really good work, but the presentation doesn't quite come across.\", \"edit\": \"See comments to do with paper revision, which significantly improved the presentation.\"}" ] }
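For ML readers, the system that the quantum algorithm in this record accelerates has a compact classical form, and a small sketch may clarify what the (K + KLK)^{-1} discussion above refers to. This is a hedged illustration under stated assumptions: the chain graph, the oriented +1/-1 incidence matrix, the RBF bandwidth, and the regularization constants `gamma` and `lam` are all choices made here for exposition, and the LS-SVM bias term is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: m samples, p features. All sizes and constants here are
# illustrative assumptions, not values from the paper.
m, p, gamma, lam = 8, 3, 10.0, 0.5
X = rng.normal(size=(m, p))
y = np.where(rng.normal(size=m) >= 0, 1.0, -1.0)

# RBF kernel matrix K over the samples.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

# Graph Laplacian L = G_I @ G_I.T from an oriented incidence matrix over a
# toy chain graph; the +1/-1 orientation is the standard construction and an
# assumption here. Nonnegative edge weights could replace the unit entries,
# as the authors suggest in their reply about weighted graphs.
edges = [(i, i + 1) for i in range(m - 1)]
G_I = np.zeros((m, len(edges)))
for e, (i, j) in enumerate(edges):
    G_I[i, e], G_I[j, e] = 1.0, -1.0
L = G_I @ G_I.T

# Classical counterpart of the system the quantum algorithm targets: the
# review pins down the (K + KLK)^{-1} computation; the lam and gamma terms
# are simplified stand-ins for the paper's exact regularization.
A = K + lam * K @ L @ K + np.eye(m) / gamma
alpha = np.linalg.solve(A, y)
print(alpha)
```

Classically this solve costs poly(m, p); the thread's claimed quantum speedup is O(log mp) for preparing an amplitude-encoded solution, at the price of quantum data access.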
S1et1lrtwr
Unsupervised Meta-Learning for Reinforcement Learning
[ "Abhishek Gupta", "Benjamin Eysenbach", "Chelsea Finn", "Sergey Levine" ]
Meta-learning algorithms learn to acquire new tasks more quickly from past experience. In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks. The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach. Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can in principle be used to train optimal meta-learners. Our experimental results indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and significantly exceeds the performance of learning from scratch.
[ "Meta-Learning", "Reinforcement Learning" ]
Reject
https://openreview.net/pdf?id=S1et1lrtwr
https://openreview.net/forum?id=S1et1lrtwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Mpb1Rjjd7H", "ryeOpwn_jB", "rkgbwD3_jB", "ryellD2OiH", "BylsRL2OoB", "rkeDSjvI5r", "S1lqr4oRFr", "rkgUHZwTKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739737, 1573599168306, 1573599065309, 1573598951767, 1573598931152, 1572399935310, 1571890241853, 1571807549862 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2070/Authors" ], [ "ICLR.cc/2020/Conference/Paper2070/Authors" ], [ "ICLR.cc/2020/Conference/Paper2070/Authors" ], [ "ICLR.cc/2020/Conference/Paper2070/Authors" ], [ "ICLR.cc/2020/Conference/Paper2070/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2070/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2070/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper discusses the relevant topic of unsupervised meta-learning in an RL setting. The topic is an interesting one, but the writing and motivation could be much clearer. I advise the authors to make a few more iterations on the paper taking into account the reviewers' comments and then resubmit to a different venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to R2\", \"comment\": \"We thank the reviewer for their feedback and suggestions! We have added clarifications to the paper based on the suggestions and questions (refer to Section 3.4), as well as added additional comparisons (Section 4.2, Fig 3). Please find detailed comments below:\\n\\n\\u201cWhy trajectory matching is considered as more general? \\u201c\\n-> While it is true that whenever a policy matches a trajectory, it reaches the goal state, the trajectory matching case is more general because, while trajectory matching can represent different goal-reaching tasks, it can also represent tasks that are not simply goal reaching, such as reaching a goal while avoiding a dangerous region or reaching a goal in a particular way. We have added this discussion to Section 3.4\\n\\n\\u201cwhy is it necessary to introduce meta-learning approach? Why not simply learn universal value functions?\\u201d\\n->This is a very interesting question, however it is not specific to the paradigm of unsupervised meta-learning, the same can be asked about any meta-learning algorithm. Using a meta-learner has two major advantages: first, it allows us to operate in the cases where the exact g is not specified, but the task is simply specified through the reward, which is natural in many scenarios. In these cases, a meta-learner would also acquire a more optimal exploration strategy than simply trying different g like a learned Q(s,a,g) would need to do. Second, the meta-learner can optimize an arbitrary reward function and not simply the goal reaching reward which Q(s,a,g) would be restricted to. A similar discussion is added to Section 3.4.\\n\\n\\u201cThe experimental results are not very persuasive. What is the VPG algorithm used?\\u201d\\n-> The experimental comparisons are with other algorithms that learn from scratch to ensure a fair comparison, since the amount of reward supervision is the same as our algorithm. We chose VPG (also called REINFORCE (Williams 92)) to ensure a fair comparison because our meta-learner uses REINFORCE to learn. 
The choice of specific RL algorithm in the inner loop is orthogonal to the benefits of UMRL; we could replace the inner loop with a more powerful RL algorithm to get similar benefits. We have also included a comparison with another RL algorithm (TRPO) for learning from scratch, which also performs worse than UMRL in Fig 3. We have also added an additional comparison in Fig 3 with finetuning purely from a DIAYN initialization, without any meta-learning involved.\"}", "{\"title\": \"Response to R3\", \"comment\": \"We thank the reviewer for their comments and feedback! We would like to clarify the aim of the proposed method: given a particular environment, automatically learn an RL algorithm that can quickly solve tasks in this environment. We have added an explicit definition of the problem statement in Section 3 (paragraph 1) to clarify. Standard meta-learning does not immediately solve this problem, as meta-learning requires a hand-designed task distribution. Our key observation is that, rather than using a hand-designed task distribution, we can automatically acquire this task distribution using an unsupervised skill discovery algorithm (DIAYN). Theoretically, we prove that this method minimizes worst-case regret on new tasks provided at test time.\\n\\n\\u201cIt would benefit a lot if you can clearly define the original meta-learning procedure and then compare that with the one proposed in this paper.\\u201d\\n-> We have attempted to clarify this in Section 3 and Section 3.2. The key difference is the lack of a known task distribution in our case as opposed to a known task distribution in the standard meta-learning case. \\n\\n\\u201cDefine \\u201chand-specified\\u201d distribution\\u201d\\n-> This means that the reward functions for tasks and the actual distribution of tasks themselves are specified beforehand by the human operator, and the training and test sets are both drawn from this distribution. We have included this discussion in Section 1 of the paper.\\n\\n\\u201cI am not very sure by what you mean for \\u201ctask-proposal procedure\\u201d, \\u201cgoal-proposal procedure\\u201d\\n-> In order to do meta-learning, you\\u2019d need a task distribution to sample from. If the task distribution is not hand-specified, as in our case, it needs to be proposed by the agent itself. So the procedure (in our case a mutual information style procedure like DIAYN) that generates tasks without supervision, which can then be used for meta-learning, is the task-proposal procedure. The goal-proposal procedure is a special case of the task-proposal procedure for the case of goal-reaching style tasks. We have included this discussion in Section 3.1 of the updated paper.\\n\\n\\u201cIn the first paragraph of the intro: what do you mean by \\u201cspecifying a task distribution is tedious\\u201d, is specifying p(z) also \\u201ctedious\\u201d\\n-> In standard meta-learning, the task distribution is specified by manually crafting a large number of tasks. This involves manually writing down reward functions. We believe this is tedious and time-consuming. In automated skill discovery mechanisms, the prior p(z) is typically uniform, and is therefore trivial to define, as now discussed in Section 3.1.\\n\\n\\u201c2nd paragraph of intro: \\u201cautomate the meta-training process by removing the need for hand-designed meta-training tasks\\u201d. 
Again, why p(z) is not \u201chand-designed\u201d\n-> In most unsupervised skill discovery algorithms, including the DIAYN algorithm used in our method, p(z) is simply uniform. Hence, while it is chosen manually, it is trivial to \\\"design,\\\" analogously to the prior in a latent variable model.\n\n\u201cWhat do you mean by \u201cacquire reinforcement learning procedures\u201d?\u201d\n-> This is following the paradigm of meta-reinforcement learning. A meta-reinforcement learning algorithm learns how to learn: it uses a set of meta-training tasks to learn a learning function f, which can then learn a new task. We refer to this learned learning function f as an \\\"acquired reinforcement learning procedure,\\\" following prior work, such as MAML (Finn et al) and RL2 (Duan et al). We have included this in Section 3.1.\n\n\u201cWhy compare with the original meta-RL algorithm on p(z) is not fair?\u201d\n-> p(z) does not define a task distribution by itself, in the same way that the prior in a latent variable model does not by itself define a likelihood. A task is given by a reward function, i.e. r(s, a, z), while p(z) is just a uniform prior on a latent variable. It needs to have rewards defined in order to do meta-RL.\n\n\u201cThe controlled-MDP setting is actually much easier\u201d:\n-> In the absence of a reward function, it is not clear what rewards you should be optimizing your policy on. Only once a CMP is combined with a reward do we get an MDP which can be solved with an RL algorithm.\n\nWe have added definitions for the terms requested to the paper and also below:\n\n\u201cmeta-training time\u201d: the process of learning the fast RL algorithm via a meta-RL algorithm such as MAML or RL2. Added to Section 3.1.\n\n\u201cNo-free-lunch theorem\u201d: This states \u201cAll algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions.\u201d Please refer to Wolpert et al for more detail. \n\n\u201cRegret\u201d: As described by the optimal meta-learner in Section 3.2 (Equation 1), the regret of the policy is indeed the hitting time, because once it finds the right goal/trajectory, the optimal meta-learner would simply keep going to the goal or replicating the trajectory. By saying that a policy has low regret, we mean that the (learned) learning algorithm f has low regret. We have corrected this in the text.\"}", "{\"title\": \"Response to R1 (2/2)\", \"comment\": \"Response to Comments:\n\n1. Regret is simply a standard metric for measuring learning speed. Equation 1 considers meta-learning in settings where the task distribution is known. The key contribution of our paper is to consider the setting where the task distribution is not known. Equation 4 introduces the metric we consider in this setting: regret under the worst-case task distribution.\n2. This is simply a definition for our didactic goal-reaching case. We have clarified the wording in Section 3.3 to indicate that this is a definition. \n3. We maximize the mutual information w.r.t. the joint distribution over latent z and terminal state s_T. We have rewritten Section 3.3, including Lemma 1, to clarify this point.\n4. Yes. We have added a sentence after this definition to clarify.\n5. In the case where the prior p(z) is uniform and the marginal p(s_T) is uniform (i.e., when the mutual information is maximized), the two reward functions are equivalent, up to an additive constant: log p(z | s) = log p(s | z) + log p(z) - log p(s).\n6. 
A Markovian reward function is one that depends only on the current state and action. Reward functions that depend on (say) the action you took 5 steps prior are not Markovian. Not all reward functions are Markovian. The inequality on page 6 says: a policy that does well on all reward functions is guaranteed to do at least as well on the subset of reward functions which are Markovian.\"}", "{\"title\": \"Response to R1 (1/2)\", \"comment\": \"We thank the reviewer for their feedback and suggestions! Below, we emphasize that the setting we consider is actually quite different from that considered in DIAYN. We have updated the paper to clarify the questions raised above, including a new empirical comparison to DIAYN. Please let us know if this addresses your concerns, or if there are further issues you would like us to attempt to fix.\n\n\u201c*Novelty*\u201d\n-> We are not proposing a new unsupervised skill discovery scheme or a new meta-learning algorithm. Rather, we argue that previously proposed MI-based skill discovery schemes (e.g., DIAYN) can both practically and theoretically allow us to apply meta-learning to tasks without manually specifying a task distribution. To clarify the contribution of the work, we have added a new comparison with DIAYN in Fig 3. The scheme in DIAYN does not learn a fast learning algorithm that solves new tasks from reward signals; it simply provides a good set of skills that cover the state space \u2014 how to select which of these skills to then use to solve a new task is a separate problem. The actual reinforcement learning procedure to learn a new task from this initialization can still be very slow (see Fig 3). In contrast, meta-learning algorithms can learn to learn new tasks very quickly, but require manually provided task distributions to meta-train on. We argue that using MI-based skill discovery methods like DIAYN, together with meta-learning, addresses the shortcomings of both methods, allowing for fast adaptation without requiring manually provided task distributions. We believe that this observation is novel and relevant.\n\n\u201c*Technical contributions*\u201d\n-> Sections 3.1 - 3.4 aim to justify why DIAYN \u2014 or any other MI-based skill discovery method \u2014 is a reasonable choice for an unsupervised meta-learning task proposal mechanism. It doesn\u2019t try to justify why DIAYN works, but merely why using that objective provides a meta-learner which has the lowest worst-case regret. We would emphasize that Algorithm 1 does in fact implement an approximation to the principled procedure outlined in Section 3.4, with the following approximation: DIAYN considers states along a trajectory to be conditionally independent and treats them as a bag of states for discrimination, rather than discriminating on entire trajectories. As always, there are a number of approximations that are needed to actually instantiate the theoretically principled method, but we do not believe that the approximations we employ in this regard are especially egregious, and this has also been discussed in prior work (Variational Option Discovery Algorithms, Achiam et al., 2018). However, if there are specific inconsistencies that you believe would cause major issues, we would be happy to discuss this!\n\n\u201cThe *writing* can be improved a lot\u201d\n-> We have rewritten much of the analysis (Section 3), as well as sentences throughout the rest of the paper, to clarify the writing. 
If there are specific points that would benefit from further clarification, please let us know! We would be happy to make whatever modifications further clarify the exposition.\\n\\n\\u201cThe key ingredient is missing -- the learning procedure f, which was mentioned in eq.(1) and Algorithm 1, but the details are never specified. It is impossible to reproduce the algorithm based on the description in the paper. \\u201c\\n-> We have clarified Section 3.6 to describe the resulting learning procedure. The learning procedure that is returned by MAML is defined by running gradient descent, starting with the initial parameters found by MAML (See \\u201cMeta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm\\u201d (Finn & Levine) for more discussion). In short, the proposed algorithm uses DIAYN to generate a set of self-proposed tasks in an environment, uses the discriminator from DIAYN to provide a reward function to MAML, and then returns a learning procedure f defined by running gradient descent starting initialized at the weights found by MAML. We have also added additional experimental details to Appendix C. \\n\\n\\u201cThe same *experiments* are conducted in DIAYN (Eysenbach et al., 2018).\\u201d\\n-> The experiments in this work are different from those conducted in DIAYN. The plots in the main paper (Fig 3 and Fig 4) consider a meta-learning setting, a setting not considered in DIAYN. While DIAYN does indeed learn a good set of initial skills, subsequent reinforcement learning can still be quite slow. We have added a comparison (Fig 3) to simply initializing with DIAYN and running finetuning as described in Eysenbach et al, and we find that this performs quite poorly on our test-time tasks.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"# Summary of the paper:\\n\\nThis paper formulates conceptually the unsupervised meta-RL problem (to learn a policy without access to any reward function) as a minimization of the expected regret over tasks, and instantiate an algorithm based on DIAYN (Eysenbach et al., 2018) and MAML Finn et al. (2017a). \\n\\n# Brief explanation of my rating:\\n\\n1. *Novelty*: Mutual information based unsupervised RL was proposed by DIAYN (Eysenbach et al., 2018). Meta-model was also considered by DIAYN (Eysenbach et al., 2018), in which they call it \\\"skill\\\". \\n2. *Technical contributions*: Sec 3.1-3.4 try to justify DIAYN. However, the reasoning is not sufficiently rigorous and the proposed Algorithm 1 is inconsistent with the theory built up in these sections. \\n3. The *writing* can be improved a lot -- it's not easy to guess what the author was trying to say until I read DIAYN (Eysenbach et al., 2018). \\n4. The key ingredient is missing -- the learning procedure f, which was mentioned in eq.(1) and Algorithm 1, but the details are never specified. It is impossible to reproduce the algorithm based on the description in the paper. \\n4. The same *experiments* are conducted in DIAYN (Eysenbach et al., 2018). I am still confused on why we suddenly should use meta-RL. \\n\\n# Comments:\\n\\n1. Why we should consider regret? What is the relation between (1) & (4)? It's quite strange you start with (1) but turn to something else, i.e., (4), quickly. \\n2. 
\\\"This policy induces a distribution over terminal states, p(s_T | z)\\\" Why? \\n3. What are you optimizing over in (5)? The statement in Lemma 2 says \\\"I(s_T; z) maximized by a task distribution p(s_g)\\\". However, you are only able to control p(s_T | z), not the marginal distribution p(s_T). The statement of Lemma should be made more clear.\\n4. The definition of the reward function: r_z(s_T, a_T) = log p(S_T | z), which is independent of the action a_T? \\n5. In Algorithm 1, the reward reuse the definition of DIAYN -- log D(z | s), but which is different from log p(S_T | z). Could you elaborate this? \\n6. What is the definition of Markovian reward? Why does the inequality on page 6 hold?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: this paper claims to design an unsupervised meta-learning algorithm that does automatically design a task distribution for the target task. The conceptual idea is to propose a task based on mutual information and to train the optimal meta-learner. They also use experiments to show the effectiveness of the proposed approach.\", \"overall_comments\": \"I would think this paper requires a major revision. It is written in a very confusing way. Many terms are directly used without a definition. The problem is also not clearly defined. I have tried to understand everything, but I have to give up in Section 3. Overall, I do not think this paper is ready for publication.\", \"detailed_comments\": \"\\u2022 It would benefit a lot if you can clearly define the original meta-learning procedure and then compare that with the one proposed in this paper.\\n\\t\\u2022 Define \\u201dhand-specified\\u201d distribution. This word does not make sense if you claim this is the difference between the meta-learning procedure proposed in this paper and the original meta-learning algorithm. In this paper, you used p(z) to specify a task. I would think p(z) is also \\u201chand-specified\\u201d.\\n\\t\\u2022 I am not very sure by what you mean for \\u201ctask-proposal procedure\\u201d, \\u201cgoal-proposal procedure\\u201d\\n\\t\\u2022 In the first paragraph of the intro: what do you mean by \\u201cspecifying a task distribution is tedious\\u201d, is specifying p(z) also \\u201ctedious\\u201d\\n\\t\\u2022 2nd paragraph of intro: \\u201cautomate the meta-training process by removing the need for hand-designed meta-training tasks\\u201d. Again, why p(z) is not \\u201chand-designed\\u201d\\n\\t\\u2022 Why compare with the original meta-RL algorithm on p(z) is not fair? \\n\\t\\u2022 What do you mean by \\u201cacquire reinforcement learning procedures\\u201d?\\n\\t\\u2022 \\u201cEnvironment\\u201d, \\u201ctask\\u201d are not clear when they first appear\\n\\t\\u2022 The word \\u201clearn\\u201d is used everywhere, and is confusing. E.g. what do you mean by \\u201clearn new tasks\\u201d, \\u201clearn a learning algorithm f\\u201d, \\u201clearn an optimal policy\\u201d, \\u201clearn a task distribution\\u201d \\u2026\\n\\t\\u2022 \\u201cReward functions induced by p(z) and r_z(s,a)\\u201d: isn\\u2019t r_z(s,a) already a reward function? 
What is \\u201cinduced\\u201d?\\n\\t\\u2022 What is \\u201cmeta-training\\u201d time?\\n\\t\\u2022 What is \\u201cno free lunch theorem\\u201d?\\n\\t\\u2022 The \\u201ccontrolled-MDP\\u201d setting is actually much easier: perhaps you just need to learn the probability distribution. Then for every r_z, we just solve it. Why not compare with this simple algorithm?\\n\\t\\u2022 \\u201cRegret\\u201d is not defined when it first appears\\n\\t\\u2022 \\u201cThe task distribution is defined by a latent variable z and a reward function r_z\\u201d: why \\u201cdistribution\\u201d is defined by an r.v.?\\n\\t\\u2022 In (2), \\u201cregret\\u201d should be the (cost of the algorithm) - (the total cost of an optimal policy) \\u2014 it is not hitting time\\n\\t\\u2022 (3) is confusing, no derivation is given\\n\\t\\u2022 Based on the usual definition of \\u201cregret\\u201d, how can a \\u201cpolicy\\u201d have low regret? Any fixed \\u201cpolicy\\u201d would have linear regret \\u2026\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper develops a meta-learning approach for improving sample efficiency of learning different tasks in the same environment. The author formulates the meta goal as minimizing the expected regret under the worst case, which happens when all the tasks are uniformly distributed. The paper introduces two types of tasks: goal-reaching task and a more general trajectory matching task. Then the author introduces a meta-learning algorithm to minimize the regret by learning the reward function under different sampled tasks. The paper is interesting. Below are my questions/concerns.\\n \\n1. Why trajectory matching is considered as more general? Intuitively, trajectory matching is more restricted in that whenever an agent can match the optimal trajectory, it should also reach the goal state. \\n\\n2. The theoretical results (lemma 2, 3) actually indicates that the previous work universal value function approximator can optimize the proposed meta learning objective with theoretical convergence guarantee in tabular case by learning the value function Q(s, g, a) where s is a state, g is goal state, a is an action (as long as s and g are visited infinitely often) . As a result, why is it necessary to introduce meta-learning approach? Why not simply learn universal value functions? \\n\\n3. The experimental results are not very persuasive. What is the VPG algorithm used? And if you run the algorithm longer, is it finally worse than learning from scratch? Option learning methods/universal value function can be added as baselines.\"}" ] }
SygKyeHKDH
Making Efficient Use of Demonstrations to Solve Hard Exploration Problems
[ "Caglar Gulcehre", "Tom Le Paine", "Bobak Shahriari", "Misha Denil", "Matt Hoffman", "Hubert Soyer", "Richard Tanburn", "Steven Kapturowski", "Neil Rabinowitz", "Duncan Williams", "Gabriel Barth-Maron", "Ziyu Wang", "Nando de Freitas", "Worlds Team" ]
This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.
[ "imitation learning", "deep learning", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=SygKyeHKDH
https://openreview.net/forum?id=SygKyeHKDH
ICLR.cc/2020/Conference
2020
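The review thread below repeatedly refers to R2D3's demo ratio — the probability that a replayed batch element comes from the human-demonstration buffer rather than the agent buffer. The following sketch is a simplified illustration of that mixing, with uniform sampling standing in for the prioritized replay the paper uses and placeholder tuples standing in for real transitions; it is not the authors' implementation.

```python
import random

def sample_batch(demo_buffer, agent_buffer, batch_size, demo_ratio=1 / 256):
    """Demo-ratio mixing: each element of a training batch comes from the
    (separately maintained) demo replay with probability demo_ratio,
    otherwise from the agent replay. Uniform random.choice stands in for
    prioritized sampling, which this sketch omits."""
    batch = []
    for _ in range(batch_size):
        source = demo_buffer if random.random() < demo_ratio else agent_buffer
        batch.append(random.choice(source))
    return batch

# Illustrative buffers of (observation, action, reward) tuples -- placeholders,
# not the paper's data format.
demos = [("demo_obs", 0, 1.0)] * 100
agent = [("agent_obs", 1, 0.0)] * 10000
print(len(sample_batch(demos, agent, batch_size=256)))
```

The thread reports that surprisingly small values (e.g., 1/256) work best, with the ratio held fixed throughout training.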
{ "note_id": [ "lXj48wajb", "rkeHw5y9or", "SJeB6YkqjB", "Skgbqt19ir", "Bkx4SKkqsH", "rJl2wiVt9S", "HkgqZVuaYS", "rkgS-t2htH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739709, 1573677660601, 1573677500634, 1573677448994, 1573677371630, 1572584291989, 1571812354092, 1571764477434 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2069/Authors" ], [ "ICLR.cc/2020/Conference/Paper2069/Authors" ], [ "ICLR.cc/2020/Conference/Paper2069/Authors" ], [ "ICLR.cc/2020/Conference/Paper2069/Authors" ], [ "ICLR.cc/2020/Conference/Paper2069/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2069/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2069/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper tackles hard-exploration RL problems using learning from demonstrations. The idea is to combine the existing R2D2 algorithms with imitation learning from human demonstrations. Experiments are conducted on a new set of challenging tasks, highlighting limitations of strong current baseline while highlighting the strength of the proposed approach.\", \"the_contribution_is_two_folds\": \"the proposed algorithm which clear outperforms previous SOTA agents and the set of benchmarks. All reviewers being positive about this paper, I therefore recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"To All Reviewers\", \"comment\": \"We would like to thank all the reviewers for their thoughtful comments and feedback. There were multiple questions about releasing the environments. We are planning to release the Hard-Eight tasks before the ICLR conference. We also plan to open source R2D3 in time for the ICLR conference.\"}", "{\"title\": \"Reply to Official Review #2\", \"comment\": \"> On the other hand, the hard-eight task suite is interesting and, if released, could be used as a benchmark by the whole community.\\n\\nThank you. We are working to make these environments available to the community in time for ICLR.\\n\\n> Section 2 presents the algorithm from a very high-level perspective. If space in the final version allows it, I would also suggest adding a more detailed pseudo-code to the main text, so that even a reader who is not completely familiar with the works this method builds upon could better understand and possibly implement the method.\\n\\nThat is a great suggestion. We will add the pseudocode to the final version of our paper.\\n\\n> Since the authors compare to behavioral cloning to prove the benefits over simple imitation methods, why not comparing to stronger baselines such as [1] or [2]?\\n\\n[1] and [2] are both examples of Imitation Learning by Inverse Reinforcement Learning. These methods are very powerful, but we did not try them because there is a strong evidence in the literature that standard versions of these algorithms do not work in the following settings: 1) POMDPs [a], 2) from pixels [b, c], 3) off policy [d] and 4) with variable initial conditions [e]. Our setting combines all of these. 
Each IL by IRL extension cited above is nontrivial and combining them may present challenges which are beyond the scope of this work.\n\nWe are hoping that our release of the Hard-Eight tasks will enable other researchers to try IL by IRL on more complicated tasks.\n\n[a] Learning Belief Representations for Imitation Learning in POMDPs. 2019.\n[b] InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations. 2017.\n[c] Visual Imitation with a Minimal Adversary. 2018.\n[d] Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning. 2019.\n[e] Task-Relevant Adversarial Imitation Learning. 2019.\n\n> The demo-ratio seems to be the key parameter to make this approach work (and the performance is proven very sensitive to its value). Instead of keeping it fixed across the entire learning process, have you tried to start with a high value and then decay according to a proper schedule? Intuitively, I would expect the benefits of expert demonstrations to be more valuable during the first learning episodes (where they make the agent explore better) and less during the successive phases (where the policy gets closer and closer to optimal).\n\nWe agree with this proposal; annealing the demo ratio could be an interesting experiment to try. Our experiments are already compute intensive, and doing hyperparameter search for the annealing hyperparameters would require even more compute. As a result, we decided to just fix the demo-ratio throughout the training. As future work, it would be interesting to evaluate different annealing methods with R2D3.\n\n> The way recurrent states are handled with zero-initialization is probably one of the limitations and seems to play an important role in some experiments. Have you tried, at least in simpler domains, to replay whole episodes and see whether that helps?\n\nGood point. We have tried two variations: 1) replaying the whole episode as you described, and 2) using stale LSTM states as described in R2D2. Both of these variations seem to help to some extent on the hardest memory task, remember sensor [a], but they do not help on the other tasks. We didn't focus on these variations because they introduce additional complexity [b] and do not change performance on most tasks.\n\n[a] the agent sees more reward, but still fails to solve the task 100% of the time\n[b] 1) multi-GPU training to allow for full unrolls over the whole episode while still maintaining large batch size, and 2) additional \\\"demo\\\" actors that pulled the current policy parameters and used them to calculate relatively fresh LSTM states on the demonstrations.\"}", "{\"title\": \"Reply to Official Review #3\", \"comment\": \"> What would be the present-day approximate retail cost for reproducing the experiments in this paper?\n\nWe used 256 actors and a single GPU learner. We trained R2D3 for approximately a week on each Hard-Eight task. Based on the numbers provided in Figure 8 of [a] ($0.0475 per CPU per hour, $1.46 per GPU per hour), training R2D3 on a single Hard-Eight task would cost 2288.16 USD. \n\n[a] Seed RL: Scalable and Efficient Deep-RL with Accelerated Central Inference\n\n> At the action-rate experienced by the human demonstrators (30fps?), how much wall-clock time represented by 40B actor steps? (40 years?) Is this \\\"making efficient use\\\"?\n\nYes, it would take approximately 64 years. 
In our problem setting, one can consider efficiency with respect to demonstrations and/or environment interactions. We claim that our method can make efficient use of demonstrations but at the cost of a large number of interactions with the environment. However, we still need significantly fewer interactions with the environment than pure RL approaches which have seen no reward in the same 64 year period.\\n\\nWe are hopeful advances in off-policy RL and model based RL will improve the interaction efficiency of RL from demonstrations in the future.\\n\\n> Does having highly variable initial conditions really force generalization over environmental configurations or is this wishful thinking / mysticism? To make a direct claim about this, the authors should consider an experimental design where certain classes of initial conditions (e.g. starting on the left side of the map) are withheld during training and evaluated only during testing.\\n\\nSorry for any confusion. There are two types of generalization we can discuss in this setting: Type 1) generalizing from a small number of demonstrations to all initial conditions in the training task and Type 2) generalizing from initial conditions in the training tasks to the initial conditions in a hold out tasks. Type 1, which is not commonly considered, is the type of generalization we are focused on in this work. We agree that Type 2 is quite interesting but we haven't tested it thoroughly in this work.\\n\\n> The finding of small demo ratios as being stronger is exciting, but this result seems to be tied to the specific quantity and quality of demonstrations gathered. Could a more general picture of the role of demonstrations be had by ablating the diversity of representations? The 100 demos in the full case might be degraded to 50, 25, 10, etc while holding the demo ratio fixed. This might effectively vary the weight that demonstrations take in the optimization independently of how often distinct demos are actually seen.\\n\\nThis is a good suggestion. We considered this experiment but it is fairly compute intensive to run this. We ran some preliminary experiments on one of the easier tasks (drawbridge), where we varied the number of demonstrations and R2D3 managed to solve drawbridge even with 20 demos. We did not vary the demo ratio (fixed to 1/256), or try the other tasks.\\n\\nIf you think these experiments would be valuable to include, we can include them in the final version of the paper.\\n\\n> Can these hard-eight scenarios be parametrically scaled up and down in terms of their exploration effort (possibly by just changing the action granularity / movement speed)? With performance on the new benchmark almost saturated in the first paper based on it, there isn't much room to grow here. In the same way that Montezuma's Revenge was found by scanning the culturally-impactful library of Atari games, perhaps more appropriate and lasting challenges can be found by looking one or more generations forward in the history of commercial console games. Can we play Star Fox? What about SimCity?\", \"regarding_scaling_exploration_difficulty\": \"Yes the levels could be modified in simple ways to make them more difficult. The action repeats or speed as you suggested, or by modifying the levels for example by making the rooms larger. Thanks for the suggestion, we may do this when we release the environments.\", \"regarding_saturating_performance\": \"R2D3 achieves the max possible reward for 5 tasks out of 8. 
However, there are still three tasks that are not completely solved yet. In addition, as was previously noted, these tasks are quite compute / exploration intensive to solve. We believe that the Hard-Eight tasks can be an interesting domain to improve the sample efficiency of RL algorithms. It is also an interesting domain for improved exploration methods.\", \"regarding_lasting_challenges\": \"It is quite interesting to use commercial games as a benchmark to test agents. We are aware of the efforts on StarCraft 2, DOTA, and others. However, those games can be even more compute intensive to run. The Hard-Eight tasks present a middle ground between the current RL benchmark environments and commercial games in terms of compute required to solve with the existing algorithms and difficulty.\"}", "{\"title\": \"Reply to Official Review #4\", \"comment\": \"> (Novelty related concerns) \u2026 I like the fact that the authors of this work have chosen quite challenging scenarios, but I think the novelty of this submission is a bit weak to be accepted to the conference.\n\nWe understand that this work can be seen as incremental from the algorithmic point of view. In that sense, we showed that it is possible to achieve significant improvements on hard tasks with a novel combination of well-known techniques. We believe that this fits well with the acceptance criteria for ICLR. The reviewer guidelines suggest that papers that present SOTA results on well-studied problems should be given consideration if they address problems that are of interest to the community.\n\nWe believe that our work is interesting to the community, because it shows that these challenging tasks can be solved with only a small number of demonstrations.\n\n> (Concern on the imperfect demos) ... For example, POfD [2] assumes sparse-reward tasks with *imperfect* demonstrations, which is difficult to achieve good performance by using RL or IL. \n\nAgreed, RL with demos is very interesting in the imperfect demo setting. Our work also falls into this setting (see the average reward of the demonstrations in Table 1). We clearly demonstrate that RL from demonstrations has beaten both RL and IL in this setting. And thank you for pointing out POfD. We will add it to our related work.\n\n> (GAIL baseline) \u2026 In the submission, it was mentioned that \u201cGAIL has never been successfully applied to complex partially observable environments that require memory\u201d, but there\u2019s [3] that successfully uses GAIL in such a setting. \n\nWe will fix that statement. We would like to point out that standard GAIL does not work in the following settings: 1) POMDPs [3], 2) from pixels [a, b], 3) off policy [4] and 4) with variable initial conditions [c]. Let us note that [3] only addresses partially observable environments for GAIL. Our setting combines all of these. Each GAIL extension cited above is nontrivial and combining them may present challenges which are beyond the scope of this work.\n\n[a] InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations. 2017.\n[b] Visual Imitation with a Minimal Adversary. 2018.\n[c] Task-Relevant Adversarial Imitation Learning. 2019.\n\n> (Batch-RL or BC initialized baseline) ... For a fair comparison, however, I believe R2D2 with BC (or Batch RL) initialization should be considered.\n\nThanks for pointing this out; we considered initialization with a BC baseline. However, BC was performing very poorly on the Hard-Eight tasks, due to the small number of demos. 
As a result, we believed the representations learned by BC may not be useful for R2D2. We will add a batch-RL initialized R2D2 baseline to the camera-ready version of the paper.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"In this work, R2D3 (Recurrent Replay Distributed DQN from Demonstration), which combines R2D2 [1] with imitation learning (IL), is proposed. Similar to the existing works on \u201creinforcement learning (RL) with demonstration\u201d such as DQfD, DDPGfD, and policy optimization with demonstration (POfD) [2], hard exploration conditions (sparse reward, partial observability, high variance in initial states) are assumed, in which it is generally difficult to achieve good performance with RL without demonstrations. Eight tasks in such conditions were devised and used to test the performance of R2D3.\n\nI like the fact that the authors of this work have chosen quite challenging scenarios, but I think the novelty of this submission is a bit weak to be accepted to the conference. I believe \u201cRL with demonstration\u201d becomes meaningful when it beats both RL and IL in some reasonable setting. For example, POfD [2] assumes sparse-reward tasks with *imperfect* demonstrations, on which it is difficult to achieve good performance using RL or IL. From such a perspective, I have the following concerns:\n\n- Imitation learning baselines: There have been recent advances in imitation learning. In the submission, it was mentioned that \u201cGAIL has never been successfully applied to complex partially observable environments that require memory\u201d, but [3] successfully uses GAIL in such a setting. Also, off-policy imitation learning such as DAC [4] is shown to be highly sample-efficient compared to GAIL in the MuJoCo domain. However, the submission only considers behavioral cloning (BC) (which shows poor performance at unseen states due to the covariate shift problem) as a baseline among imitation learning methods.\n\n- Reinforcement learning baselines: The submission adopted R2D2 as an RL baseline, and it seems to me that the R2D2 agent starts from random initialization. For a fair comparison, however, I believe R2D2 with BC (or Batch RL) initialization should be considered.\n\nIn addition to the above concerns, it seems to me that most of the features in R2D3 simply combine those of either DQfD or R2D2, and I could not identify any algorithmic novelty of its own except the \u201cdemo ratio\u201d parameter. \n\nI\u2019ll increase my score if I made wrong comments or misunderstood the contribution.\n\nReferences\n[1] Kapturowski, Ostrovski, Quan, Munos, 
and Dabney, \u201cRecurrent experience replay in distributed reinforcement learning,\u201d ICLR 2019.\n[2] Kang, Jie, Feng, \u201cPolicy optimization with demonstrations,\u201d ICML 2018\n[3] Gangwani, Lehman, Liu, Peng, \u201cLearning Belief Representations for Imitation Learning in POMDPs,\u201d UAI 2019\n[4] Kostrikov, Agrawal, Dwibedi, Levine, Tompson, \u201cDiscriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning,\u201d ICLR 2019\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper addresses the problem of exploiting human demonstrations in hard exploration (RL) problems. A new set of challenge tasks is introduced that destroys the performance of very strong baseline systems while highlighting the strength of the new system.\n\nThe approach (rarely but consistently training on separately prioritized human experience replays) is well motivated by the shortcomings of past agents (either in overfitting the demonstrated solution or only working in environments with not-too-hard exploration challenges). Where work by others has overspecialized on specific challenge environments (e.g. Montezuma's Revenge with weak stochasticity and observability challenges), this work intentionally dives into difficult territory.\n\nThis reviewer moves to accept this top-quality RL paper. The new agent, R2D3, is the primary contribution in combining and outperforming previous SOTA agents. These eight new environments are minor contributions with limited potential for impact on the field, but still make an independently positive contribution.\", \"questions_for_authors\": [\"What would be the present-day approximate retail cost for reproducing the experiments in this paper?\", \"At the action-rate experienced by the human demonstrators (30fps?), how much wall-clock time is represented by 40B actor steps? (40 years?) Is this \\\"making efficient use\\\"?\", \"Does having highly variable initial conditions really force generalization over environmental configurations or is this wishful thinking / mysticism? To make a direct claim about this, the authors should consider an experimental design where certain classes of initial conditions (e.g. starting on the left side of the map) are withheld during training and evaluated only during testing.\", \"The finding of small demo ratios as being stronger is exciting, but this result seems to be tied to the specific quantity and quality of demonstrations gathered. Could a more general picture of the role of demonstrations be had by ablating the diversity of representations? The 100 demos in the full case might be degraded to 50, 25, 10, etc while holding the demo ratio fixed. This might effectively vary the weight that demonstrations take in the optimization independently of how often distinct demos are actually seen.\", \"Can these hard-eight scenarios be parametrically scaled up and down in terms of their exploration effort (possibly by just changing the action granularity / movement speed)? With performance on the new benchmark almost saturated in the first paper based on it, there isn't much room to grow here. 
In the same way that Montezuma's Revenge was found by scanning the culturally-impactful library of Atari games, perhaps more appropriate and lasting challenges can be found by looking one or more generations forward in the history of commercial console games. Can we play Star Fox? What about SimCity?\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\n-------------\nThe authors propose R2D3, an algorithm for learning from demonstrations in partially-observable environments with sparse rewards. The algorithm combines DQfD with recurrent networks to both leverage expert demonstrations and handle partial observability. Furthermore, the authors propose a suite of eight challenging tasks on which the proposed method is tested and compared to relevant baselines.\n\nComments\n--------\n\nScaling RL agents to high-dimensional partially-observable domains with sparse rewards is a fundamental open problem and this work provides a nice contribution towards its solution. The paper is well-written and easy to read. The proposed methodology seems to be a simple combination of existing algorithms and (apologies if I am wrong) I did not see any particular challenge in its design. On the other hand, the hard-eight task suite is interesting and, if released, could be used as a benchmark by the whole community. The experiments seem quite convincing in proving the potential of the proposed method. Some comments/questions follow.\n\n1. Section 2 presents the algorithm from a very high-level perspective. If space in the final version allows it, I would also suggest adding a more detailed pseudo-code to the main text, so that even a reader who is not completely familiar with the works this method builds upon could better understand and possibly implement the method.\n\n2. Since the authors compare to behavioral cloning to prove the benefits over simple imitation methods, why not compare to stronger baselines such as [1] or [2]?\n\n3. The demo-ratio seems to be the key parameter to make this approach work (and the performance is shown to be very sensitive to its value). Instead of keeping it fixed across the entire learning process, have you tried starting with a high value and then decaying it according to a proper schedule? Intuitively, I would expect expert demonstrations to be more valuable during the first learning episodes (where they make the agent explore better) and less during the successive phases (where the policy gets closer and closer to optimal).\n\n4. The way recurrent states are handled with zero-initialization is probably one of the limitations and seems to play an important role in some experiments. Have you tried, at least in simpler domains, to replay whole episodes and see whether that helps?\n\n[1] Ho, J., & Ermon, S. (2016). Generative adversarial imitation learning. In Advances in neural information processing systems (pp. 4565-4573).\n[2] Finn, C., Levine, S., & Abbeel, P. (2016, June). Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning (pp. 49-58).\"}" ] }
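A minimal sketch of the demo-ratio replay mixing at the heart of the R2D3 thread above, assuming hypothetical `demo_buffer` and `agent_buffer` objects that expose a `sample()` method; R2D3 additionally prioritizes samples within each buffer, which this sketch omits.

```python
import random

def sample_batch(demo_buffer, agent_buffer, batch_size, demo_ratio=1.0 / 256):
    """Draw a training batch where each element comes from the demonstration
    replay with probability `demo_ratio`, and from the agent's own replay
    otherwise."""
    batch = []
    for _ in range(batch_size):
        buffer = demo_buffer if random.random() < demo_ratio else agent_buffer
        batch.append(buffer.sample())
    return batch
```

With a small ratio such as the 1/256 mentioned above, demonstrations enter training rarely but consistently, matching the finding in the thread that small demo ratios work best.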
B1gdkxHFDH
Training individually fair ML models with sensitive subspace robustness
[ "Mikhail Yurochkin", "Amanda Bower", "Yuekai Sun" ]
We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases.
[ "fairness", "adversarial robustness" ]
Accept (Spotlight)
https://openreview.net/pdf?id=B1gdkxHFDH
https://openreview.net/forum?id=B1gdkxHFDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qWLi0UczvQ", "HyeYKndMoS", "B1xTGh_ziB", "HJx13oOfjS", "ryxUvjOMoH", "Syx0-bXNcB", "r1xmBTKkqH", "SJlzds52FS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739682, 1573190784785, 1573190676691, 1573190567347, 1573190494371, 1572249861786, 1571949882980, 1571756906180 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2068/Authors" ], [ "ICLR.cc/2020/Conference/Paper2068/Authors" ], [ "ICLR.cc/2020/Conference/Paper2068/Authors" ], [ "ICLR.cc/2020/Conference/Paper2068/Authors" ], [ "ICLR.cc/2020/Conference/Paper2068/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2068/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2068/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper addresses individual fairness scenario (treating similar users similarly) and proposes a new definition of algorithmic fairness that is based on the idea of robustness, i.e. by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased.\\nAll reviewers and AC agree that this work is clearly of interest to ICLR, however the reviewers have noted the following potential weaknesses: (1) presentation clarity -- see R3\\u2019s detailed suggestions e.g. comparison to Dwork et al, see R2\\u2019s comments on how to improve, (2) empirical evaluations -- see R1\\u2019s question about using more complex models, see R3\\u2019s question on the usefulness of the word embeddings. \\nPleased to report that based on the author respond with extra experiments and explanations, R3 has raised the score to weak accept. All reviewers and AC agree that the most crucial concerns have been addressed in the rebuttal, and the paper could be accepted - congratulations to the authors! The authors are strongly urged to improve presentation clarity and to include the supporting empirical evidence when preparing the final revision.\", \"title\": \"Paper Decision\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for the thoughtful comments. We answer each reviewer\\u2019s questions individually and we have updated the draft according to the feedback.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for the feedback. We address the key issues you mentioned below and we have updated the draft accordingly.\\n\\nYou are correct that the fairness constraint is not exactly Dwork et al's notion of individual fairness, but it is very similar. We added an explicit statement of our definition in section 2 (see (2.2)). We also added a passage to section 2 comparing the two notions. In summary, we modify Dwork et al's definition in two ways: (i) instead of requiring the output of the ML model to be similar on all inputs comparable to a training example, we require the output to be similar to the training label; (ii) we use the increase in loss value to measure the difference between the outputs of a predictor on the different training sets instead of a metric on the output space of the predictor. 
The main benefits of these modifications are (i) this modified notion of individual fairness encodes not only (individual) fairness but also accuracy (as you noted in your comments), (ii) it is possible to optimize the fairness constraint efficiently, and (iii) we can show this modified notion of individual fairness generalizes (see section 3 for formal statements). The unfortunate side effect is the additional mathematical detail.\\n\\nThe detailed description of the metric is in Appendix B. To help readers find the description, we added references to it where necessary. We also added a summary of how we learn the metric near the beginning of section 2. \\n\\nThe resume screening example at the beginning of section 2 is our motivation for the subsequent derivations; we added a bit to the first paragraph of section 2 to make the connection between the example and the derivations clear. \\n\\nIn the word embedding experiment, the application we have in mind is evaluating the sentiment of sentences that can contain negative/positive sentiment words and names at the same time. The sentiment of a sentence can be evaluated by averaging the sentiments of the corresponding words. This application is motivated by the paper \\\"Mining and summarizing customer reviews\\\" by Hu, M. and Liu, B. (2004). The training and testing datasets of positive and negative words also originate in their paper. From the perspective of individual fairness, when summarizing customer reviews, our sentiment prediction for two hypothetical restaurant reviews \\\"My friend Adam liked their pizza\\\" and \\\"My friend Tashika liked their pizza\\\" should be the same. As our experiment shows, this is achieved with SenSR. The resulting classifier is good at identifying the sentiment of words and does not discriminate against names at the same time. It also reduces discrimination beyond names, e.g. \\\"Let\\u2019s go get Italian food\\\" and \\\"Let\\u2019s go get Mexican food\\\" have almost identical sentiment predictions with SenSR but are severely biased in favor of Italian food when using the baseline classifier.\\n\\nWe borrowed the term balanced TPR from Romanov et al (2019), but we are not particularly tied to the term. We changed all instances of balanced TPR to balanced accuracy.\\n\\nWe corrected the minor mistakes you mentioned.\", \"refs\": \"Romanov et al, What's in a Name? Reducing Bias in Bios without Access to Protected Attributes, NAACL 2019.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the feedback. We address your concerns below.\\n\\n1. The objective that we minimize is the worst-case performance of a predictor on hypothetical training sets that are similar (only differ in irrelevant features) to the observed training set. This leads to fairness because it penalizes predictors that perform well on the observed training set but poorly on similar hypothetical training sets. For example, an unfair resume screening model may perform very well on a set of training resumes from mostly white men, but poorly on resumes from women or minorities. By considering hypothetical sets of resumes from women or minorities during training, the objective we minimize penalizes models that only perform well on white men.\\n\\n2. You can certainly encode group fairness by picking a metric that declares a pair of inputs similar whenever they are from the same group, but this is tangential to our goal of operationalizing individual fairness. 
We have baselines and metrics for group fairness because group fairness is the prevalent notion in the literature.\\n\\n3. Each of the experiments has a dedicated \\\"Comparison metrics\\\" paragraph. We clarified the definitions of race and gender gaps in the corresponding paragraph. They are the differences between the average logits output by the classifier evaluated at Caucasian vs African-American names for the Race gap and Male vs Female names for the Gender gap. The Cuisine gap is the difference between logits of the embedded sentences: \\\"Let\\u2019s go get Italian food\\\" and \\\"Let\\u2019s go get Mexican food\\\". Spouse Consistency (S-Con.) and Gender and Race Consistency (GR-Con.) quantify the individual fairness intuition, i.e. how often the classifier prediction remains unchanged when we evaluate it on a hypothetical \\\"counterfactual\\\" example created by changing features such as gender and keeping all other features unchanged. For these individual fairness metrics we did not write a mathematical definition, but we are happy to add one if the reviewer believes it would improve clarity.\\n\\n4. In our experiments we discuss all baselines in the corresponding \\\"Results\\\" paragraphs. Project is the pre-processing baseline where we project data onto the orthogonal complement of the sensitive subspace and then train a regular classifier with the projected data. SenSR outperforms this baseline, suggesting that simply projecting out the sensitive subspace is not sufficient and that robustness to unfair perturbations through SenSR gives better results in terms of fairness. This is analogous to the observation made in the group fairness literature that simply excluding the protected attribute is not sufficient to achieve fairness.\\n\\n5. The main point of section 3 is to show that the fairness constraint generalizes; i.e. if you train a model with SenSR, and it performs well on all hypothetical training sets that are similar to the observed training set (i.e. it seems fair on the training data), then it also performs well with high probability (WHP) on all hypothetical test sets that are similar to a test set (i.e. it is fair WHP at test time). \\n\\nWe added the missing reference and clarified what the TV distance is in the introduction.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for the feedback. We address the Cons & Questions in what follows.\\n\\n1. On a laptop without a GPU, training SenSR on the sentiment data (experiment in Section 4.1) takes about 6 minutes. \\n\\n2. You are correct that the proposed algorithm is similar to adversarial training. We consider this a benefit of our approach because it allows practitioners to borrow algorithms for adversarial training to train fair ML models. Theoretically speaking, the main distinction of our approach is a generalization error bound for data-driven Wasserstein distributionally robust optimization (DRO). In most prior work on Wasserstein DRO, the metric is known, so there is no need to study the effect of error in the metric on generalization. In our application, the metric is learned from data, and we show that generalization degrades gracefully with error in the metric (see the third term on the right side of (3.2)).\\n\\n3. We use d_z^2 instead of d_z because it is a common choice in Wasserstein DRO. For example, Sinha et al. also use the squared Euclidean distance.\\n\\n4. 
To answer the question about more complex models, we trained a deep neural network with 10 hidden layers (100 neurons each) on the sentiment prediction task (using exactly the same hyperparameters as in the paper). SenSR continues to be effective: test accuracy is 94.3% and the race gap is 0.2.\", \"refs\": \"Sinha et al, Certifying Some Distributional Robustness with Principled Adversarial Training, ICLR 2018.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"General:\\nThe authors propose a method to train individually fair ML models by pursuing robustness of the similarity loss function among comparable data points. The main algorithmic tool of training is borrowed from recent adversarial training work, and the paper also gives theoretical analyses of the convergence properties of the method.\\n\\nPros:\\n1. They make the point that individual fairness is important. \\n2. The paper proposes a practical algorithm for achieving robustness and individual fairness. Formulating the main criterion for checking fairness as Eq. (2.1), the paper takes a sensible route of using duality and the minimax optimization problem (2.4).\\n3. The experimental results are compelling \\u2013 while the proposed method loses a bit of accuracy, it shows very good individual fairness under the metric used. \\n\\nCons & Questions:\\n1. What is the empirical convergence property of the algorithm? How long does it take to train for the experiments given?\\n2. It seems like the main tools for the algorithm and theory are borrowed from other papers in adversarial training, e.g., (Madry 2017). Are there any algorithmic alternatives for solving (2.4)?\\n3. Why do you use d_z^2 instead of d_z for defining c(z_1,z_2)?\\n4. What happens when you use more complex models than a 1-layer neural net?\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\nThe authors propose training to optimize individual fairness using the sensitive subspace robustness (SenSR) algorithm.\\n\\nDecision\\nOverall, I recommend borderline: the paper seems sound in formulating the individual fairness problem as a minimax robust optimization problem. The authors show improvements on gender and racial bias compared to non-individually-fair approaches. However, I think some sections are hard to follow for people not in the field.\", \"supporting_argument\": \"1. End of P3, it is not clear to me why solving the worst case is better.\\n2. Though this paper studied individual fairness, can it also work for group fairness? I am not sure whether this is the only work in this direction (baselines are not for individual fairness).\\n3. Some of the metrics in the experiments are not precisely defined, such as Race gap, Cuis. gap, S-Con, GR-Con. It is hard to follow from the text description. \\n4. Some baseline models are not clearly defined, such as \\u201cProject\\u201d in Table 1.\\n5. Not sure how Section 3 connects with the rest of the paper.\", \"additional_feedback\": \"1. Missing reference: https://arxiv.org/abs/1907.12059\\n2. 
What\\u2019s TV distance in introduction?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new definition of algorithmic fairness that is based on the idea of individual fairness. They then present an algorithm that will provably find an ML model that satisfies the fairness constraint (if such a model exists in the search space). One needed ingredient for the fairness constraint is a distance function (or \\\"metric\\\") in the input space that captures the fact that some features should be irrelevant to the classification task. That is, under this distance function, input that differ only in sensitive attributes like race or gender should be close-by. The idea of the fairness constraint is that by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased. Thus, this fairness constraint is very much related to robustness.\\n\\n---\\n\\nOverall, I like the basic idea of the paper but I found the presentation lacking.\\n\\nI do think their idea for a fairness constraint is very interesting, but it gets too bogged down in the details of the mathematical theory. They mention Dwork et al. at the beginning but don't really compare it to their idea in detail, even though I think there would be a lot of interesting things to say about this. For example, the definition by Dwork et al. seems to imply that some labels in the training set might be incorrect, whereas the definition in this paper does not seem to imply that (which I think is a good thing).\\n\\nThe main problem in section 2 is that the choice of distance function is barely discussed although that's what's most important to make the result fair. For all the mathematical rigor in section 2, the paragraph that is arguing that the defined constraint encourages fairness is somewhat weak. Here a comparison to other fairness definitions and an in-depth discussion of the distance function would help.\\n\\n(In general I felt that this part was more trying to impress the reader than trying to explain, but I will try to not hold it against this paper.)\\n\\nAs it is, I feel the paper cannot be completely understood without reading the appendix.\", \"there_is_also_this_sentence_at_the_bottom_of_page_5\": \"\\\"A small gap implies the investigator cannot significantly increase the loss by moving samples from $P_*$ to comparable samples.\\\" This should have been at the beginning of section 2 in order to motivate the derivation.\\n\\nIn the experiments, I'm not sure how useful the result of the word embedding experiment really is. Either someone is interested in the sentiment associated with names, in which case your method renders the predicted sentiments useless or someone is not interested in the sentiment associated with names and your method doesn't even have any effect.\", \"final_point\": [\"while I like the idea of the balanced TPR, I think the name is a bit misleading because, for example, in the binary case it is the average of the TPR and the TNR. Did you invent this terminology? 
If so, might I suggest another name like balanced accuracy?\", \"I would change the score (upwards) if the following things are addressed:\", \"make it easier to understand the main point of the paper\", \"make more of a comparison to Dwork et al. or other fairness definitions\", \"fix the following minor mistakes\"], \"minor_comments\": [\"page 2, beginning of section 2: you use the word \\\"regulator\\\" here once but everywhere else you use \\\"investigator\\\"\", \"equation 2.1: as far as I can tell $M$ is not defined anywhere; you might mean $\\\\Delta (\\\\mathcal{Z})$\", \"page 3, sentence before Eq 2.3: what does the $\\\\#$ symbol mean?\", \"page 3, sentence before Eq 2.3: what is $T$? is it $T_\\\\lambda$?\", \"Algorithm 2: what is the difference between $\\\\lambda^*_t$ and $\\\\hat{\\\\lambda}_t$?\", \"page 7: you used a backslash between \\\"90%\\\" and \\\"10%\\\" and \\\"train\\\" and \\\"test\\\". That would traditionally be a normal slash.\", \"in appendix B: the explanation for what $P_{ran(A)}$ means should be closer to the first usage\", \"in the references, you list one paper twice (the one by Zhang et al.)\"], \"edit\": \"changed the score after looking at the revised version\"}" ] }
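Two small utilities sketching metrics discussed in this thread, assuming numpy: a name-swap consistency gap in the spirit of the authors' word-embedding experiment (with a hypothetical `word_logit` function returning a per-word classifier logit), and the balanced accuracy that the review above suggests in place of "balanced TPR".

```python
import numpy as np

def sentence_logit(sentence, word_logit):
    # Score a sentence by averaging the logits of its words, following the
    # Hu & Liu (2004)-style setup described in the authors' response.
    return float(np.mean([word_logit(w) for w in sentence.lower().split()]))

def name_swap_gap(template, name_a, name_b, word_logit):
    # Individual fairness check: the score should be (nearly) unchanged when
    # only a name is swapped in an otherwise identical review, e.g.
    # name_swap_gap("my friend {name} liked their pizza", "adam", "tashika", f).
    return abs(sentence_logit(template.format(name=name_a), word_logit)
               - sentence_logit(template.format(name=name_b), word_logit))

def balanced_accuracy(y_true, y_pred):
    # Macro-average of per-class recalls; for binary labels this reduces to
    # (TPR + TNR) / 2, i.e. the paper's "balanced TPR".
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))
```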
BygdyxHFDS
Meta-learning curiosity algorithms
[ "Ferran Alet*", "Martin F. Schneider*", "Tomas Lozano-Perez", "Leslie Pack Kaelbling" ]
We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime. We formulate the problem of generating curious behavior as one of meta-learning: an outer loop will search over a space of curiosity mechanisms that dynamically adapt the agent's reward signal, and an inner loop will perform standard reinforcement learning using the adapted reward signal. However, current meta-RL methods based on transferring neural network weights have only generalized between very similar tasks. To broaden the generalization, we instead propose to meta-learn algorithms: pieces of code similar to those designed by humans in ML papers. Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions. We demonstrate the effectiveness of the approach empirically, finding two novel curiosity algorithms that perform on par or better than human-designed published curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant and hopper.
[ "meta-learning", "exploration", "curiosity" ]
Accept (Poster)
https://openreview.net/pdf?id=BygdyxHFDS
https://openreview.net/forum?id=BygdyxHFDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "LKKU3_PdW", "BkgcsU4hiS", "SJlc0SVnsS", "SylcTN43sS", "Syehlm42sB", "H1gaST1RYB", "B1lw3b8TFr", "H1gsicx6FH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739653, 1573828257615, 1573828049662, 1573827777663, 1573827316364, 1571843397079, 1571803566632, 1571781282830 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2067/Authors" ], [ "ICLR.cc/2020/Conference/Paper2067/Authors" ], [ "ICLR.cc/2020/Conference/Paper2067/Authors" ], [ "ICLR.cc/2020/Conference/Paper2067/Authors" ], [ "ICLR.cc/2020/Conference/Paper2067/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2067/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2067/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes meta-learning auxiliary rewards as specified by a DSL. The approach was considered innovative and the results interesting by all reviewers. The paper is clearly of an acceptable standard, with the main concerns raised by reviewers having been addressed (admittedly at the 11th hour) by the authors during the discussion period. Accept.\", \"title\": \"Paper Decision\"}", "{\"title\": \"General comment to all reviewers\", \"comment\": \"We thank the reviewers for their useful comments.\\n\\n*** Overview of changes in our submission ***\\n- You can find our cleaned code here: http://bit.ly/meta-learning-curiosity-algs\\n- We added the performance of baselines for GridWorld, Acrobot and LunarLander in figure 4; showing that top programs found on GridWorld have equivalent performance on LunarLander and significantly better performance on Acrobot. We also would like to note that we chose to report average rewards instead of final performance. Had we reported the latter, results for both the baselines and our algorithms would be much higher, but also a lot more noisy, which would have required a lot more episodes (and thus compute) to get statistical significance on thousands of programs.\\n- We added more details on how we predict program performance, including a plot in appendix C comparing predicted versus actual performance.\\n\\n*** Discussion of our benchmarks and competing meta learning approaches ***\", \"we_would_like_to_point_out_that_we_missed_a_relevant_paper_in_our_related_work\": \"Evolved policy gradients (EPG) by Houthooft et al. (https://arxiv.org/pdf/1802.04821.pdf). EPG meta-learns a loss function (similar to our reward function) that helps agents train from scratch to achieve a certain goal. While they meta-learn the parameters of a neural network that computes the loss function, we instead meta-learn interpretable learning algorithms. This allows us to generalize in much broader ways.\\n\\nEPG shares our goal of increasing meta-learning generalization. However, their generalization is limited to different targets within a single environment. They meta-train their algorithm to move to eastward target locations, and then meta-test on westward target locations; showing that MAML and RL^2 fail to adapt to the new task but EPG succeeds in doing so. We believe this type of generalization stands in stark contrast to our programs that generalize to completely new environments, with different action dimensions, continuous action spaces and even from image inputs to vector inputs. 
\\n\\nTransferring parameters (such as in MAML, RL^2 or EPG) between environments with different observation and action spaces is challenging. Dimensionality transfer can be achieved through summary statistics, random embeddings, etc., but these hacks heavily constrain the type of functions that can be learned. Transferring between vector-based observation spaces and image-based observation spaces is even more problematic. In contrast, our algorithms instantiate networks of the proper type and dimensionality automatically depending on the environment.\\n\\nRather than comparing against approaches not designed to transfer parameters across environments, we decided to compare against human-designed algorithms. These algorithms were designed for general environments and we believe this makes them significantly stronger competitors. Since our initial submission, we have verified this belief by designing and training a baseline variant of EPG to meta-learn across environments. Due to varying environment I/O specs, we encoded states into fixed-length vectors through randomly initialized neural networks. This variant of EPG performed significantly worse than the human-designed curiosity algorithms that we used as challenging baselines.\"}", "{\"title\": \"Thank you for your detailed review!\", \"comment\": \"We compared against human algorithms instead of previous meta-learning algorithms because human algorithms are a much stronger baseline. We go over this decision in more detail in a separate answer for all reviewers.\\n\\nThe reviewer is right that we could have used many techniques from the existing literature to find good algorithms, such as RL as in [R4] or the bandit techniques mentioned, to decide which program to evaluate next. We chose to keep the algorithm search as simple as possible and limit the number of operations for two reasons. \\n1. Since (to the best of our knowledge) we are the first to meta-learn learning algorithms (instead of network weights or network architectures) we wanted to make sure we gained an understanding of the problem setting. For instance, by evaluating a big part of the space of programs up to a certain size, we realized that program performance does not form a continuum; instead, about 1% of programs perform statistically much better than the statistically indistinguishable set of all other programs. \\n2. Limiting the size of the programs in our search space allowed us to better interpret the best programs found by our search, which is useful to add confidence to our experimental results and gain algorithmic insight. Improving the efficiency of our search by including insights from other fields is an interesting avenue for future work.\\n \\nIt is worth noting, however, that many techniques in the NAS community do not apply to search in algorithm space. There are three main challenges: first, curiosity algorithms form a dynamical system that intertwines with the RL agent and the RL environment. Therefore, NAS algorithms that reuse learned weights from previously tried architectures, such as [R5], are not immediately applicable because we want an algorithm that helps the RL agent learn from scratch. Second, many NAS algorithms (such as [R5]) assume each individual architecture is end-to-end differentiable, which is not the case for our algorithms. Finally, in NAS, the goal and loss function are the same for all architectures, which means that all substructures are likely to have similar representations and weights. 
In contrast, our algorithms also define their own optimization function, which strongly affects the composability of the substructures.\\n\\nWe now describe how we chose the environments we ran on, for which we never considered the performance of our own algorithms. Gridworld was first selected because it measured exploration and was very simple and very cheap to run compared to other movement-based environments like DeepMind Lab or VizDoom. Then, we selected two standard OpenAI Gym environments by focusing on environments that were cheap to run. If one explores https://gym.openai.com/envs, they will observe that most other environments either take an order of magnitude more compute, are very similar to our chosen environments, or are about keeping something stable (CartPole), for which curiosity is clearly detrimental. For all 3 environments, we chose the number of time-steps to train each RL agent by maximizing the difference between the performance of published works and our \\u201cdumb\\u201d baselines of fixed or pure-noise rewards. \\n\\nWe then moved to MuJoCo tasks because they were an order of magnitude more expensive (but not two orders, like Atari games) and tested 3 environments: Ant, Hopper and Walker2d; the latter was discarded because published methods didn\\u2019t statistically outperform the dumb curiosity algorithm baselines. For Ant and Hopper we again selected the number of training time-steps to run by maximizing the distance between published results and the weak baselines. Therefore, we believe environment selection helped the strong baselines more than our algorithms. We were, unfortunately, unable to test on more environments because running thousands of agents on a new environment takes a significant portion of our compute budget.\\n\\nFinally, we note that LunarLander and Acrobot are also held-out test environments. Our initial intention was indeed to use LunarLander and/or Acrobot for meta-training; however, when we saw that the top programs in Gridworld did great in the other environments, we decided it was a much stronger message to show that we only needed a single very simple task as meta-training to find good algorithms. Therefore, what we refer to as top-16 is always the same set of programs found in Gridworld. We then show they also perform well in Lunar Lander, Acrobot and MuJoCo, even though the environments are very different.\"}", "{\"title\": \"Thanks for your helpful review!\", \"comment\": \"With respect to evidence for predicting performance directly from program structure, we had a plot in appendix C showing we could find 88% of top programs after searching through half the program space, but we did not point to it within section 3.3. We have now extended the details of section 3.3, added a proper pointer to appendix C, and added a second plot to the appendix showing the correlation between predicted performance and actual performance.\\n\\nFinally, we have added a general answer with comments we find relevant to all three reviewers.\"}", "{\"title\": \"Thanks for your helpful review!\", \"comment\": \"With respect to evidence for predicting performance directly from program structure, we had a plot in appendix C, but forgot to point to it within section 3.3. 
We have now extended the details of that section, added a proper pointer to appendix C, and added a second plot to the appendix with further details.\\n\\nAs the reviewer mentions, our framework describes searching for curiosity algorithms that do well on multiple tasks, but in the experiment section we only search on one task. We note, however, that this analysis could easily be performed by a simple extension; for example, we could combine the data from the \\u201cGridworld vs. Lunar Lander\\u201d plot in Figure 4, normalize the performance in each environment, and return algorithms sorted by their mean standardized performance. We chose to keep one meta-training task because we think the fact that one can transfer from only a single, simple, unrelated task is a stronger message (and a result we did not initially expect!). We have added a paragraph at the end of section 4 to clarify this point.\\n\\nAlthough it is true that our baselines can be expressed in our language, we note that they are widely regarded as very strong algorithms within the curiosity literature, which is what led us to design the language around them and use them as the strongest benchmark we could think of. Another alternative, which we deemed to be much weaker, would have been comparing to previous meta-learning algorithms (none of which have been shown to be capable of transferring to radically different environments). We add more details about this choice in a separate answer for all three reviewers.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to meta-learn a curiosity module via neural architecture search. The curiosity module, which outputs a meta-reward derived from the agent\\u2019s history of transitions, is optimized via black-box search in order to optimize the agent\\u2019s lifetime reward over a (very) long horizon. The agent in contrast is trained to maximize the episodic meta-reward and acts greedily wrt. this intrinsic reward function. Optimization of the curiosity module takes the form of an epsilon-greedy search, guided by a nearest-neighbor regressor which learns to predict the performance of a given curiosity program based on hand-crafted program features. The program space itself composes standard building blocks such as neural networks, non-differentiable memory modules, nearest neighbor regressors, losses, etc. The method is evaluated by learning a curiosity module on the MiniGrid environment (with the true reward being linked to discovering new states in the environment) and evaluating it on Lunar Lander and Acrobot. A reward combination module (which combines intrinsic and extrinsic rewards) is further evaluated on continuous-control tasks (Ant, Hopper) after having been meta-trained on Lunar Lander. The resulting agents are shown to match the performance of some recent published work based on curiosity and to outperform simple baselines.\\n\\nThis is an interesting, clear and well written paper which covers an important area of research, namely how to find tractable solutions to the exploration-exploitation trade-off. 
In particular, I appreciated that the method was clearly positioned with respect to recent work on neural architecture search and meta-learning approaches to curiosity, and that the authors were forthcoming about the method\\u2019s limitations (outlining many hand-designed curiosity objectives which fall outside of their search space). There are also some interesting results in the appendix which show the efficacy of their predictive approach to program performance.\\n\\nMy main reservation is with respect to the empirical validation. Very few existing approaches to meta-learning curiosity scale to long temporal horizons and \\u201cextreme\\u201d transfer (where meta-training and validation environments are completely different). As such, there is very little in the way of baselines. The paper would greatly benefit from scaled down experiments, which would allow the authors to compare their architecture search approach to recent approaches [R1, R2], black-box optimization methods in the family of evolution strategies (ES, NES, CMA-ES), Thompson Sampling [R3] or even bandit tasks for which Bayes-optimal policies are tractable (Gittins indices). These may very well represent optimistic baselines but would help better interpret the pros and cons of using neural architecture search for meta-learning reward functions versus other existing methods. Conversely, the paper claims to \\u201csearch over algorithms which [...] generalize more broadly and to consider the effect of exploration on up to 10^5, 10^6 timesteps\\u201d but at the same time does not attempt to show this was required in achieving the reported results. Pushing e.g. RL2 or Learning to RL baselines to their limits would help support this claim.\\n\\nAlong the same line, it is regrettable that the authors chose not to employ or adapt an off-the-shelf architecture search algorithm such as NAS [R4] or DARTS [R5]. I believe the main point of the paper is to validate the use of program search for meta-learning curiosity, and not the details of the proposed search procedure (which shares many components with recent architecture search / black-box optimization algorithms). Using a state-of-the-art architecture search algorithm would have made this point more directly.\\n\\nAnother important point I would like to see discussed in the rebuttal is the potential for cherry-picked results. How were the \\u201clunar lander\\u201d and \\u201cacrobot\\u201d environments (same question for \\u201cant\\u201d and \\u201chopper\\u201d) selected? From my understanding, it is cheap to evaluate learnt curiosity programs on downstream / validation tasks. A more comprehensive evaluation across environments from the OpenAI gym would help dispel this doubt. Another important note: top-16 results reported in Figure 4 and Table 1 are biased estimates of generalization performance (as they serve to pick the optimal pre-trained curiosity program). Could the authors provide some estimate of test performance, by e.g. evaluating the performance of the top-1 program (on say lunar lander) on a held-out test environment? Alternatively, could you comment on the degree of overlap between the top 16 programs for acrobot vs lunar lander? Thanks in advance.\\n\\n[R1] Learning to reinforcement learn. JX Wang et al.\\n[R2] RL2: Fast Reinforcement Learning via Slow Reinforcement Learning. Yan Duan et al.\\n[R3] Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search. Guez et al.\\n[R4] Neural Architecture Search with Reinforcement Learning. 
Zoph and Le.\\n[R5] DARTS: Differentiable Architecture Search. Liu et al.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an algorithm to generate curiosity modules for reinforcement learning. The authors define a programming language which can represent many possible curiosity modules that include trainable neural networks, replay buffers, etc. It also presents an approach to searching for the best curiosity module in this set, which consists of various ways to prune the space and to determine which programs to try.\\n\\nThe paper is very novel - the idea of developing a domain-specific language full of building blocks to represent various curiosity modules is unique and interesting.\\n\\nThe search over curiosity modules is a bit misrepresented, I think. In the introduction, it gives the impression that part of the algorithm is to search over these curiosity modules, and also that it's to find the best one that works across a wide set of tasks. Instead the search method is a separate procedure outside of the algorithm and most of the search steps are performed on individual tasks instead of over a set of tasks.\\n\\nIn Sec 3.3, you say that \\\"perhaps surprisingly, we find that we can predict performance directly from program structure,\\\" but you never provide any evidence of doing so. \\n\\nThe simple environment that you used is a bit contrived: rather than taking a normal task, the goal itself is complete exploration (maximizing the total number of pixels visited). It seems like the intended division of labor between the intrinsic curiosity program and the reward combiner is that the intrinsic curiosity program should be only about complete exploration, while the combiner is responsible for balancing that with task rewards. You should be more explicit that in this first part of the search you're only looking at the intrinsic curiosity program, without the combiner, and therefore do not want a task with extrinsic rewards. This breakdown of searching for the intrinsic curiosity program first and the combiner later seems like another important aspect of making your search efficient. \\n\\nThe main drawback of this paper is that there are few comparisons to related work. The only methods compared to are ones where the curiosity method is expressible in the language of the method.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Instead of hand-designing exploration bonuses and intrinsic rewards, the paper proposes to view curiosity algorithms as programs described with a domain-specific language (DSL), then search for programs which allow RL agents to optimize the environment reward combined with the curiosity reward generated by the program. The search produces programs similar to algorithms proposed in the literature and also some strategies which generalize well. 
In order to make the search tractable, the authors develop a few criteria: 1) evaluate programs on relatively simple and short-horizon domains; 2) predict the performance of programs and rank them; 3) stop agents if the learning curves do not look good after enough training steps.\\n\\nThis is a very interesting idea, and it's partially inspired by the architecture search line of research. It would be great if the authors could provide more information about the \\\"predicting algorithm performance\\\" section. \\n\\nI find it very interesting and exciting to see programs like those in Figure 3. They are quite interpretable. The results in Table 1 are really exciting; they show that the searched programs generalize to other, unseen tasks.\"}" ] }
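A minimal sketch of the multi-environment ranking extension described in the authors' response above (standardize scores within each environment, then sort programs by mean standardized performance), assuming a hypothetical `perf` array of shape (num_programs, num_envs) holding average rewards with nonzero variance per environment.

```python
import numpy as np

def rank_programs(perf):
    """Rank curiosity programs by mean standardized performance across
    environments, so that environments with different reward scales
    contribute equally to the ranking."""
    perf = np.asarray(perf, dtype=float)
    z = (perf - perf.mean(axis=0)) / perf.std(axis=0)  # z-score per environment
    return np.argsort(-z.mean(axis=1))  # program indices, best first
```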
rylwJxrYDS
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
[ "Alexei Baevski", "Steffen Schneider", "Michael Auli" ]
We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.
[ "speech recognition", "speech representation learning" ]
Accept (Poster)
https://openreview.net/pdf?id=rylwJxrYDS
https://openreview.net/forum?id=rylwJxrYDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "giKeDQy9-n", "H1gRCgthsH", "ByeQLeYhsH", "BJlsGxtnsS", "BkxWkgY2sB", "ryllakFnsS", "rJxLRp2s9H", "B1xXOITI9H", "HyxsH-ORFH", "r1e9Ipmo_r" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739623, 1573847253950, 1573847114674, 1573847058579, 1573847001360, 1573846967613, 1572748750220, 1572423275481, 1571877186754, 1570614609537 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2066/Authors" ], [ "ICLR.cc/2020/Conference/Paper2066/Authors" ], [ "ICLR.cc/2020/Conference/Paper2066/Authors" ], [ "ICLR.cc/2020/Conference/Paper2066/Authors" ], [ "ICLR.cc/2020/Conference/Paper2066/Authors" ], [ "ICLR.cc/2020/Conference/Paper2066/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2066/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2066/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2066/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a new self-supervised pre-trained speech model that improves speech recognition performance.\\n The idea combines an earlier pre-training approach (wav2vec) with discretization followed by BERT-style masked reconstruction. The result is a fairly complex approach, with not too much novelty but with a good amount of engineering and analysis, and ultimately very good performance. The reviewers agree that the work deserves publication at ICLR, and the authors have addressed some of the reviewer concerns in their revision. The complexity of the approach may mean that it is not immediately widely adopted by others, but it is a good proof of concept and may well inspire other related work. I believe the ICLR community will find this work interesting.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Paper updates\", \"comment\": [\"We just updated the paper to incorporate the reviewer comments. Specifically, the updated version includes:\", \"Improved discussion of related work and better situation of our contribution in the existing literature\", \"Extended conclusion & future work\", \"Improved results for sequence to sequence learning (Table 4) + more results from the literature, e.g., Irie et al. \\u201819\", \"For the vq-wav2vec ASR experiments in Section 6.1, we clarified that we input the dense representations associated with the discrete units.\", \"Big thank you to the reviewers for their comments!\"]}", "{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your fruitful comments.\\n\\n>> 1. [...] One observation from the submission is that the token set may need to be very large (from tens of thousands to millions) for the system to work well, making the BERT training computationally expensive [...] I think some more motivation or exploration (what kind of information did BERT learn) is needed to understand why that is the case.\\n\\nOur BERT vocabulary sizes (13.5k for the gumbel version and 23k for the k-means version) compare favorably to the setups commonly used in NLP where vocabularies are double or triple of our sizes. \\n\\nWe agree that it would be interesting to perform an in-depth analysis on the embeddings learned by BERT and we will investigate this in future work. 
Here we focus on a new quantization method evaluated via downstream performance in phone and speech recognition settings by employing models that worked well (and were extensively tuned) in NLP contexts.\\n\\n\\n>> 2. A more economical approach is to use the BERT-trained model as initialization for acoustic model training, which is the classical way RBM pre-training was used in ASR.\\n\\nYes, this is an interesting avenue for future work! We did not follow this direction for two reasons: first, our aim is to contribute a new quantization scheme for audio data that is trained to predict the context in a self-supervised way. Second, we wanted to show that good performance can be achieved with discretized audio on actual speech tasks.\\n\\n\\n>> 3. One concern I have with discrete representations is how robust they are wrt different datasets.\\nWe agree that an ablation study on the robustness of the embeddings across different datasets would be very interesting. \\n\\nHere we are mostly focusing on relatively clean data (WSJ, TIMIT, Librispeech) following the original wav2vec paper, but we would be interested in exploring robustness in the future. However, we note that representations transfer well at least across datasets within the \\u201cclean speech\\u201d domain: vq-wav2vec and BERT are only trained on Librispeech and never tuned on TIMIT/WSJ.\\n\\n>> 4. Another curious question is whether the features would still provide as much improvement when a stronger ASR system than AutoSeg (e.g., Lattice-free MMI) is used.\\n\\nThe original wav2vec paper (Schneider et al., 2019) reports better results than LF-MMI on the WSJ benchmark; however, the two setups are not strictly comparable. In some sense, the LF-MMI result has an edge because it is based on a phoneme-based ASR system, which is typically stronger than the character-based ASR system used with wav2vec. We agree that evaluation on stronger baselines is an important future direction though.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your fruitful comments.\\n\\n>> What would make it stronger imo is to address the issue of how much is gained from a discrete vs. continuous representation. \\nDiscrete representations by themselves are not better than continuous ones (cf. Table 1, wav2vec vs. vq-wav2vec). However, discretization enables the application of existing algorithms from the NLP literature which were designed for discrete inputs. We show that the BERT model can be directly applied to discretized speech. BERT can better model context than (vq-)wav2vec.\\n\\n>> The authors take it as a given that discrete is good because it allows us to leverage work in NLP. That makes sense -- but at what cost?\\nChaining vq-wav2vec and BERT requires more computational effort than just wav2vec; however, it does improve accuracy, as our results show (cf. Table 1). Running BERT requires roughly as much computational overhead as just vq-wav2vec.\\n\\n>> The state of the art on LibriSpeech is not Mohamed et al. 2019. See e.g. Irie et al. 
Interspeech 2019 for a better result.\nThanks for pointing this out; we fixed this in the updated version of the paper we just posted.\n\n>> The Conclusion is very sparse.\nWe broadened the conclusion and delineated additional future work.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your comments!\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for the fruitful comments!\n\nWe addressed your main concern and updated Section 1 of the paper to better situate it in the existing literature.\n\n>> Would it be possible to train the vq-wav2vec model jointly with BERT, i.e. as one model? [...] Similarly to the above question, would there be a way to incorporate the BERT principles directly into an end-to-end model, e.g. by randomly masking some of the continuous input speech?\n\nThe focus of this paper is a quantization approach for audio. Replacing the two-step training process by an adaptation of BERT to continuous data (using a wav2vec/CPC-like objective function instead of the cross entropy) is an interesting direction for future work (and we amended the future work section accordingly). However, our current paper is a proof of concept that a pre-training scheme based on masked inputs (BERT) can improve over previous methods in the speech domain.\n\n\n>> What exactly does \"mode collapse\" refer to in this context?\n\nIn several configurations (especially for one and two groups), considerably fewer codewords than theoretically possible are used. We loosely refer to mode collapse as the phenomenon in which very few codewords per group are used (cf. Appendix A).\n\nWe updated the paper to also refer to the appendix where we outline the number of codewords that the model uses. We observed that in the \u201cfew group regime\u201d (G=1...4), only a few of the available centroids per group are used, and we refer to this phenomenon as mode collapse \u2014 for BERT training, this is actually favorable e.g. in the G=2, V=320 setting as it yields a codebook of acceptable size for NLP model training (13.5k/23k).\nMode collapse could potentially be circumvented by strategies like the embedding re-initialization used in classical k-means, and this is an interesting avenue for future work.\n\n\n>> [...] BERT is required on top of the vq-wav2vec discrete symbols. Is it possible that the output acoustic model is simply better-matched to continuous rather than discrete input (direct vq-wav2vec gives discrete while BERT gives continuous)? Would it make sense to train the wav2vec acoustic model on top of the vqvae codebook entries (e) instead of directly on the symbols?\nWe actually did what you suggest: when we train acoustic models on top of vq-wav2vec, we input the dense embedding vectors corresponding to the discrete codewords. On the other hand, we also trained an NLP sequence-to-sequence model (Section 6.3) which takes the quantized audio codes as input and then generates the transcriptions. This gives reasonable accuracy and suggests that the discrete codes by themselves, and without the learned continuous representations, are useful. We clarified this in the updated version of the paper.\n\nWe believe the reason the dense embeddings for the discrete codewords work less well is that they do not encode as much detailed context information as a representation built by wav2vec or BERT. 
The information in the codebook is ultimately less detailed than a context vector specific to the current input sequence.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents a method for unsupervised representation learning of speech. The idea is to first learn a discrete representation (vector quantization is done by Gumbel softmax or k-means) from audio samples with a contrastive predictive coding-type objective, and then perform BERT-style pre-training (borrowed from NLP). The BERT features are used as inputs to ASR systems, rather than the usual log-mel features. The idea, which combines those of previous work (wav2vec and BERT) synergetically, is intuitive and clearly presented; significant improvements over log-mel and wav2vec were achieved on the ASR benchmarks WSJ and TIMIT. Based on these merits, I suggest this paper be accepted.\n\nOn the other hand, I would suggest directions for investigation and improvement as follows.\n\n1. While I understand that vector quantization makes the use of NLP-style BERT-training possible (as the inputs to NLP models are discrete tokens), there are potential disadvantages as well. One observation from the submission is that the token set may need to be very large (from tens of thousands to millions) for the system to work well, making the BERT training computationally expensive (I noticed that the BERT model is trained on 128 GPUs). Also, without BERT pre-training, directly using the discrete tokens seems to consistently give worse performance for ASR. I think some more motivation or exploration (what kind of information did BERT learn) is needed to understand why that is the case.\n\n2. Besides the computational expense of the three-step approach (vector quantization, BERT, acoustic model training), the combined model complexity is large because these steps do not share a neural network architecture. A more economical approach is to use the BERT-trained model as initialization for acoustic model training, which is the classical way RBM pre-training was used in ASR.\n\n3. One concern I have with discrete representations is how robust they are wrt different datasets. The ASR datasets used in this work are relatively clean (but there do exist domain differences between them). It remains to be seen how the method performs with more acoustically-challenging speech data, and how universally useful the learned features are (as is the case for BERT in NLP).\n\n4. 
Another curious question is whether the features would still provide as much improvement when a stronger ASR system than AutoSeg (e.g., Lattice-free MMI) is used.\n\nOverall, while I think the computational cost of the proposed method is high, rendering it less practical at this point, I believe the approach has potential and the results obtained so far are already significant.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Though rather dense in its exposition, this paper is an interesting contribution to the area of self-supervised learning based on discrete representations. What would make it stronger imo is to address the issue of how much is gained from a discrete vs. continuous representation. The authors take it as a given that discrete is good because it allows us to leverage work in NLP. That makes sense -- but at what cost?\n\n\"Table 4 shows that our first results are promising, even though they are not as good as the state of the art.\" The state of the art on LibriSpeech is not Mohamed et al. 2019. See e.g. Irie et al. Interspeech 2019 for a better result.\n\nThe Conclusion is very sparse. \"In future work, we are planning to apply other algorithms requiring discrete inputs to audio data\": can you elaborate?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a way to pre-train quantized representations for speech. The approach proposed is a two-stage process: 1. Train a quantized version of wav2vec [my understanding is that wav2vec is the same thing as CPC for audio except for using a binary cross-entropy loss instead of the InfoNCE softmax cross-entropy loss]. The authors propose to use Gumbel softmax / a VQ codebook for the vector quantization.\n2. Once you have a discrete representation, you could train BERT (as if it were a seq of language tokens). This makes a lot of sense especially given that CPC / wav2vec recovers phonemes and quantizing the phonemes will recover a language-like version of the raw audio. And running BERT across those tokens will allow you to capture the dependencies at the phoneme level. \n\nAfter pre-training, the authors use the learned representations for speech recognition. They compare this to using log-mel filterbanks. \n\nThe results (WER / LER) are lower for the proposed pipeline compared to using the dense wav2vec representation for n-gram and character LM. It also makes sense that BERT helps for the k-means (vq) setting since the number of codes is large. \n\nThe authors also cleverly adopt/adapt span-BERT which is more suited to this setting.\n\nI think this paper presents a useful contribution as far as improving speech / phoneme recognition using self-supervised learning goes, and also has useful engineering aspects in terms of combining CPC and BERT. 
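To make the two-stage pipeline concrete for other readers, here is a minimal sketch of the quantization step as I understand it (illustrative only: the class name and wiring are my own guesses, not the authors' code; torch.nn.functional.gumbel_softmax is the one real library call I rely on):

    import torch
    import torch.nn.functional as F

    class GumbelQuantizer(torch.nn.Module):
        # Maps a dense frame representation z (batch x time x dim) to
        # (i) a discrete token id per frame, usable as BERT input, and
        # (ii) the corresponding codeword vector for downstream models.
        def __init__(self, dim, num_codewords):
            super().__init__()
            self.to_logits = torch.nn.Linear(dim, num_codewords)
            self.codebook = torch.nn.Embedding(num_codewords, dim)

        def forward(self, z, tau=1.0):
            # hard=True yields a one-hot forward pass with
            # straight-through gradients in the backward pass
            onehot = F.gumbel_softmax(self.to_logits(z), tau=tau, hard=True, dim=-1)
            token_ids = onehot.argmax(dim=-1)           # discrete 'pseudo-text'
            quantized = onehot @ self.codebook.weight   # dense codeword vectors
            return token_ids, quantized

The BERT stage then treats token_ids as an ordinary token sequence and masks spans of it, exactly as in NLP. 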
I would like to see this paper accepted.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"\", \"overview\": \"This paper considers unsupervised (or self-supervised) discrete representation learning of speech using a combination of a recent vector quantized neural network discritization method and future time step prediction. Discrete representations are fine-tuned by using these as input to a BERT model; the resulting representations are then used instead of conventional speech features as the input to speech recognition models. New state-of-the-art results are achieved on two datasets.\", \"strengths\": \"The core strength of this paper is in the results that are achieved on standard speech recognition benchmarks. The results indicate that, while discritization in itself does not give improvements, coupling this with the BERT-objective results in speech features which are better in downstream speech recognition than standard features. I think the main technical novelty is in combining discritization with future time step prediction (but see the weakness below).\", \"weaknesses\": \"The main weakness of the paper is that it does not situate itself within existing literature in this area. Over the last few years, researchers in the speech community have invested significant effort in learning better speech representations, and this is not discussed. See e.g. [1]. Even more importantly, very recently there has been a number of papers investigating discrete representations of speech; see the review [2]. Some of these papers specifically use VQ-VAEs [3]. [4] actually compares VQ-VAE and the Gumbel-Softmax approach. These studies should be mentioned. This paper is different in that it incorporates future time step prediction. But context prediction has also been considered before, also for speech [5, 6, 7]. This paper can be situated as a new contribution combining these two strands of research. In the longer run it would be extremely beneficial to the community if this approach is applied to the standard benchmarks as set out in [2].\\n\\nAs a minor weakness, some parts of the paper is not described in enough detail and the motivation is weak or not exactly clear (see detailed comments below).\", \"overall_assessment\": \"I think the results as well as the new combination of existing approaches in the paper warrants publication. But it should be amended significantly to situate itself within the existing literature. I therefore award a \\\"weak accept\\\".\", \"detailed_questions_and_suggestions\": [\"Section 1: As motivation for this work, it is stated that \\\"we aim to make well performing NLP algorithms more widely applicable\\\". As noted above, some NLP-like ideas (such as prediction of future speech segments, stemming from text-based language modelling) have already been considered within the speech community. Rather than motivating the work in this way, it might be helpful to focus the contribution as a combination of future time step prediction and discretization (both of which have been considered in previous work, but not in combination).\", \"Section 4: Would it be possible to train the vq-wav2vec model jointly with BERT, i.e. as one model? 
I suspect it would be difficult since, for the masking objective, the discrete units are already required, but maybe there is a scheme where this could work.\", \"Section 2.2: Similarly to the above question, would there be a way to incorporate the BERT principles directly into an end-to-end model, e.g. by randomly masking some of the continuous input speech?\", \"Section 3.3: What exactly does \\\"mode collapse\\\" refer to in this context? Would this be using only one codebook entry, for instance?\", \"Section 6: It seems that in all cases to obtain improvements from discritization, BERT is required on top of the vq-wav2vec discrete symbols. Is it possible that the output acoustic model is simply better-matched to continuous rather than discrete input (direct vq-wav2vec gives discrete while BERT gives continuous)? Would it make sense to train the wav2vec acoustic model on top of the vqvae codebook entries (e) instead of directly on the symbols?\", \"Typos, grammar and style:\", \"\\\"gumbel\\\" -> \\\"Gumbel\\\" (throughout; or just be consistent in capitalization)\", \"\\\"which can be mitigated my workarounds\\\" -> \\\"which can be mitigated *by* workarounds\\\"\", \"\\\"work around\\\" -> \\\"workaround\\\"\"], \"missing_references\": \"1. Versteegh, M., Anguera, X., Jansen, A. & Dupoux, E. (2016). The Zero Resource Speech Challenge 2015: Proposed Approaches and Results. In SLTU-2016 Procedia Computer Science, 81, (pp 67-72).\\n2. https://arxiv.org/abs/1904.11469\\n3. https://arxiv.org/abs/1905.11449\\n4. https://arxiv.org/abs/1904.07556\\n5. https://arxiv.org/abs/1904.03240\\n6. https://arxiv.org/abs/1807.03748 (this paper is cited)\\n7. https://arxiv.org/abs/1803.08976\", \"edit\": \"Based on the feedback from the authors, I changed my rating from a 'weak accept' to an 'accept'.\"}" ] }
rJx8ylSKvr
Leveraging Entanglement Entropy for Deep Understanding of Attention Matrix in Text Matching
[ "Peng Zhang", "XiaoLiu Mao", "XinDian Ma", "BenYou Wang", "Jing Zhang", "Jun Wang", "DaWei Song" ]
The formal understanding of deep learning has made great progress based on quantum many-body physics. For example, the entanglement entropy in quantum many-body systems can interpret the inductive bias of neural networks and then guide the design of network structure and parameters for certain tasks. However, there are two unsolved problems in the current study of entanglement entropy, which limit its application potential. First, the theoretical benefits of entanglement entropy were only investigated in the representation of a single object (e.g., an image or a sentence), but have not been well studied in the matching of two objects (e.g., question-answering pairs). Second, the entanglement entropy cannot be quantitatively calculated because of the exponentially increasing dimension of the matching matrix. In this paper, we try to address these two problems by investigating the fundamental connections between the entanglement entropy and the attention matrix. We prove that, by a mapping (via the trace operator) on the high-dimensional matching matrix, a low-dimensional attention matrix can be derived. Based on such an attention matrix, we can provide a feasible solution to the entanglement entropy that describes the correlation between the two objects in matching tasks. Inspired by the theoretical properties of the entanglement entropy, we can design the network architecture adaptively in a typical text matching task, i.e., the question-answering task.
[ "Quantum entanglement entropy", "Attention Matrix" ]
Reject
https://openreview.net/pdf?id=rJx8ylSKvr
https://openreview.net/forum?id=rJx8ylSKvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "gRuZ-MtCg8", "SJe-R5ehir", "ByefA3LHsH", "S1glscrHsH", "SJlrd6mUcS", "HJe8f0JRKr", "ryeIgj5pFB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739594, 1573812937485, 1573379274308, 1573374616000, 1572384108562, 1571843598485, 1571822317580 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2064/Authors" ], [ "ICLR.cc/2020/Conference/Paper2064/Authors" ], [ "ICLR.cc/2020/Conference/Paper2064/Authors" ], [ "ICLR.cc/2020/Conference/Paper2064/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2064/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2064/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper advocates for the application of entanglement entropy from quantum physics to understand and improve the inductive bias of neural network architectures for question answering tasks. All reviewers found the current presentation of the method difficult to understand, and as a result it is difficult to determine what exactly the contribution of this work is. One suggestion for improving the manuscript is to minimize the references to quantum entanglement (where currently is it asserted without justification that entanglement entropy is a relevant concept for modeling question-answering tasks). Instead, presenting the method as applications of tensor decompositions for parameterizing neural network architectures would make the work more accessible to a machine learning audience, and help clarify the contribution with respect to related works [1].\\n\\n1. http://papers.nips.cc/paper/8495-a-tensorized-transformer-for-language-modeling.pdf\", \"title\": \"Paper Decision\"}", "{\"title\": \"Authors' response to reviewer 1\", \"comment\": \"We thank the reviewer for the detailed suggestions. We will revise our manuscript according to your suggestions on the paper\\u2019s presentation.\"}", "{\"title\": \"Authors' response to reviewer 2\", \"comment\": \"We thank the reviewer for the time and feedback.\\n1 We would like to in the experiment, we did not compute an average evaluation result on two sub-datasets. Instead, we combine the QA-pair\\u2019s matching score files from the two sub-datasets into one file and calculate the MAP and MRR of the entire dataset based on this file. This is not different from the general practice of evaluating the entire dataset. Detailed experiments are described in Section 4.2.\\n\\n2 About the quantum many-body problem, in Physics, Quantum Many-body Wave Function (QMWF) can model the interaction among many particles and the associated basis vectors. In the language scenario, by considering a word as a particle, different meanings (or latent/embedded concepts) as different basis vectors, the interaction among words (or word meanings) can be modeled by the tensor product of basis vectors, via the many-body wave function.\\n.\"}", "{\"title\": \"Authors' response to reviewer 4\", \"comment\": \"We thank the reviewer for the time and feedback; Our responses are as follows.\\n1 QMWF-LM is peer-reviewed in CIKM (Zhang et al. CIKM 2018), where QMWF-LM is an abbreviation for A Quantum Many-body Wave Function Inspired Language Modeling. \\n2 There is no problem with the quantum terminology in our paper. We have already explained this issue in the first paragraph of Section 2. 
The formulation in this paper makes it easier for the reader to understand.\n3 In the experiments, TREC-QA and YAHOO-QA are two typical datasets for question answering tasks. \n4 Quantum Many-body Wave Function Inspired Language Modeling reveals the inherent necessity of using Convolutional Neural Networks (CNNs) in quantum-inspired language modeling. In our experiments, we compare our methods with two state-of-the-art CNN-based QA methods. \n5 One of the main contributions of this paper is to prove the equivalence between the Attention Matrix and quantum entanglement under certain conditions.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This work extends the Quantum Many-body Wave Function inspired language model (QMWF-LM) of Zhang et al by proposing a quantum entanglement entropy computation to separate the data into long-range and short-range correlations. The authors report improved results on the TREC-QA and YAHOO-QA datasets.\n\nI am not an expert on quantum physics, hence I am unable to judge the merits of the quantum entanglement approach proposed in the paper. However, the work that this paper builds on - the \"QMWF-LM of Zhang et al\" - doesn't seem to have been vetted by proper peer review in either an ML conference or journal. I am also skeptical of the quantum terminology introduced in the paper, and the experiments are reported on only two QA datasets - TREC and YAHOO - which aren't super standard. If quantum inspired language models are the next big advance in language modeling, I would like to see more experiments on some language modeling datasets such as LM1B (Chelba et al), Wikitext-2/103 (Merity et al) etc. Besides, this approach should also work for SQuAD, GLUE, SuperGLUE and all the other established NLP benchmarks that benefit from improved language modeling capabilities.\n\nI am also skeptical of the progress in TREC-QA: the state-of-the-art claimed by the authors is Kamath et al., which uses an RNN + pre-attention. A stronger and more modern baseline would be something like BERT (Devlin et al.) or any of the subsequent improvements to it.\n\nThis paper would benefit from a clearer exposition minus the quantum mechanics jargon and from experiments on the above stated benchmarks to be more convincing.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, a method for leveraging entanglement entropy for understanding attention matrices is proposed. Specifically, the paper aims at solving two problems: (1) to study the theoretical analysis of entanglement entropy for the matching of two objects (question-answering pairs), and (2) to quantitatively calculate the matching matrix. The introduced approach is based on fundamental connections between the entanglement entropy and the attention matrix. The main goal of the paper is to show that a low-dimensional attention matrix can be derived from a high-dimensional matching matrix. 
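(For reference, and to make the critique below concrete: the standard quantum-information construction that this appears to instantiate -- these are textbook definitions, stated here as my own reading rather than taken from the paper -- is that for a normalized joint state $|\\psi\\rangle = \\sum_{ij} M_{ij} |i\\rangle_A \\otimes |j\\rangle_B$ with matching matrix $M$, the reduced density matrix of subsystem $A$ is the partial trace $\\rho_A = \\mathrm{Tr}_B |\\psi\\rangle\\langle\\psi| = MM^{\\dagger}$, and the entanglement entropy is $S(\\rho_A) = -\\mathrm{Tr}\\, \\rho_A \\log \\rho_A = -\\sum_k \\lambda_k \\log \\lambda_k$, with $\\lambda_k$ the squared singular values of $M$. Whether the paper's attention matrix is meant to coincide with $\\rho_A$ is exactly the kind of detail that should be spelled out.) 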
Results are shown for a text matching task on two datasets (TREC-QA, YAHOO-QA).\n\nMy main concerns with this paper are that the approach is not well described and that the proposed contribution appears quite narrow. Overall it is not clear what the actual contribution of this work really is. Several sections of the paper appear convoluted and could be more concise; the text is often written in an unnecessarily complicated way, which makes it hard to follow and to comprehend details. While the experiments seem sound, the overall improvement of 2.9% compared to SOTA work appears rather shallow. Therefore, in its current state I cannot recommend accepting this paper.\", \"detailed_comments\": [\"The abstract of the paper is very cryptic and the motivation for the proposed approach is not sufficiently described.\", \"The paper is not well-written and difficult to follow. Several sentences and sections are left too unclear. E.g. in the introduction sentences like \\\"... but such an indicator only reflects the intricate correlation structures of each single input object (e.g., an image or a text)\\\" or \\\"This is due to the fact that the tensor product occurs in the quantum many-body function for representing the image and text \\\" are confusing as they come without much context.\", \"It is not clear what is meant by \\\"matching problem\\\".\", \"A few sentences in the introduction are exact copies of the abstract, which makes the text appear redundant. It would help to clearly state what the contribution of this work is, instead of repeating sentences.\", \"It is not clear what is meant by \\\"relatively-deeper\\\" and \\\"relatively-shallower\\\" layers.\", \"The text shows several spelling and grammar issues (such as \\\"for the more complex the inputs\\\").\", \"Several sentences don't make sense and are difficult to read (e.g. \\\"Since our work is mainly for the text matching task of a sentence pair, we briefly introduce a recent Quantum Many-body Wave Function inspired Language Modeling\\\").\", \"The notation and equations in Section 2 are mostly common knowledge and could be moved to the appendix.\", \"In Section 3.1, what is meant by \\\"subsystem in deep neural networks\\\"? Later in the text it becomes clearer. So it would be helpful to rearrange the text.\", \"\\\"probability amplitude distributions\\\" is not clear.\", \"Sentences like \\\"which often correspond to the important information hidden in the matrix\\\" should be accompanied by a reference.\", \"Section 3.2 is titled \\\"Network Design Based on Entanglement Entropy\\\", but the section does not actually describe a network architecture, but instead just describes how to obtain the attention matrix and the sample differences.\", \"Section 4.1 (first paragraph): it would help to discuss the use of many-body wave functions to represent questions and answer sentences as two subsystems more clearly and earlier in the text.\", \"Section 4.2 (second paragraph) appears quite repetitive. Many sentences have been used in previous sections of the text.\", \"The results and discussions shown in Sections 4.4 and 4.5 are interesting and seem sound. However, it is not clear what exactly are the \\\"adaptive settings for kernels\\\" (e.g. 
in Figure 2)\", \"Limitations of the approach are not sufficiently discussed.\", \"It is not clear what is meant by \\\"we will investigate the entanglement entropy under high-order conditions\\\" in the conclusion.\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper deals with the understanding of deep learning from a physical point of view, related to entanglement entropy.\nAs far as I understand, the paper explains how the computation of the entanglement entropy may be performed and how this measure may be used to design the neural network architecture. \n\nI was quite interested in the paper, but I was unfortunately not able to follow the \u00ab\u00a0theoretical\u00a0\u00bb part of the paper, which in my opinion is not well introduced. The paper cites papers from physics letters and physics reviews without introducing all the necessary background. From this point of view the paper is not self-contained enough for the audience of the conference. \nI don\u2019t know what the quantum many-body problem is, and had never heard of the start-end separation rank\u2026 mentioned in the first paragraph of the paper.\n\nIn addition to the fundamental contribution, which I could not summarize well, the paper includes an experimental validation of the contribution on the question answering problem on benchmark datasets, showing very relevant results that outperform the baselines, which look like state-of-the-art methods in the field. \n\nI am not sure I fully understand the experimental setting, though. In particular, I understand that the datasets are divided into isolated parts on which different models are learned, and averaged results are reported. But is it a fair comparison with the baselines? \n\nIn the end, the paper looks quite interesting and promising, but in its current shape it doesn\u2019t seem to me fully accessible to the ICLR audience.\"}" ] }
rkgU1gHtvr
Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies
[ "Xinyun Chen", "Lu Wang", "Yizhe Hang", "Heng Ge", "Hongyuan Zha" ]
We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies. Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite horizon context for off-policy policy evaluation. We propose the estimated mixture policy (EMP), a novel class of partially policy-agnostic methods to accurately estimate those quantities. With careful analysis, we show that EMP gives rise to estimates with reduced variance for estimating the state stationary distribution correction, while it also offers a useful inductive bias for estimating the state-action stationary distribution correction. In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to the state-of-the-art methods.
[ "off-policy policy evaluation", "multiple importance sampling", "kernel method", "variance reduction" ]
Accept (Poster)
https://openreview.net/pdf?id=rkgU1gHtvr
https://openreview.net/forum?id=rkgU1gHtvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "O71KGil1YK", "SJe5dmAijS", "H1e6OxCjjH", "Bkl8G1RsjB", "BJgZdaaoiB", "Sygmew3xqH", "SJx9Xl7aYB", "SklbL9CnFB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739565, 1573802866339, 1573802101204, 1573801742392, 1573801321468, 1572026090523, 1571790881648, 1571773001380 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2063/Authors" ], [ "ICLR.cc/2020/Conference/Paper2063/Authors" ], [ "ICLR.cc/2020/Conference/Paper2063/Authors" ], [ "ICLR.cc/2020/Conference/Paper2063/Authors" ], [ "ICLR.cc/2020/Conference/Paper2063/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2063/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2063/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors present a method to address off-policy policy evaluation in the infinite horizon case, when the available data comes from multiple unknown behavior policies. Their solution -- the estimated mixture policy -- combines recent ideas from both infinite horizon OPE and regression importance sampling, a recent importance sampling based method. At first, the reviewers were concerned about writing clarity, feasibility in the continuous case, and comparisons to contemporary methods like DualDICE. After the rebuttal period, the reviewers agreed that all the major issues had been addressed through clarifications, rewriting, code release, and additional empirical comparisons. Thus, I recommend to accept this paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #1\", \"comment\": [\"Response to Major Comments:\", \"1.\", \"We got the released code and carried out a comparison study with DualDICE.\", \"We implemented a variation of BCH method by Liu et al for mutliple-behavior-policy cases, called BCH (pooled), using the exact value of all behavior policies. We now report the comparison results of our method EMP, BCH and DualDICE in both single- and multi-behavior-policy settings.\", \"We use the estimation results by on-policy oracle with large trajectory length and number as the \\u201ctrue-value\\u201d. This is how we compute the MSE for EMP and other benchmark OPPE methods.\", \"2.\", \"We have correct this typo on page 2.\", \"We have unified the notation so that all indicies start from 1.\", \"We have correct these typos. $\\\\mathcal{E}_\\\\theta$ is the correct notation for parameter space.\", \"We also went through the mathematical part of the paper and corrected the typos we have found.\", \"2.5\", \"We now cite the assumptions in the statement of the theorems. The detailed assumptions are stated and explained in Appendix B. We didn\\u2019t directly include the full assumptions in the statement of the theorem mainly because of the space limit. Most of the assumptions actually involve technical details about the kernel method that solves the min-max problems, and they need space of half to one page.\", \"We have removed the SADL algorithm in the revision, as it was used as an substitute of DualDICE in the previous version of the paper. So we also removed (the original) Theorem 2, which follows a very similar proof as the kernel-method derivation in Liu et. al.\", \"Proposition 4 (now 3) has been proved in Veach & Guibas (1995). We include it mainly for self-containedness and refer to the original paper for the proof. 
We now cite Veach & Guibas (1995) more explicitly in the statement of Proposition 3.\"], \"response_to_questions\": \"Q1 What are the properties of SADL? Why was this used instead of DualDICE in the comparison? \\n- We are not aware of the DualDICE code releasing when we first submitted this paper. So we use SADL as a substitute of DualDICE. They are both policy-agnoistic and learn the state-action joint distribution correction.\\n\\nQ2 How would BCH do if we used the mixture policy as the behavior policy in the multiple behavior policy case? How would it compare to your method? This could be an interesting comparison, just to test if the lower MSE argument holds up in the multiple behavior policy case. \\n- We implemented a multiple-behavior-policy version of Liu et al, using the information of all behavior policies. We call it BCH (pooled) and report the comparison results of our method EMP, BCH (pooled) and DualDICE.\\n\\nQ3 What is the meaning of partially-policy-agnostic? It is unclear to me how it is different from the policy-agnostic approaches. If all that is different is you are estimating the behavior to use in the usual infinite-horizon approach, should this be in a separate category from policy-agnostic approaches? (I would say probably not, but I think you could make a case for it). \\n-We call EMP a partially policy-agnostic method in the sense that, although EMP does not require any information on the \\u201cphysical\\u201d behavior policies, it learns a \\u201cvirtual\\u201d policy, which is the mixture policy, defined formally in Section 4.1, and contains aggregated information about the \\u201cphysical\\u201d behavior polices. As a consequence, the accuracy of the \\u201cvirtual\\u201d policy learning (conceptually, whether the algorithm can effectively extract the aggregated information about the behavior policies) will affect the performance of EMP. We now add this explanation to the introduction part.\\n\\nQ4 \\\"Then, a single (s,a,s') tuple simply follows the marginal distribution...\\\". Is this trivially true? \\n- We added a new Subsection 4.1 to more formally state our assumptions on the sample data collected from different behavior policies and the relevant distributions.\", \"response_to_minor_comments\": [\"We have corrected all the typos accordingly. As to the related work part, we added a few sentences of explanation at several places to improve the logic flow.\"]}", "{\"title\": \"Response to Review #2\", \"comment\": \"Response to Technical Concerns:\\nIn our experiment, we use a neural network to approximate the mixture policy in continuous case and estimate the model by MLE. We have added a paragraph explaining this for the general algorithm in Section 4.2 and for the numerical experiment in Appendix D.2. We agree with the reviewer that when model is not precise, the bias will overwhelmed the variance reduction. We explicitly explained that the performance of EMP relies on the accuracy of policy learning in the introduction part. We think the problem of model uncertainty is interesting and more challenging, and probably requires a different theoretic framework, such as robust optimization, so we will leave this for further study. 
Therefore, in most part of the paper, to study the effect of policy learning on OPPE performance, we would like to focus on the cases that the policy can be well approximated by some parametric model, especially for theoretic analysis.\", \"response_to_clarity_concerns\": [\"We have modified Section 4 according to the comments of the reviewers. We added a subsection to more formally define the mixture policy pi_M and the mixture distribution d_M (to better distinguish from the single-behavior policy, we have also changed the notation.) We also provided additional theoretic properties of \\\\pi_M and d_M to provide more intuition behind the algorithm design.\", \"We have removed equation (3) and (6). But we keep equation (7) (now becomes (6)) to make the description of EMP algorithm more self-contained.\", \"We have also modified the experiment part. First, we added new comparison results to the state-of-art policy-agnostic method DualDICE. Second, we reorganized experiment part to make it more consistent with the theoretic analysis, hope this could convey clearer messages of the numerical experiments.\"], \"response_to_questions\": \"1)How many repetitions you apply for each figure? It seems not smooth enough. \\n-we have increased the number of repetitions by 3 times and updated the numerical results.\\n2)Which estimator you use for you regression? Maximum Likelihood Estimation? Which model you are using for each environment (I know for tabular MLE is count based)?\\n- For the three discrete environment, we used count-frequency to estimate the policy. For the continuous environment, we used a neural network to model the policy and MLE to estimate the model parameters. We have added a paragraph explaining our model and estimation methods in Appendix D.2. (E.2 in the previous version).\\n3)What is in section E.3 equation (12)? How do you compute KL divergence in this equation for empirical distribution? \\n- Equation (12) is used to defined the adjusted proportion of data from each policy. It involves the KL-divergence, which is computed by estimating the behavior policy by a parametric model. We have added some explanation in Section D.3 (E.3 in the previous version.)\", \"response_to_minor_issues\": \"1.You should replace 'for all' to when writing equation, like equation before (2), equation (6) and equation in proposition 4. \\n- We have replaced \\u2018for all\\u2019 with \\u2018\\\\forall\\u2019 in the equations.\\n2.You'd better to separate legend with figure in order to make the legend larger and figure clearer. \\n- We have changed the format of figures accordingly.\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"- We have added the missing citation Prenup et al 2000.\\n\\n- To certain extend, our method shows the performance improvement of OPPEval algorithm by using certain representation model (especially in continuous case) to learn the specific information about the underlying behavior policies from data. Besides, there are applications where policy evaluation is the ultimate goal in itself [1]. Policy evaluation algorithms are also important to study because they are often key parts of larger algorithms where the ultimate goal is to find an optimal policy (one such example is the class of actor-critic algorithms, see [2] for a survey). We feel that ICLR seems to be expanding its scope and covering more general topics in ML and the topic of this paper fits this trend. \\n\\n[1] P. Balakrishna, R. Ganesan, and L. 
Sherry, \\u201cAccuracy of reinforcement learning algorithms for predictingaircraft taxi-out times: A case-study of tampa bay departures,\\u201dTransportation Research Part C: EmergingTechnologies, vol. 18, no. 6, pp. 950\\u2013962, 2010.\\n\\n[2]I. Grondman, L. Busoniu, G. A. Lopes, and R. Babuska, \\u201cA survey of actor-critic reinforcement learning:Standard and natural policy gradients,\\u201dIEEE Transactions on Systems, Man, and Cybernetics, Part C(Applications and Reviews), vol. 42, no. 6, pp. 1291\\u20131307, 2012\"}", "{\"title\": \"Revision Summary\", \"comment\": \"1. We carry out a comparison study with DualDice in the revision.\\n2. We re-organize the theoretic derivation to highlight the key ideas of EMP algorithm. In particular, we explain more explicitly the definition of the mixture policy in the multiple-behavior-policy setting, and its connection with the sample data distribution, to better explain our notion of \\\"partially policy-agnostic\\\".\\n3. We re-organize the experiment part to make it more consistent with the theoretic analysis results. We also did more experiment repetitions to get better MSE estimation.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose here a method for off-policy policy evaluation (OPPEval), to use the reinforcement reinforcement learning on infinite horizon problems from previously-collected trajectories.\\n\\nThe authors frame their work that much of the focus in the OPPEval field has been on, as they call, \\\"policy-aware\\\" methods that use information from the policy to improve estimates to cope with the mismatch between then behaviour and the estimated target policy (such as IS, WIS, etc) when computing state stationary distribution. These contrast \\\"policy agnostic\\\" methods (DualDICE, Nachum et al, 2019) that suffer from the curse of dimensionality in estimating the much higher dimensional state-action stationary distributions but do not depend on policy information. \\nThe manuscripts novelty rests in a comparing and relating these agnostic/aware approaches and propose a partially policy-agnostic method (EMP) that strives to combine advantages from both approaches by following a mixture approach (effectively a mixture of weighted policies). The authors provide a derivation of their methods bounds and show that their method outperforms policy-aware methods as well as policy-agnostic methods. In the comprehensive experiments they compare recent methods by Liu et al (BCH) and WIS (I suppose they mean weighted importance sampling following Prenup et al 2000, as no citation given) ), as well a a new policy-agnostic method they propose here (SADL). In all cases the results favour the proposed new method (EMP). The results advance the field by providing a pathway to improved estimation results (lower uncertainty) by using policy mixtures. \\nWhile I have not checked the derivations line-by-line the results are consistent and interesting, although not entirely clear to me why this is an important contribution to a representational learning conference. \\n\\nA key question to this paper (and the OPPEval field) is to evaluate their methods more consistently in closed-loop agent experiments - after training on the historical data. 
Perhaps for a representational learning conference this would be a more appropriate way to convince one of the strength of the results.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"After rebuttal:\nThank the authors for the clarification. The new version looks better and I tend to accept the paper in its current version.\n=========\nThis paper provides an algorithm to solve infinite horizon off-policy evaluation with multiple behavior policies by estimating a mixed policy via regression, and it follows the same method as BCH. The intuition of using an estimated policy comes from Hanna et al. (2019), which shows that an estimated policy ratio can reduce variance even though it introduces additional bias. The authors provide a theoretical proof of this, arguing that their method is not worse than the BCH one. Empirical results show that in general their method performs as well as previous baselines. I believe this method is novel and natural and worth investigating.\n\nTechnical concerns:\nThe major concern I have is that in the continuous case, it is almost impossible to pre-assume a model for learning the mixed policy $\\hat{\\pi_0}$. For example, if the sample policies $\\pi_j$ are all Gaussians, then $\\pi_0$ according to equation (4) would be a complicated mixture distribution (not even a Gaussian mixture, since it involves ratios of $d_{\\pi_j}$ which are hard to compute). If the model is not precise, we cannot achieve the bias/variance tradeoff, as the bias introduced by model mismatch can be arbitrarily large.
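For concreteness (my own notation, inferred from the paper's setup rather than copied from it): if dataset $j$ contributes $n_j$ of the $n$ recorded transitions, the sampled $(s,a)$ pairs follow the mixture $\\sum_j (n_j/n)\\, d_{\\pi_j}(s)\\pi_j(a|s)$, so the implied mixture policy should satisfy $\\pi_0(a|s) = \\sum_j n_j\\, d_{\\pi_j}(s)\\pi_j(a|s) \\,/\\, \\sum_j n_j\\, d_{\\pi_j}(s)$; the $d_{\\pi_j}$ ratios in this expression are exactly why $\\pi_0$ falls outside any simple parametric family even when every $\\pi_j$ is Gaussian. 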
I tend to reject the paper by the current version and encourage the authors to submit to another conference after the revision.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"*Synopsis*:\", \"The main contribution of this paper is the development of estimated mixture policy (EMP), which takes ideas from the new off-policy policy evaluation infinite horizon estimators (i.e. Liu) and from a recent development in more traditional importance weight approaches using regression importance sampling (i.e. Hannah). This new method provides a nice extension of Liu's algorithm to many policies, and to when the policy is unknown. The paper provides some nice analysis inspired by Hannah. Finally, they provide empirical results.\", \"*Review*:\", \"While I think the method has potential interest to the community, I found the empirical results lacking (particularly in missing competitors). I also have some concerns about the theory as there seems to be many typos and consistency issues making much of it hard to follow. Overall, I think this paper is not quite up for publication, but if the authors address my consistency/missing proofs issues and provide a comparison to DualDice I will increase my score.\", \"1. I don't find the reason provided for not including DualDICE compelling and think it is an important competitor here, as there are many similarities between the two methods.\", \"It would also be interesting to reproduce the results provided by Liu et al with the model based approach, and the on-policy oracle. I don't think these are as pressing as DualDice but still interesting.\", \"2. There are many consistency issues with respect to notation, and some odd notation choices as compared to the rest of the literature:\", \"What is script \\\\epsilon in the equation in 2.1? It looks like it should be an expectation over d_\\\\pi, but I've never seen this notation before.\", \"The indicies of sums and sets often change between one based and zero based indexing. This should be unified (preferably to one based). For example, section 2.1 \\\\pi_j(j=0,1,2,...,m) and m=1 for one policy doesn't work. Either \\\\pi_j(j=0,1,2,...m-1) or m=0. This occurs throughout the proofs in the appendix as well.\", \"What is script \\\\Epsilon_\\\\theta? Do you mean script F_\\\\theta? And then what does it mean for \\\\theta_0 \\\\in E_1? There seems to be many definitions missing.\", \"2.5. There are also some issues with the presentation of the theory over the consistency issues already mentioned.\", \"The assumptions and conditions for the theorems presented in the main text should be clearly specified in the theory statement.\", \"The proof to theorem 2 (i.e. in the appendix) should be provided.\", \"The proof of proposition 4 seems to be missing as well.\", \"*Questions*\", \"Q1 What are the properties of SADL? Why was this used instead of DualDICE in the comparison?\", \"Q2 How would BCH do if we used the mixture policy as the behavior policy in the multiple behavior policy case? How would it compare to your method? This could be an interesting comparison, just to test if the lower MSE argument holds up in the multiple behavior policy case.\", \"Q3 What is the meaning of partially-policy-agnostic? It is unclear to me how it is different from the policy-agnostic approaches. 
If all that is different is you are estimating the behavior to use in the usual infinite-horizon approach, should this be in a separate category from policy-agnostic approaches? (I would say probably not, but I think you could make a case for it).\", \"Q4 \\\"Then, a single (s,a,s') tuple simply follows the marginal distribution...\\\". Is this trivially true?\", \"*Minor comments not taken into account in the review*\", \"section 2.1 \\\"target policy \\\\pi via a pre-collected...\\\" -> remove \\\"a\\\"\", \"The layout of the related works section is a bit hard to follow.\", \"\\\"Recently, Nachum et. al. (2019) proposes DualDice\\\": proposes->proposed\", \"\\\"by their estimated values in two folds\\\": do you mean in two ways? This is unclear.\", \"Section 3.1: \\\"notation abusion\\\" -> notation abuse\", \"Equation right after equation 1: \\\"d_\\\\pi_0(s)\\\\pi(a|s)\\\" -> \\\"d_\\\\pi_0(s)\\\\pi_0(a|s)\\\"\", \"\\\"The derivation of kernel method are put in...\\\" -> \\\"The derivation of the kernel method is put in...\\\"\", \"\\\"we need introduce more notation\\\" -> \\\"we need to introduce more notation\\\"\", \"Section 4: \\\"detailed description on the data sample\\\": \\\"on\\\"->\\\"of\\\"\", \"\\\"In this light by pooling the data together....\\\" These two sentences should be put together.\", \"I would like if your theorems were restated in the appendix, for ease of reading.\", \"-----------\", \"Post Rebuttal\", \"I'd like to thank the author for their thorough response! Given the major additions to the paper including empirical comparisons and clarity for the theory I've decided to update my score to reflect my new feelings (i.e. to a 6). I think this paper is well worth accepting in its current form.\"]}" ] }
r1eU1gHFvH
Under what circumstances do local codes emerge in feed-forward neural networks
[ "Ella M. Gale", "Nicolas Martin" ]
Localist coding schemes are more easily interpretable than distributed schemes but are generally believed to be biologically implausible. Recent results have found highly selective units and object detectors in NNs that are indicative of local codes (LCs). Here we undertake a constructionist study on feed-forward NNs and find LCs emerging in response to invariant features, and this finding is robust until the invariant feature is perturbed by 40%. Decreasing the amount of input data, increasing the relative weight of the invariant features and large values of dropout all increase the number of LCs. Longer training times increase the number of LCs, and the turning point of the LC-epoch curve correlates well with the point at which NNs reach 90-100% on both test and training accuracy. Pseudo-deep networks (2 hidden layers), which have many LCs, lose them when common aspects of deep-NN research are applied (large training data, ReLU activations, early stopping on training accuracy and softmax), suggesting that LCs may not be found in deep-NNs. Switching to more biologically feasible constraints (sigmoidal activation functions, longer training times, dropout, activation noise) increases the number of LCs. If LCs are not found in the feed-forward classification layers of modern deep-CNNs, these data suggest this could either be caused by a lack of (moderately) invariant features being passed to the fully connected layers or by the choice of training conditions and architecture. Should the interpretability and resilience to noise of LCs be required, this work suggests how to tune a NN so that they emerge.
[ "localist coding", "emergence", "contructionist science", "neural networks", "feed-forward", "learning representation", "distributed coding", "generalisation", "memorisation", "biological plausibility", "deep-NNs", "training conditions" ]
Reject
https://openreview.net/pdf?id=r1eU1gHFvH
https://openreview.net/forum?id=r1eU1gHFvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qC-gDRaMcI", "rygJbaI6cS", "ByeN3hTwcB", "BJeh3af0FH", "Syet1-9jtH" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739535, 1572855031302, 1572490411848, 1571855796042, 1571688673502 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2062/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2062/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2062/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2062/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper studies when hidden units provide local codes by analyzing the hidden units of trained fully connected classification networks under various architectures and regularizers. The reviewers and the AC believe that the paper in its current form is not ready for acceptance to ICLR-2020. Further work and experiments are needed in order to identify an explanation for the emergence of local codes. This would significantly strengthen the paper.\", \"title\": \"Paper Decision\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the emergence of local codes in neural networks on a synthetic dataset. From my understanding, a neuron is counted as a local code if there is a class A such that its activations of data points from A are linear separable from its activations of data points from all other classes. However, is this definition for data points in the training set, or in the test dataset, or union of them? I did not find the exact definition in the paper.\\n\\nIt designed experiments to study the number of local codes. It made 7 empirical findings by the experiments on a synthetic dataset, listed in Section 1.1. It's findings are purely empirical. The authors may clarify this work's novelty and importance.\\n\\nThis paper seems to be finished in rush, because there is question masks, e.g., \\\"Summary statistics and Kolmogorov-Smirnov hypothesis tests are reported in tables ?? in the appendix\\\" in Page 8, \\\"Results are shown in figure ?? and table 2.\\\" in Page 12, \\\"As can be in seen tables ?? and figure ?? low values of dropout are likely the same distribution\\\" in Page 12. The paper is very difficult to read for me, partly due to its writing in a language (local codes) that I'm not familiar with. I think that its presentation can be greatly improved for general audience. \\n\\nI'm not familiar with the concept of \\\"local codes\\\", and I do not understand part of the paper.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"I particularly liked the simple dataset the authors construct in determining whether local codes emerge in hidden units, especially since deep networks and dataset used in practice are overly complex to gain insight for this behavior. However, I find the overall message to be a bit confusing, especially in regard to using the analysis to construct networks with emergent local codes. 
In particular, I feel that the authors could strengthen this work greatly by using their findings to train a deeper neural network for which local codes do emerge on a more realistic dataset. Furthermore, this work would be significantly more impactful if a network with more local codes does generalize better, but that is unclear as of now (especially since local codes seem to not emerge in practical settings even though these networks are state of the art). \\n\\nCriticisms/Questions:\\n(1) Main: I'm somewhat confused about the main takeaway from this work in terms of understanding when local codes actually emerge in deep neural networks. The authors seem to have a number of very specific conditions that are both architecture and dataset dependent, and overall I feel the message would be much stronger if the authors were able to rigorously study perhaps just a few of these conditions across many more settings. For example, even just studying the impact of activation and providing some conditions/theory or a clearer understanding of which nonlinearities lead to more local codes would be insightful. The current work seems to be more broad instead of tackling one of these properties in depth. \\n\\n(2) I am a bit confused about the thresholds used by the authors in determining whether a hidden unit provides a local code or not. Do you just determine if there is some threshold given by the unit that separates out all points of one class from the rest? \\n\\n(3) After several experiments, there are some heavy conjectures trying to rationalize the result of the experiment. As an example, the authors provide statements like \\\"ReLU is a more powerful activation function than sigmoid.\\\" However, this statement in particular is not exactly correct, since given enough width, networks with either activation function should be able to interpolate the training data. Another example of this is at the bottom of page 7, when the authors provide 5 possible explanations as to why local codes don't emerge in modern training settings. It is unclear which of these explanations are true, but it would be great if the authors could actually provide a cleaner rationalization.\", \"minor_criticisms\": \"(1) I've seen a number of different conventions for how to refer to the depths of networks, and I believe what you refer to as 3 layer networks would conventionally be referred to as 2-layer networks for theory audiences (as there are 2 weight matrices involved) or 1-hidden layer networks for empirical audiences. I think adding a figure in the appendix for your architecture would clear up any confusion immediately. \\n(2) Some of the formatting is a bit awry: there are references to figures that appear as ?? (see page 8 paragraph 3). \\n(3) It would be nice to provide a consistent legend in some of the figures. For example, Figure 4b has no indication for which settings the colors represent. \\n(4) As there seem to be a lot of experiments numbered 1-12, I think it would be much more readable to have different subsections on the different settings and outline the experiments in the subsection more clearly. Referring back to these numbers on page 3 & 4 constantly makes it less readable. \\n(5) I quite liked Figure 8 in the Appendix. 
I feel that this would have been a great figure to put towards the front of the paper to provide an example of local codes emerging.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors studied the local codes in neural networks through a set of controlled experiments (by controlling the invariance in the input data, dimensions of the input and the hidden layers, etc.), and identified some common conditions under which local codes are more likely to emerge.\\n\\nThe fact that local codes tend to emerge as a response to invariance is interesting but not surprising, especially given that convolution operations are designed to capture location invariance. It would be useful if the authors could clarify their contributions and compare against existing works in the literature.\", \"experiments_are_conducted_at_a_relatively_small_scale\": \"On a synthetic dataset with binarized vectors and on MNIST, with a predefined rule for noise injection (Figure 1). The controlled experiments conducted in the paper are still informative, but the overall message would be much stronger if the empirical analysis could be extended to common benchmarks such as CIFAR and/or ImageNet.\\n\\nAll of the experiments are based on very shallow networks (3-4 layers), and as a result, the study ignores batch normalization and skip connections, which are common ingredients in state-of-the-art convolutional networks. It remains unclear whether the presence of those components would change the emergence behavior of local codes, and hence affect some of the conclusions in the paper.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"I have a lot of questions about the data used in the experiments. They are created according to the method explained in \u201cData design\u201d (p.2). It is also summarized in the last paragraph of the first section as follows: \u201cthere are 1/10 input bits that are always 1 for each class and these are the invariant bits, the 0s of each prototype are then filled in with a random mix of 1 and 0 of a known weight\u201d. What is the intention behind this way of creating data? How general are the data created in this way, as well as the analyses based on them? It seems to me that the data, and thus the analyses, lack the generality needed for the purpose of understanding behaviors of neural networks on real tasks/data.\\n\\nThe same is true for \u201cNeural network design\u201d (p.3), in which 13 experiments conducted in this study are explained. I think their explanations are too condensed; each explanation is very short and it is hard to understand the motivation and purpose of each experiment, i.e., what is the hypothesis to be verified and in what way is it verified? \\n\\nIn Experiment-12, MNIST is used as data unlike other experiments, and it is modified \u201cwith added 20 pixel invariants\u201d. What is the purpose of this modification? 
There is a statement in a footnote of p.5 \\u201cNo LCs were seen in the standard MNIST runs\\u201d, which agrees with the above concern about the lack of generality.\\n\\nAdditionally, I do not understand the statement in p.5 \\u201cIncreasing the difficulty of the problem (by increasing n_x, \\u2026\\u201d. Why does the use of more training data make the problem harder? It should usually be the opposite; the smaller, the harder.\"}" ] }
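A note on the selectivity criterion the reviewers debate above: a hidden unit counts as a local code when some threshold on its scalar activation separates every point of one class from all other classes. A minimal sketch of that check in Python follows — the array shapes and the non-overlapping-range test are our reading of the definition, not the authors' exact procedure:

```python
import numpy as np

def local_code_units(activations, labels):
    """Map each hidden unit to the class (if any) whose activations are
    threshold-separable from all other classes' activations at that unit.

    activations: (n_samples, n_units) post-nonlinearity values
    labels:      (n_samples,) integer class labels
    """
    codes = {}
    for unit in range(activations.shape[1]):
        acts = activations[:, unit]
        for cls in np.unique(labels):
            in_cls, out_cls = acts[labels == cls], acts[labels != cls]
            # A 1-D threshold exists iff the two activation ranges don't overlap.
            if in_cls.min() > out_cls.max() or in_cls.max() < out_cls.min():
                codes[unit] = int(cls)
                break
    return codes

# Toy check: unit 0 fires strongly only for class 1; unit 1 is uninformative noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)
a = rng.normal(size=(300, 2))
a[y == 1, 0] += 10.0
print(local_code_units(a, y))  # expect {0: 1}
```

Whether the check is run on training points, test points, or their union is exactly the ambiguity Review #2 raises.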
HkeryxBtPB
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
[ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ]
We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary. Our study shows that maximizing margins can be achieved by minimizing the adversarial loss on the decision boundary at the "shortest successful perturbation", demonstrating a close connection between adversarial losses and the margins. We propose Max-Margin Adversarial (MMA) training to directly maximize the margins to achieve adversarial robustness. Instead of adversarial training with a fixed $\epsilon$, MMA offers an improvement by enabling adaptive selection of the "correct" $\epsilon$ as the margin individually for each datapoint. In addition, we rigorously analyze adversarial training with the perspective of margin maximization, and provide an alternative interpretation for adversarial training, maximizing either a lower or an upper bound of the margins. Our experiments empirically confirm our theory and demonstrate MMA training's efficacy on the MNIST and CIFAR10 datasets w.r.t. $\ell_\infty$ and $\ell_2$ robustness.
[ "adversarial robustness", "perturbation", "margin maximization", "deep learning" ]
Accept (Poster)
https://openreview.net/pdf?id=HkeryxBtPB
https://openreview.net/forum?id=HkeryxBtPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "0pr-ULQpOZ", "H1xiiPCosr", "HklombujiB", "B1lW_kdosr", "SJlA4H8UiB", "r1er1S8UoB", "S1ey_4ULsH", "BJeaEE88sr", "Hyet07UIoS", "ByxNq7ULjH", "H1llwmULiS", "SJxahVPkcS", "SyenNZBnFS", "r1xsfMaxFr", "Skg9K9pV_r", "Hklcw_H1uH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798739506, 1573803939281, 1573777698860, 1573777257336, 1573442870341, 1573442780761, 1573442663314, 1573442612769, 1573442512899, 1573442443946, 1573442392433, 1571939509146, 1571733812070, 1570980371148, 1570196097618, 1569835106132 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2061/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "ICLR.cc/2020/Conference/Paper2061/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2061/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2061/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2061/Authors" ], [ "~Anthony_Wittmer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This work presents a new loss function that combines the usual cross-entropy term with a margin maximization term applied to the correctly classified examples. There have been a lot of recent ideas on how to incorporate margin into the training process for deep learning. The paper differs from those in the way that it computes margin. The paper shows that training with the proposed max margin loss results in robustness against some adversarial attacks.\\nThere were initially some concerns about baseline comparisons; one of the reviewers requesting comparison against TRADES, and the other making comments on CW-L2. In response, authors ran additional experiments and listed those in their rebuttal and in the revised draft. This led some reviewers to raise their initial scores. At the end, majority of reviewers recommended accept. Alongside with them, I find extensions of classic large margin ideas to deep learning settings (when margin is not necessarily defined at the output layer) an important research direction for constructing deep models that are robust and can generalize.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Thanks for your clarification and response. I am happy with your answers, and have upgraded the rating to 6 (Weak Accept).\"}", "{\"title\": \"Response to AnonReviewer3 on CW-L2 results\", \"comment\": \"We run CW-L2 attack on the first 1000 examples (due to constraints on computational resources) for models trained with $\\\\ell_2$ attacks. 
Therefore under each model, we have the minimum distance that CW-L2 needs (to make the prediction wrong) for each example.\\n\\nFor each model, we show the clean accuracy, also mean, median, 25th percentile and 75th percentile of these minimum distances in the table below.\\n\\nLooking at CW-L2 results, we have very similar observations as compared to the observations we made in the paper (Appendix F) based on robust accuracies at different $\\\\epsilon$'s and the AvgRobAcc (average robust accuracy).\\n\\n----------\\n**MNIST models trained with $\\\\ell_2$ attacks**\", \"std______________acc\": \"95.1%, mean: 0.09, 25th: 0.05, median: 0.08, 75th: 0.13\\n\\nPGD-0.5 acc: 89.2%, mean: 0.80, 25th: 0.32, median: 0.76, 75th: 1.21\\nPGD-1.0 acc: 83.8%, mean: 1.01, 25th: 0.30, median: 0.91, 75th: 1.62\\nPGD-1.5 acc: 76.0%, mean: 1.11, 25th: 0.04, median: 0.96, 75th: 1.87\\nPGD-2.0 acc: 71.6%, mean: 1.16, 25th: 0.00, median: 0.96, 75th: 2.03\\nPGD-2.5 acc: 65.2%, mean: 1.19, 25th: 0.00, median: 0.88, 75th: 2.10\\n\\nPGDLS-0.5 acc: 90.5%, mean: 0.79, 25th: 0.34, median: 0.76, 75th: 1.17\\nPGDLS-1.0 acc: 83.8%, mean: 1.00, 25th: 0.30, median: 0.91, 75th: 1.59\\nPGDLS-1.5 acc: 77.6%, mean: 1.10, 25th: 0.11, median: 0.97, 75th: 1.83\\nPGDLS-2.0 acc: 73.1%, mean: 1.16, 25th: 0.00, median: 0.98, 75th: 2.02\\nPGDLS-2.5 acc: 66.0%, mean: 1.19, 25th: 0.00, median: 0.88, 75th: 2.13\\n\\nMMA-1.0 acc: 88.4%, mean: 0.85, 25th: 0.32, median: 0.80, 75th: 1.27\\nMMA-2.0 acc: 84.2%, mean: 1.03, 25th: 0.26, median: 0.92, 75th: 1.62\\nMMA-3.0 acc: 81.2%, mean: 1.09, 25th: 0.21, median: 0.92, 75th: 1.73\\n\\n----------\\n\\nOn CIFAR10, we observed that MMA training is fairly stable to $d_\\\\max$, and achieves good robustness-accuracy trade-offs.\\nWith carefully chosen $\\\\epsilon$ value, PGD-1.0/PGDLS-1.0 models can also achieve similar robustness-accuracy tradeoffs as compared to MMA training. However, when we perform PGD training with a larger $\\\\epsilon$, the clean accuracy drops significantly.\"}", "{\"title\": \"Response to AnonReviewer2 on comparison to other baselines\", \"comment\": \"Regarding [2], there is no publicly available pretrained models and we are not able to finish the training and evaluation before the rebuttal deadline. We will add the comparison to the paper later when ready. Nevertheless, [2]'s reported robustness is lower than those in TRADES [1], therefore we believe missing comparison to [2] will not affect the evaluation of our model significantly.\\n\\n----------\", \"comparing_to_trades\": \"We evaluate the downloaded pretrained TRADES [1] models for both MNIST-$\\\\ell_\\\\infty$ and CIFAR10-$\\\\ell_\\\\infty$ cases. For each model, we show the clean accuracy and robust accuracies at different $\\\\epsilon$'s.\\n\\n**MNIST models trained with $\\\\ell_\\\\infty$ attacks**\\n\\nMMA-0.45 acc: 98.9%, 0.1: 97.9%, 0.2: 96.0%, 0.3: 92.6%, 0.4: 85.2%\", \"trades_____acc\": \"99.5%, 0.1: 98.8%, 0.2: 97.3%, 0.3: 94.1%, 0.4: 0.1%\\n\\nOverall TRADES outperforms MMA except that TRADES fails completely at large perturbation length of 0.4; This may be because TRADES is trained with attacking length $\\\\epsilon = 0.3$. 
We are not able to train and evaluate a TRADES model with $\epsilon=0.4$ before the rebuttal deadline.\n\n**CIFAR10 models trained with $\ell_\infty$ attacks**", "mma_32___acc": "84.4%, 4: 64.8%, 8: 47.2%, 12: 31.5%, 16: 18.9%, 20: 10.2%, 24: 4.8%, 28: 2.0%, 32: 0.8%", "trades____acc": "84.9%, 4: 71.0%, 8: 52.9%, 12: 33.0%, 16: 18.2%, 20: 8.3%, 24: 3.6%, 28: 1.4%, 32: 0.7%\n\nHere we compare MMA-32 to TRADES because their clean accuracies are similar. Similar to the results in MNIST, TRADES outperforms MMA on the attacks of lengths that are less than $\epsilon = 12/255$, but it sacrifices the robustness under larger attacks.\n\nWe would also like to reiterate that the focus of this paper is not to achieve the best robustness toward some fixed attack magnitude. \nOur contribution is a direct margin maximizing perspective for adversarial robustness, and its connection to adversarial training. \nBased on our analysis, we propose MMA, which can achieve robustness across different attacking lengths based on each example's intrinsic robustness.\nOur idea and the idea in TRADES of optimizing a calibration loss represent progress in different directions and could potentially be combined.\"}", "{\"title\": \"Response to AnonReviewer3, part 4\", \"comment\": \">> 2. which norm is $d_\\theta$ defined on? concerns on $\\delta^{*} = \\arg\\min_{\\delta:L_\\theta^{LM}(x+\\delta, y) \\geq 0} \\|\\delta\\|$ in Theorem 2.1\\n\\n$d_\\theta$ is defined on the $\\ell_p$ norm with arbitrary $p>0$, which includes both the $\\ell_\\infty$ and $\\ell_2$ norms. Therefore, we just used the general notation of norm $\\|\\cdot\\|$.\\n\\nIn Theorem 2.1, $\\delta^{*}$ is not a norm, it is the $\\delta$ that minimizes the norm $\\|\\delta\\|$ under the constraint that $L_\\theta^{LM}(x+\\delta, y) \\geq 0$.\\nWe don't think there is a mistake. Please let us know if the confusion remains.\\n\\n\\n>> 3. The minimum margin \\delta^{*} is a bit confusing, is it used in maximization or just in the outer minimization?\\n\\nIn our terminology, $\\delta^{*}$ is not the \\\"minimum margin\\\". Its norm $\\|\\delta^{*}\\|=d_\\theta(x, y)$ is the margin. $\\delta^{*}$ is the shortest successful perturbation, which is the minimizer in Equation (2). $\\delta^{*}$ is then used in the outer optimization in Equation (7).\\n\\n\\n>> 3. inconsistency of $L(\\theta, \\delta)$ and $L(\\delta, \\theta)$\\n\\nThanks for pointing this out. We will unify the notation to be $L(\\delta, \\theta)$.\\n\\n\\n>> 4. Why do we need the \\\"gradients of margins to model parameters\\\" analysis from Proposition 2.1 to remark 2.2? Why don't go directly from Theorem 2.1 to Proposition 2.4?\\n\\nTheorem 2.1 is about margin maximization based on the LM loss, while Equation (7) is about maximizing a lower bound of the margin using the CE loss, so we need some technical developments in between to make this transition. \\n1) Theorem 2.1 is a summarization of Proposition 2.1 and Proposition 2.2. You are right about the rest of section 2.1 after Theorem 2.1. \\nIt can be removed and the rest of the paper would not be affected. However, we believe this theoretical result is interesting and significant, since it rigorously shows how to **directly** maximize margin. This is arguably the most **direct** way of improving adversarial robustness under $\\ell_p$ adversarial perturbation, but it was not discussed in the literature to the best of our knowledge. 
\nAlthough minimizing the loss over $\delta^{*}$ seems an intuitive step from the perspective of adversarial training, we believe it is not obvious that the margin's gradient wrt $\theta$ is a scaled version of the loss' gradient wrt $\theta$ at $\delta^{*}$, as shown by Proposition 2.1.\n2) Section 2.2, including contents before Proposition 2.4, is necessary for explaining the transition from the LM loss, which defines the margin, to the SLM/CE loss, which can be used to maximize the margin's lower bound.\"}", "{\"title\": \"Response to AnonReviewer3, part 3\", \"comment\": \">> 5. Clarity of experimental settings. What CIFAR10-$\\ell_{\\infty}$ means? How the test attacks were generated? How the $m$ models were trained?\\n\\nSorry about the confusion. Due to the space limit, some important descriptions of the experiment settings are pushed to the appendix. We will modify the paper to make things clear in the main body.\\n\\nCIFAR10-$\\ell_{\\infty}$ means that the model is trained on the CIFAR10 dataset with $\\ell_\\infty$ attacks, and also tested with $\\ell_\\infty$ attacks on CIFAR10, as stated in Appendix C: \\\"Here all the models are trained and tested under the same type of norm constraints, namely if trained on $\\ell_\\infty$, then tested on $\\ell_\\infty$; if trained on $\\ell_2$, then tested on $\\ell_2$.\\\"\\n\\nWe take CIFAR10-$\\ell_\\infty$ as an example to explain the test settings. The complete list of models trained can be found in Table 3 (Appendix F), which contains 32 models ($m=32$). These include models trained with PGD/PGDLS with different $\\epsilon$ (and their ensembles), models trained with MMA/OMMA with different $d_\\max$, a standardly trained model, and the downloaded (Madry et al. 2018) PGD trained model.\\nAssume that we want to evaluate the robustness of MMA-12 at perturbation magnitude 8/255. For each test example, we perform 10 PGD attacks with different random initializations (random starts) at this magnitude, and use the strongest attack among them to evaluate the robustness of the model, which is the typical \\\"standard setting\\\". \\nAt the same time, we also did this to all the 32 models that we trained, when we evaluate their robustness at perturbation magnitude 8/255. Therefore, we have $m \\cdot N = 32\\times 10 = 320$ attacks in total under 8/255, for each test example. For MMA-12, 10 of them are whitebox PGD attacks with random starts, and the other 310 attacks are transfer attacks, as they are generated from attacking other models on the same test example under the same perturbation magnitude.\\nTherefore, for each model and each test example, we have 320 attacks, and if any one of them succeeds, we consider the model not robust. \\nBecause of additional transfer attacks, our test setting is stronger than the standard setting. We also applied this testing protocol on the downloaded (Madry et al. 2018) model (bottom row of Table 3). Under CIFAR10-$\\ell_\\infty$ with perturbation magnitude 8/255, our test setting gives 44.68% robust accuracy, which is lower than 45.8% (originally reported in their paper).\\nAll the accuracies are calculated on the entire 10K test images.\", \"reference\": \"Madry et al. \\\"Towards deep learning models resistant to adversarial attacks.\\\" ICLR 2018\\n\\n\\n>> 5. How are the $d_\\max$ determined? and what are their relationship to standard $\\epsilon$?\\n\\nIn MMA training, we try to maximize each example's margin until the margin reaches $d_\\max$. 
In standard adversarial training, each example is trained to be robust at $\epsilon$. \nTherefore in MMA training, we usually set $d_\max$ to be larger than $\epsilon$ in standard adversarial training.\nIt is difficult to know the \"correct\" value of $d_\max$, therefore we tested different $d_\max$ values. But different from PGD, MMA training is insensitive to this hyperparameter. A large $d_\max$ only slightly affects clean accuracy and robustness to small perturbations.\nIn contrast, when $\epsilon$ is large in standard adversarial training, many examples that are not able to be $\epsilon$-robust are simply \"given up\" on (Figure 2, 4). Therefore clean accuracy and robust accuracies at small perturbations are largely impacted (Table 1).\n\nWe will improve the clarity of the paper accordingly.\"}", "{\"title\": \"Response to AnonReviewer3, part 2\", \"comment\": \">> 8. The proposed PGDLS is interesting, and similar to \\\"dynamic training\\\" in [3].\\n\\nThank you for finding PGDLS interesting. We will add a discussion about [3] and PGDLS.\\nAlso, we are confused as to why this item is listed as a \\\"con\\\" for our paper. Note that PGDLS is only a stronger baseline we proposed in the paper. MMA is the main contribution of this paper. Could you please clarify the question if you do think there's a con here?\\n\\n\\n>> 9. The gradient-free SPSA helps confirm the improvements of MMA under large perturbations are not a side effect of gradient masking.\\n\\nThank you for the comment. Again, we are confused as to why this item is listed as a \\\"con\\\" for our paper. Could you please clarify the question if you do think there's a con here?\"}", "{\"title\": \"Response to AnonReviewer3, part 1\", \"comment\": \"Thank you for your detailed reviews and also valuing our contributions.\\n\\nWe would like to clarify with R3 (and other readers) that many items under \\\"Cons\\\" are not strictly \\\"disadvantages\\\" of our method. Most of them seem to be clarification questions, especially as items 8 and 9 seem to be comments with positive sentiment.\\n\\nPlease let us know if any of our answers does not resolve the confusion, and we are happy to further elaborate.\\n\\n\\n>> 1. \\\"shortest successful perturbation\\\" appears like a type of weak training attack, and similar to deepfool [1] or confidence 0 CW-L2 attack [2].\\n \\nYou are right that the \\\"shortest successful perturbation\\\" is similar to deepfool and CW-L2. More precisely, deepfool, CW-L2 and our proposed AN-PGD are all algorithms for approximating the \\\"shortest successful perturbation\\\", as we mentioned in Section 2.3 \\\"Other attacks that can serve a similar purpose can also fit into our MMA training framework\\\".\\nSpecifically, the deepfool attack is not strong enough (e.g. as shown in Rony et al. 2019), probably because it only uses a first-order approximation to find $\\delta^*$, and therefore is not suitable for MMA training.\\nOn the other hand, CW-L2 is likely strong enough, but too expensive to compute during training.\\n\\nMoreover, while the \\\"shortest successful perturbation\\\", $\\delta^*$, is a \\\"weak\\\" training attack if measured in the adversarial loss, it is the **right** attack for training. 
While attacks with magnitude larger than the margin $\|\delta^*\|$ could be stronger than $\delta^*$, our margin maximization theory suggests that training on \"longer\" (and thus stronger) perturbations does not necessarily increase the margin (Section 3 and Figure 2).\", \"reference\": \"Rony et al. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, CVPR 2019\\n\\nPlease let us know if this does not directly address your concern.\\n\\n\\n>> 6. Fairness of comparison\\n\\nPlease see response to all reviewers. Also please let us know if we do not fully address your concern, and we are happy to further elaborate on this.\\n\\n\\n>> 6. Results on strong, unrestricted attacks like CW-L2\\n\\nWe reported AvgRobAcc, the average robust accuracy over different perturbation magnitudes, including those with very large magnitudes. Therefore, AvgRobAcc serves a similar purpose to the average norm of strong, unrestricted attacks.\\n\\nWe are working on experiments using CW-L2 to test our models trained with $\\ell_2$ attacks, and will report the results when ready.\\n\\n\\n>> 7. Why MMA does not improve robustness, but improves clean accuracy? This means the theoretical analysis only stand under certain circumstances.\\n\\nThis result does not contradict our theory. In contrast, it is very well aligned with our theory. \\nFor wrongly classified examples, MMA training focuses on getting them classified correctly. For correctly classified examples, our theory suggests that MMA training tries to enlarge the margins of all of them, based on their intrinsic robustness (i.e., how difficult it is for a model to achieve a large margin may differ across points). \\nOn the other hand, PGD training fails to adapt to the intrinsic robustness of different points, and thus significantly sacrifices its clean accuracy in order to achieve the slight additional robustness for large perturbation. This observation also echoes the sensitivity of PGD to its fixed (and arbitrary) perturbation magnitude. We will make this argument more clear in the paper.\\n\\nPlease let us know if we need to further clarify.\\n\\n\\n>> 7. Why do we need robustness against large $\\ell_\\infty$ perturbations? when perturbation goes large, the L2 attack (eg. CW-L2) makes more sense than PGD-$\\ell_\\infty$.\\n\\nWe agree with R3 that perturbations that cause perceptual differences shall not be included to test the robustness of the model. However, it is hard to determine the boundary of \"perceptual differences\" in terms of the perturbation magnitude. \\nCompared to PGD training, MMA provides a natural way of dealing with this dilemma: the user can set $d_\\max$ to represent the magnitude that is \"too large\". Below $d_\\max$, MMA training enlarges the margin of each individual example based on its robustness under the current model, to the maximum capacity of the model. In contrast, the fixed $\\epsilon$ in PGD training needs to be \"large enough but not too large\", which is much harder or even impossible to set, since each example could have different intrinsic robustness. 
As a result, MMA training is fairly insensitive to $d_\max$, but PGD training is very sensitive to $\epsilon$.\n\nIn terms of which norm to measure the perturbation magnitude in, we believe that it is more reasonable to evaluate the model using the norm that the model is trained on, namely \"train on $\ell_2$ test on $\ell_2$\", and \"train on $\ell_\infty$ test on $\ell_\infty$\". We will add CW-L2 results to models trained with $\ell_2$ attacks when they are ready.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your efforts in reviewing our paper and the questions. Please let us know if we do not directly address your concern in this response. We are happy to hear further feedback from you.\\n\\nBesides the contributions that you've summarized, we would also like to point out that\\n1) Our MMA training algorithm is not just a heuristic algorithm. The seemingly intuitive formulation of \\\"minimizing cross-entropy loss on shortest successful perturbation\\\" is backed up by our theories on **direct** margin maximization, and non-trivial construction of the margin's lower bound (Section 2).\\n2) Section 3 analyzes how the fixed $\\epsilon$ in standard adversarial training influences training, from a margin maximization perspective. Our theoretical predictions are supported by results in Section 4.\\n\\n\\n>> 1. why use the AvgRobAcc?\\n\\nWe believe AvgRobAcc, the average robust accuracy, is a more comprehensive measure than the robust accuracy under a fixed (and arbitrary) perturbation magnitude.\\nIn practice, it is difficult to argue at which attack magnitude the robust accuracy is more important. When there is a tradeoff, it is difficult to decide if a model with higher robust accuracy to $8/255$ attacks but lower accuracy to $16/255$ attacks is more robust or less robust. Another example would be the tradeoff between robustness and clean accuracy. It seems more reasonable to measure the \\\"area under the curve\\\", which is approximated by AvgRobAcc.\\n\\n>> 1. does it make any sense to combine black-box results and white-box results?\\n\\nOur intention is to have the strongest attack on each model to approximate the \\\"true\\\" robustness of the model. Therefore we report robust accuracy against the strongest attack among both white-box and black-box attacks.\\n\\n\\n>> 2. Fairness of evaluation\\n\\nPlease see response to all reviewers.\\n\\n>> 3. For the baseline, the authors lack some necessary baselines, like the following [1] and [2]\\n\\nWe are working on evaluating [1] and [2] under our test settings. We will report the results when ready.\\n\\nWe would also like to make a comment that, as concurrent work, our idea on directly maximizing the input space margin is orthogonal to [1]'s idea on optimizing a regularized surrogate loss, and [2]'s idea on dynamically adjusting the convergence of inner maximization.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your comments. We are glad that you find the paper clearly written and also value our theoretical results.\\n\\n>> A minor drawback is the length of the paper...\\n\\nWe will try our best to further shorten the main body of the paper.\\n\\n>> In proposition 2.4: the loss $L^{CE}$ should be clearly defined. 
\\n\\n$L^{CE}_\\\\theta = \\\\log\\\\sum_j \\\\exp(f_\\\\theta^j(x)) - f_\\\\theta^y(x)$, and we will make it clear in the paper.\\n\\n>> In equation (8), how is the weight of $L^{CE}$ and $L^{MMA}$ determined?\\n\\nWe tested 3 pairs of weights, (1/3, 2/3), (1/2, 1/2) and (2/3, 1/3), in our initial CIFAR10 Linf experiments. We observed that (1/3, 2/3), namely (1/3 for $L^{CE}$ and 2/3 for $L^{MMA}$) gives better performance.\\nWe then fixed it and use the same value for all the other experiments in the paper, including the MNIST experiments and L2 attack experiment s. We will make it clear in the appendix.\"}", "{\"title\": \"Response to All Reviewers\", \"comment\": \"We thank all reviewers for their efforts in review and thoughtful comments.\\n\\nHere we address the common concern from R2 and R3 about fairness in evaluation.\\n\\n>> R2: 2. For the epsilon, since it is different from the standard adversarial settings, how to guarantee the fair comparison? For example, how to evaluate the performance of MMA-12 to PGD-8 under the same test attack PGD-8?\\n>> R3: 6. Fairness of the comparison. Since MMA changes $\\\\epsilon$, how to fairly compare the robustness to standard epsilon bounded adversarial training is not discussed. Is it fair to compare MMA-3.0 vs PGD-2.5, since they have different epsilon?\", \"we_believe_the_comparison_is_fair_for_3_reasons\": \"1) the test settings are strong enough and are the same for all models, regardless of how they are trained; 2) Due to the different meanings of $d_\\\\max$ and $\\\\epsilon$, it is not clear what value of $d_\\\\max$ is a fair comparison to $\\\\epsilon = 8/255$. Instead, in the paper we compared a group of MMA trained models to a group of PGD trained models with different $d_\\\\max$'s and different $\\\\epsilon$'s (or the best from MMA to the best from PGD); 3) because PGD trained models are sometimes tested on the same PGD attack used for training, the evaluation is at least not in favour of MMA training.\", \"to_elaborate_on_the_first_two_points\": \"1) Regardless of how a model is trained, we care about whether it is robust at test time. We believe it is a fair comparison for different models, as long as they are evaluated under the **same** test setting, and the testing attacks are strong enough for each model.\\nNote that although our training algorithm is different from the standard adversarial training, we do use the same **standard adversarial test settings**, i.e. evaluating robust accuracies under repeated PGD attacks (both whitebox and transfer) at different perturbation magnitudes.\\nSpecifically, although trained with different algorithms, when we compare MMA-12 and PGD-8 models wrt their robustness at 8/255 perturbation magnitude for the CIFAR10-$\\\\ell_\\\\infty$ case, we believe that the same testing protocol with repeated PGD-8 attacks is strong enough on both MMA-12 and PGD-8, therefore it is a fair comparison.\\n\\n2) Since MMA training and PGD training have different types of hyperparameters, one-to-one comparison between MMA trained and PGD trained models might not be fair, e.g. MMA-12 vs PGD-8. However, in our evaluation, we trained a group of models for both MMA training and PGD training, covering reasonable values of $d_\\\\max$ and $\\\\epsilon$. Our comparison is between the MMA group and PGD group.\\n(To R3: Your comment might be related to Figure 4. Our intention was to verify the theory that MMA will be able to uniformly increase the margins of different data points while PGD cannot. 
PGD-2.5 vs MMA-3.0 is an arbitrary choice for comparison. In Figure 4, we are interested in the qualitative analysis of the pattern, rather than a concrete quantitative metric.)\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe paper proposes to use maximal margin optimization for correctly classified examples while keeping the optimization on misclassified examples unchanged. Specifically, for correctly classified examples, MMA adopts cross-entropy loss on adversarial examples, which are generated with an example-dependent perturbation limit. For misclassified examples, MMA directly applies cross-entropy loss on natural examples.\", \"problems\": \"1. For the performance measurement, why use the AvgRobAcc? does it make any sense to combine black-box results and white-box results?\\n2. For the epsilon, since it is different from the standard adversarial settings, how to guarantee the fair comparison? For example, how to evaluate the performance of MMA-12 to PGD-8 under the same test attack PGD-8?\\n3. For the baseline, the authors lack some necessary baselines, like the following [1] and [2]\\n[1] Theoretically Principled Trade-off between Robustness and Accuracy. ICML 2019\\n[2] On the Convergence and Robustness of Adversarial Training. ICML2019\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes a method, Max-Margin Adversarial (MMA) training, for robust learning against adversarial attacks. In the MMA, the margin in the input space is directly maximized. In order to alleviate instability in the learning, a softmax variant of the max-margin is introduced. Moreover, the margin-maximization and the minimization of the worst-case loss are studied. Some numerical experiments show that the proposed MMA training is efficient against several adversarial attacks.\", \"review:\", \"Overall, this paper is clearly written, and the readability is high. Though the idea in this paper is rather simple and straightforward, some theoretical support is presented. A minor drawback is the length of the paper. The authors could shorten the paper to within eight pages, which is the standard length of an ICLR paper.\", \"In proposition 2.4: the loss L^{CE} should be clearly defined.\", \"In equation (8), how is the weight of L^CE and L^MMA determined?\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper proposes an adaptive margin-based adversarial training (e.g., MMA) approach to train robust DNNs by maximizing the shortest margin of inputs to the decision boundary. Theoretical analyses have been provided to understand the connection between robust optimization and margin maximization. The main difference between the proposed approach and standard adversarial training is the adaptive selection of the perturbation bound \\epsilon. 
This makes adversarial training with large perturbation possible, which was previously unachievable by standard adversarial training (Madry et al.) Empirical results match the theoretical analysis.\", \"pros\": \"1. The margin maximization idea has been well-explained, both intuitively and theoretically.\\n2. Interesting theoretical analyses and understandings of robust optimization from the margin perspective.\\n3. Clear advantage of MMA over standard adversarial training under large perturbations.\\n\\n8. The proposed PGDLS is very interesting, actually quite good and much simpler, without extra computational cost. A similar idea was discussed in paper [3], where they gradually increase the convergence quality of training adversarial examples, and show the convergence guarantee of \\\"dynamic training\\\".\\n9. The gradient-free SPSA helps confirm the improvements of MMA under large perturbations are not a side effect of gradient masking.\", \"cons\": \"1. The idea of \\\"shortest successful perturbation\\\" appears like a type of weak training attack, looking for minimum perturbations to just cross the classification boundary, like deepfool [1] or confidence 0 CW-L2 attack [2]. \\n2. The margin d_\\\\theta in Equation (1)/(2)/... defined on which norm? L_\\\\infty or L2 norm? I assume it's the infinity norm. In Theorem 2.1, the \\\\delta^{*} = argmin ||\\\\delta||, is a norm? Looks like a mistake. \\n3. The minimum margin \\\\delta^{*} is a bit confusing, is it used in maximization or just in the outer minimization? The last paragraph of page 3, L(\\\\theta, \\\\delta) or L(\\\\delta, \\\\theta), consistency check?\\n4. Why do we need the \\\"gradients of margins to model parameters\\\" analysis from Proposition 2.1 to remark 2.2? Given the \\\\delta^{*} found in the inner maximization (eg. attacking) process (step 1), minimizing the loss over this \\\\delta^{*} seems quite a straightforward step 2. Why don't go directly from Theorem 2.1 to Proposition 2.4, since the extensions from LM loss to SLM and CE loss via Proposition 2.3 -> Proposition 2.4., just proves that the standard classification loss CE can already maximize the margin given \\\\delta^{*}? \\n5. Section 4 Experiments. The experimental settings are not clear, and are not standard. What CIFAR10-\\\\ell_{\\\\infty} means: is it the CW-L2 attack, used for training, or for testing? How the test attacks were generated, the m and N, are confusing: for each test image, you have 260 samples for CIFAR10 (which means 260*10K in total), or just 260 in total (this is far less than a typical setting causing inaccurate results)? How are the d_max determined, and what are their relationship to standard \\\\epsilon? How the m models were trained?\\n6. Fairness of the comparison. Since MMA changes \\\\epsilon, how to fairly compare the robustness to standard epsilon bounded adversarial training is not discussed. Is it fair to compare MMA-3.0 vs PGD-2.5, since they have different epsilon? Why robustness was not tested against strong, unrestricted attacks like CW-L2 [2], and report the average L2 perturbations required to completely break the robustly trained model (and show MMA-trained models enforce large perturbations to succeed)?\\n7. Significance of the results. Normally, \\\\epsilon_{infty} > 16/255 will cause perceptual difference. Under 16/255, PGD-8/16, PGDLS-8/16 are still the best. At this level, it is quite a surprise that MMA does not improve robustness, although it does increase clean accuracy. 
This means the theoretical analysis only stands under certain circumstances. I don't think the optimal \\epsilon < margin can explain this, as it does not make sense to me that the margin can be larger than 16/255. On the other hand, I thought the theoretical parts were discussing the ROBUSTNESS, not the CLEAN ACCURACY? But it turns out that MMA benefits the clean accuracy a lot? Why do we need robustness against large \\infty perturbations? This definitely deserves more discussion, as when the perturbation goes large, the L2 attack (e.g., CW-L2) makes more sense than PGD-\\infty.\n\n\n[1] Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. \"Deepfool: a simple and accurate method to fool deep neural networks.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\n[2] Carlini, Nicholas, and David Wagner. \"Towards evaluating the robustness of neural networks.\" 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.\n[3] Wang, Yisen, et al. \"On the Convergence and Robustness of Adversarial Training.\" International Conference on Machine Learning. 2019.\n\n============\nMy rating stays the same after reading through all the responses. I appreciate the authors' clarification on the notations and experimental settings. My 8/9 are positive points. My major concern is still the effectiveness of the proposed approach, and the fairness of the comparison. It seems that MMA only works when the perturbation is large, which is often larger than the \\epsilon used to train baseline adversarial training methods such as Trades. The authors seem to have misunderstood my request for CWL2 results, I was just suggesting that the average L2 perturbation of the CWL2 attack can be used as a fair test measure for robustness, instead of the AvgRobAcc used in the paper, and the questionable comparison between MMA-12 vs PGD-8, or MMA-32 vs Trades.\"}", "{\"title\": \"thank you and wrt TRADES\", \"comment\": \"Thank you for your kind comment.\\n\\nWe haven't tried training TRADES with a larger epsilon yet. Training with clean data is essentially training with epsilon=0. According to the theory in our paper, for already correctly classified data, training on clean data is also \\\"adversarial training with an epsilon smaller than the margin\\\", so it maximizes a lower bound (although loose) of the margin. Combining this with the training on \\\"adversarial data via surrogate-loss minimization\\\", it could be possible that TRADES is less sensitive to hyper-parameter settings wrt large epsilon values, and TRADES-24/32 converges.\\n\\nHowever, we note that even if TRADES-24/32 converges, all the data points are still using a single epsilon, and thus 1) the choice of epsilon is arbitrary; 2) fixing epsilon does not consider that different data points might have different intrinsic robustness. Since our idea is orthogonal to the idea of TRADES, they could potentially be combined for further improvements.\"}", "{\"comment\": \"It is great work.\\n\\nSince PGD adversarial training trains only on adversarial examples, when the epsilon becomes larger, it will be hard to converge. Thus, the models PGD-24 and PGD-32 are hard to train. 
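For readers following this exchange: PGD adversarial training crafts each training batch by projected gradient ascent on the loss inside a fixed $\epsilon$-ball, so at large $\epsilon$ the model only ever sees heavily perturbed inputs. Below is a minimal PGD-$\ell_\infty$ sketch in PyTorch; the step size, step count, and [0, 1] input range are illustrative assumptions, not the settings of any model in Table 1.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent on the loss within the L_inf ball of
    radius eps around x; assumes `model` maps [0, 1] batches to logits."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x  # keep x + delta a valid image
    return (x + delta).detach()
```

Standard adversarial training then minimizes the loss on `pgd_linf(model, x, y)` in place of `x`, which is why a large fixed `eps` dominates every gradient the model sees.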
\\n\\nIn Table 1, the results of PGD-24 show bad performance on both clean data and avdersarial data, which reveals PGD-24 may need a stronger neural network.\\n\\nThe training of PGDLS-24 can converge, as the its learning process is from the easy way to the difficult way, by linearly increasing the epsilon.\\n\\nHave the authors tried another stronger baseline, TRADES[1], to compare with, such TRADES-24 and TRADES-32. For TRADES, it trains with both clean data and adversarial data via surrogate-loss minimization. So I think it will converge and show less sensitivity to hyperparameter setting when the epsilon becomes large.\\n\\n[1] Theoretically Principled Trade-off between Robustness and Accuracy. ICML 2019\", \"title\": \"Great work.\"}" ] }
ByxHJeBYDB
Forecasting Deep Learning Dynamics with Applications to Hyperparameter Tuning
[ "Piotr Kozakowski", "Łukasz Kaiser", "Afroz Mohiuddin" ]
Well-performing deep learning models have enormous impact, but getting them to perform well is complicated, as the model architecture must be chosen and a number of hyperparameters tuned. This requires experimentation, which is time-consuming and costly. We propose to address the problem of hyperparameter tuning by learning to forecast the training behaviour of deep learning architectures. Concretely, we introduce a forecasting model that, given a hyperparameter schedule (e.g., learning rate, weight decay) and a history of training observations (such as loss and accuracy), predicts how the training will continue. Naturally, forecasting is much faster and less expensive than running actual deep learning experiments. The main question we study is whether the forecasting model is good enough to be of use - can it indeed replace real experiments? We answer this affirmatively in two ways. For one, we show that the forecasted curves are close to real ones. On the practical side, we apply our forecaster to learn hyperparameter tuning policies. We experiment on a version of ResNet on CIFAR10 and on Transformer in a language modeling task. The policies learned using our forecaster match or exceed the ones learned in real experiments, and in one case even the default schedules discovered by researchers. We study the learning rate schedules created using the forecaster and find that they are not only effective, but also lead to interesting insights.
[ "deep learning dynamics", "applications", "forecasting model", "real experiments", "forecaster", "policies", "deep learning models", "enormous impact", "model architecture", "number" ]
Reject
https://openreview.net/pdf?id=ByxHJeBYDB
https://openreview.net/forum?id=ByxHJeBYDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "9ueaOnplT", "rylURM6jjB", "rJla3R_jjr", "rkgeDR_ojB", "HJlRC3djiB", "ByghS3dssB", "HkgABKrXir", "rye5LcLAYH", "Hyxs5tB0FS", "SyeR04XiuS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739476, 1573798606348, 1573781172883, 1573781080266, 1573780693909, 1573780547679, 1573243205655, 1571871313952, 1571867026820, 1570612438259 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2060/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2060/Authors" ], [ "ICLR.cc/2020/Conference/Paper2060/Authors" ], [ "ICLR.cc/2020/Conference/Paper2060/Authors" ], [ "ICLR.cc/2020/Conference/Paper2060/Authors" ], [ "~Micah_Goldblum1" ], [ "ICLR.cc/2020/Conference/Paper2060/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2060/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2060/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper trains a transformer to extrapolate learning curves, and uses this in a model-based RL framework to automatically tune hyperparameters. This might be a good approach, but it's hard to know because the experiments don't include direct comparisons against existing hyperparameter optimization/adaptation techniques (either the ones based on extrapolating training curves, or standard ones like BayesOpt or PBT). The presentation is also fairly informal, and it's not clear if a reader would be able to reproduce the results. Overall, I think there's significant cleanup and additional experiments needed before publication in ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Replying to the authors' response\", \"comment\": \"Thank you for reading my review and responding to my comments.\"}", "{\"title\": \"Response to the comment\", \"comment\": \"Thank you for mentioning this connection. We have added it to the updated version of our work.\"}", "{\"title\": \"Response to the reviewer's comments\", \"comment\": \"Thank you for the insightful review.\\n\\nWe updated the paper with better results and more tasks. We show that our method outperforms the human baseline in terms of training speed and either matches or outperforms the human in terms of final accuracy on all tasks. While it is true that the human baseline does not require any additional computational resources for training, it does require domain expertise acquired through years of learning, which is arguably even more costly. Notably, in all 4 problems where we compare to the human baseline, we believe that human researchers used a similar or higher number of runs as our tuner to design the baseline schedules that we compare against.\\n\\nWe also updated the paper with more details regarding Transformer and Proximal Policy Optimization.\\n\\nThank you for mentioning the existing learning curve modeling methods. We added an explanation of differences of our method with those works. [1] learn a probabilistic model of one training curve using a handcrafted basis of nonlinear functions of shapes similar to the training curves being modelled. Our method does not make any assumptions about the shape of the modelled curves and is able to jointly model many training curves - in our experiments, training and validation loss and accuracy. 
[2] learn a deterministic model of a learning curve, while our method also models stochasticity, hence providing diverse experience for training a reinforcement learning agent. Also in contrast to [1] and [2], our method allows the hyperparameters to change over the course of training and models the influence of those changes on the training metrics.\\n\\n[1] Baker, Bowen, et al. \\\"Accelerating neural architecture search using performance prediction.\\\" arXiv preprint arXiv:1705.10823 (2017).\\n\\n[2] Domhan, Tobias, Jost Tobias Springenberg, and Frank Hutter. \\\"Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves.\\\" Twenty-Fourth International Joint Conference on Artificial Intelligence. 2015.\"}", "{\"title\": \"Response to the reviewer's comments\", \"comment\": \"We thank the reviewer for their comprehensive review.\\n\\nWe updated the paper with better results over more tasks, either matching or outperforming the human baseline in terms of final accuracy, and outperforming the model-free baseline in all cases. We also included results over multiple runs of all experiments, showing the minimum, maximum and mean accuracy.\\n\\n1. While it is true that the manually-tuned baseline we provided is simple, it is a standard practice in the field to adjust the learning rate during training and keep the rest of the hyperparameters constant. Adjusting all of them requires significantly more effort and is infeasible in many cases.\\n\\n2. Due to time constraints, we have not benchmarked our method against more hyperparameter-tuning baselines yet. We agree that it would be a very valuable comparison and leave that for future work. Nevertheless, please note that the human baselines we use for Transformer have been tuned by researchers using auto-tuners among other tools.\\n\\n3. [1] successfully use PPO with an LSTM policy on a challenging, partially-observable environment. It is equally principled to use a Transformer policy, since both would operate on the same sequence of observations. The SimPLe algorithm runs PPO on an MDP approximated by a powerful model that handles stochasticity well, which is also a valid approach.\\n\\n4. We updated the paper with a justification of our action discretization scheme. Such a discretization has a number of benefits, including multi-modality, which cannot be achieved using a parameterized Gaussian policy. [2] show that discretization of the action space improves the average performance, stability and robustness to hyperparameters of reinforcement learning agents on a range of continuous control tasks.\\n\\n5. While we have not included such transfer experiments in our current work, we do believe that a model trained on enough architectures and tasks will generalize to new ones. For instance, in the updated version of the paper, we show that the learned policy employs similar learning rate and weight decay rate adjustment schemes across very different tasks. Substantiating this claim in the general case will likely require a large-scale study, which we plan to perform in the future.\\n\\n[1] OpenAI et al. \\u201cLearning Dexterous In-Hand Manipulation\\u201d, arXiv preprint arXiv:1808.00177 (2018)\\n\\n[2] Tang et al. 
\u201cDiscretizing Continuous Action Space for On-Policy Optimization\u201d, arXiv preprint 1901.10500 (2019)\"}", "{\"title\": \"Response to the reviewer's comments\", \"comment\": \"We thank the reviewer for the effort, however we believe there is a misunderstanding.\\n\\nAs for the synthetic curves experiment, we updated the paper with a justification. This task, while simple, showcases the ability of Transformer to model a distribution over curves of similar shape to real training curves with varying speeds of convergence. It has been designed so it is easy to quantify the diversity of generated curves and the fit between the distribution generated by the model and the real one. Furthermore, we included two additional tasks, attesting to the ability of Transformer to model a wide range of distributions over training curves. We also updated the citation of the paper you mentioned with an arxiv URL.\\n\\nWe still believe that while focusing on the synthetic task the reviewer might have missed the main point of the paper, namely that time-series forecasting with Transformer works really well, at least in the context of modeling deep learning dynamics. The general problem has been studied in the community for many decades and we believe that we made significant progress, so we kindly encourage the reviewer to reconsider their assessment of our contributions.\"}", "{\"title\": \"An Interesting Connection\", \"comment\": \"Hi Authors,\\nThank you for your interesting paper. I wanted to bring to your attention that your insights into learning rate and weight decay are related to our paper, which shows that an alternative to weight decay may stabilize the effective learning rate and can improve performance.[1] Please consider mentioning the relationship with our work in your next version.\\n\\n[1] https://arxiv.org/abs/1910.00359\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed to train a network with training curves and corresponding parameters, and to use policy search to find optimal parameters, replacing hundreds or thousands of training runs in real-world scenarios; it is clearly much faster to use the trained network to infer parameters, instead of tuning the network manually.\", \"the_first_point_would_be\": \"what's the meaning of synthetically generating training curves other than proving that transformer achieves good performance in modeling discrete distributions? Most practical problems would not have the same distribution as the previously gathered public dataset, thus the data is not representative, and synthetic training curves just do not make sense.\\n\\nThe cited paper 'Learning an adaptive learning rate schedule' does not appear online.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper investigates the possibility of learning a model to predict the training behaviour of deep learning architectures from hyperparameter information and a history of training observations. The model can then be used by researchers or a reinforcement learning agent to make better hyperparameter choices. 
The paper first adapts the Transformer model to be suitable to this prediction task by introducing a discretization scheme that prevents the transformer decoder's predictions from collapsing to a single curve. Next, the problem is formalized as a partially-observable MDP with a discrete action set, and PPO and SimPLe are introduced. The proposed model-based method is compared against a human and a model-free baseline training a Wide ResNet on CIFAR-10. The model-based method achieves better validation error than the other baselines that use actual data. Next, the method is compared against a human and a model-free baseline training Transformer models on the Penn Treebank dataset. While the human achieves the best performance at the end of the run, the proposed method appears to learn more quickly than the others and finishes with performance comparable to the model-free baseline.\\n\\nCurrently I lean towards accepting this paper for publication, despite a few issues. It asks an interesting question: can we learn a model of the training dynamics to avoid actually having to do the training? This could potentially prevent a lot of unnecessary computation and also lead to better-performing models. It then shows some experimental evidence suggesting that this is possible.\\n\\nMost importantly, I would like to see a measure of variance/uncertainty like confidence intervals included in the results; otherwise it's impossible to assess whether the results are likely to be significant or not. Other questions:\\n1. In the PTB experiment, it looks like the human only adapts the learning rate and leaves the rest of the hyperparameters alone. Why was this policy used as the baseline? It seems extremely basic and unlikely to truly lead to optimal performance.\\n2. Why were more baselines from the related work not included? I understand the experiments are a proof of concept, but it would be nice to get a feeling for what some of the other methods do.\\n3. How do PPO and SimPLe handle partial observability? Is it principled to apply them to partially-observable environments?\\n4. Why not use continuous actions with a parameterized policy (e.g. Gaussian)?\\n5. Is it reasonable to assume that the learning dynamics of all deep learning architectures are similar enough that a model trained on one set of deep learning architectures and problems will generalize to new architectures and problems?\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work focuses on learning a good policy for hyperparameter schedulers, for example for the learning rate or weight decay, using reinforcement learning. The main contributions include 1) a discretization of the learning curves such that a transformer can be applied to predict them; 2) an empirical evaluation using the predicted learning curves to train the policy.\\n\\nThe main novelties are two-fold. On the methodology side, using predicted learning curves instead of real ones can speed up training significantly. On the technical side, the authors present a discretization step to use a transformer for learning curve predictions. The results are mixed: we see a slight advantage over the human baseline on one task but worse performance on the other. The human baseline does not need any training! 
On the writing part, it would be nice to provide more context for the Transformer, Proximal Policy Optimization, and Simulated Policy Learning to make the paper more self-contained.\\n\\nI like the direction of using surrogates to speed up HPO in general, but I feel the learning curve prediction part can be improved. There are already some works, not using deep learning methods, for example the following:\\n\\n* Baker, Bowen, et al. \\\"Accelerating neural architecture search using performance prediction.\\\" arXiv preprint arXiv:1705.10823 (2017).\\n* Domhan, Tobias, Jost Tobias Springenberg, and Frank Hutter. \\\"Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves.\\\" Twenty-Fourth International Joint Conference on Artificial Intelligence. 2015.\\n\\nWhy were these methods not considered from the beginning? In my opinion, the transformer is good at modeling long-term dependencies and concurrent predictions, which is not necessarily what learning curves require. How does the transformer-based method compare to the others?\"}" ] }
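The discussion above (both the authors' response citing Tang et al., 2019, and the reviewer's question about Gaussian policies) turns on discretizing a continuous hyperparameter action so that a categorical policy head can express multi-modal choices. Below is a minimal illustrative sketch of that idea in PyTorch; it is not the code of the paper under review, and the bin grid, hidden size, and class name are assumptions made for illustration.

```python
# Sketch: a categorical policy head over a discretized learning-rate grid.
# Assumptions (not from the paper): log-spaced grid from 1e-5 to 1e-1,
# 16 bins, 64-dim policy features.
import torch
import torch.nn as nn

class DiscreteLRHead(nn.Module):
    """Categorical head over a log-spaced grid of learning rates.

    Unlike a single Gaussian, a categorical head over bins can put mass on
    several distinct actions at once (e.g. "keep the LR" vs "drop it 10x").
    """
    def __init__(self, hidden_dim: int = 64, num_bins: int = 16):
        super().__init__()
        self.register_buffer("lr_grid", torch.logspace(-5, -1, num_bins))
        self.logits = nn.Linear(hidden_dim, num_bins)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Sample a bin index per state, then map it to a concrete LR value.
        dist = torch.distributions.Categorical(logits=self.logits(h))
        idx = dist.sample()
        return self.lr_grid[idx]

head = DiscreteLRHead()
state = torch.randn(4, 64)   # dummy policy features for a batch of 4 states
print(head(state))           # four learning rates sampled from the grid
```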
BJeVklHtPr
Batch Normalization has Multiple Benefits: An Empirical Study on Residual Networks
[ "Soham De", "Samuel L Smith" ]
Many state of the art models rely on two architectural innovations; skip connections and batch normalization. However batch normalization has a number of limitations. It breaks the independence between training examples within a batch, performs poorly when the batch size is too small, and significantly increases the cost of computing a parameter update in some models. This work identifies two practical benefits of batch normalization. First, it improves the final test accuracy. Second, it enables efficient training with larger batches and larger learning rates. However we demonstrate that the increase in the largest stable learning rate does not explain why the final test accuracy is increased under a finite epoch budget. Furthermore, we show that the gap in test accuracy between residual networks with and without batch normalization can be dramatically reduced by improving the initialization scheme. We introduce “ZeroInit”, which trains a 1000 layer deep Wide-ResNet without normalization to 94.3% test accuracy on CIFAR-10 in 200 epochs at batch size 64. This initialization scheme outperforms batch normalization when the batch size is very small, and is competitive with batch normalization for batch sizes that are not too large. We also show that ZeroInit matches the validation accuracy of batch normalization when training ResNet-50-V2 on ImageNet at batch size 1024.
[ "batch normalization", "residual networks", "initialization", "batch size", "learning rate", "ImageNet" ]
Reject
https://openreview.net/pdf?id=BJeVklHtPr
https://openreview.net/forum?id=BJeVklHtPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "UsEOQVKJ4O", "H1emDY7njr", "r1egVKj-iH", "rJePYYZWjr", "r1e2BWceoS", "S1e5lyqxor", "SkguBLi29r", "BylOW51AKr", "r1e_OsV6tS", "BkeN40YhKH", "H1xVT-sedB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798739447, 1573824858542, 1573136679761, 1573095806826, 1573065027753, 1573064433527, 1572808256475, 1571842559686, 1571797872049, 1571753515655, 1569923515971 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2059/Authors" ], [ "ICLR.cc/2020/Conference/Paper2059/Authors" ], [ "ICLR.cc/2020/Conference/Paper2059/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2059/Authors" ], [ "ICLR.cc/2020/Conference/Paper2059/Authors" ], [ "~Antoine_Labatie1" ], [ "ICLR.cc/2020/Conference/Paper2059/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2059/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2059/Authors" ], [ "~Antoine_Labatie1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper is rejected based on unanimous reviews.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Additional comparisons with dropout\", \"comment\": \"We have performed the additional experiments requested by the reviewer. Please find below the comparisons between batch normalization, Fixup and ZeroInit, both with and without dropout. The experiments presented are for ImageNet classification with ResNet50-V2. When using dropout, we use a drop probability of 0.2 on the final classification layer for all methods.\", \"without_dropout\": \"Batch size BatchNorm Fixup ZeroInit without Dropout\\n\\n1024 74.93 / 92.13 74.60 / 91.69 74.61 / 91.81\", \"with_dropout\": \"Batch size BatchNorm w/ dropout Fixup w/ dropout ZeroInit\\n\\n1024 74.82 / 91.98 75.62 / 92.54 75.46 / 92.53\\n\\n\\nThese results seem to indicate that Fixup (like ZeroInit) does better with added regularization through dropout, and becomes comparable to ZeroInit at small batch sizes. Further, we see that batch normalization seems to do worse when dropout is added. \\n\\nNote that we independently tuned the learning rate for each of the experimental results shown above. We are currently in the process of evaluating these algorithms with dropout on other batch sizes.\"}", "{\"title\": \"Response to review\", \"comment\": \"We thank the reviewer for their assessment of our work. The reviewer agrees that our results are extensive but is unclear what the major contributions of this paper are. To clarify:\\n\\n1. The two most influential recent works studying the benefits of batch normalization are Bjorck et al. and Santurkar et al. (both NeurIPS 2018). Both papers argue that the key benefit of batch normalization is to improve the loss conditioning, which enables stable training with larger learning rates. Our experiments prove that this statement is false. When the batch size is small, the optimal learning rate with and without batch normalization is also small, yet batch normalized networks still achieve significantly higher test accuracies and lower training losses. Large learning rates cannot be the key benefit of batch normalization in residual networks.\\n\\n2. There is great interest in finding alternatives to batch normalization. 
We propose an extremely simple initialization scheme, ZeroInit, which is competitive with batch normalization and can be trained without any normalization. The key component of ZeroInit is to add a scalar multiplier at the end of each residual branch initialized to zero. Note that this can be implemented in a single line of code.\\n\\n3. ZeroInit is similar to the recently proposed Fixup initialization (Zhang et al., ICLR 2019). However, Zhang et al. argued that the key component of Fixup is to rescale the conv layers inside residual branches at initialization. We show empirically that this component is completely unnecessary, even if L2 regularization is also removed. \\n\\n4. Zhang et al. also argued that Fixup is stable at the same large learning rates as batch normalization. Again, we show this claim is false. Both ZeroInit and Fixup are only stable at smaller learning rates and consequently they are both only competitive with batch normalization for small/moderate batch sizes (eg < 1000 on ImageNet)\\n\\n5. Entire papers have been written whose sole purpose is to provide an alternative to batch normalization when the batch size is too small to estimate batch statistics. Examples include GroupNorm (over 250 citations) and batch renormalization (over 100 citations). ZeroInit can be trained at batch size 1 without any drop in final performance.\\n\\nIn summary, we believe our work contains a number of valuable novel contributions. Crucially, our paper does not confirm the results of previous studies. Instead, it shows that the core claims in a number of highly influential papers are false empirically, while also proposing an alternative to batch normalization in residual networks which is significantly simpler to implement than existing methods.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper conducts extensive experiments to study batch normalization, a very popular technique for training a deep convolutional network and its relationship with learning rate and batch size. In addition, the authors also propose a new initialization scheme, \\u201cZeroInit\\u201d, to train a deep ResNet for better test accuracy. This is a very empirical study and the authors also show extensive experimental results. However, I do not see any novel findings in this study. Mostly this paper confirms the results of previous studies. The experimental results do not show much advantage of ZeroInit either. Overall, it is unclear what is the major novel contribution in this paper.\"}", "{\"title\": \"Response to review\", \"comment\": \"We thank the reviewer for their comments on our work.\\n\\nFirst, we would like to clarify that we do not remove weight decay from Fixup. We train with weight decay throughout the paper, and we provide the decay coefficients used for each experiment. The only experiment for which we removed weight decay is the ablation study in table 3. This ablation study confirms that the rescaling of conv layers proposed in Fixup is not required to train very deep residual networks without batch normalization, directly contradicting claims made in the Fixup paper. 
We provided this ablation because the loss function of very deep networks at initialization is dominated by the L2 loss, and we were concerned that this may implicitly rescale the parameters in the conv layers early in training, even though ZeroInit does not rescale these parameters explicitly.\\n\\nSecond, we emphasize that a major contribution of this work is to study the benefits of batch normalization empirically. Bjorck et al. and Santurkar et al. (both NeurIPS 2018) claimed that the key benefit of batch normalization is to improve the conditioning of the loss and increase the largest stable learning rate. Our results show this claim is false. When the batch size is small, the optimal learning rate both with and without batch normalization is small, yet batch normalization still significantly increases the test accuracy and reduces the training loss. Batch normalization does enable larger learning rates, but this is only beneficial when the batch size is large.\\n\\nFinally, we introduced ZeroInit, which is significantly simpler than Fixup, well motivated by theory and achieves the same performance on ImageNet. Furthermore, the authors of Fixup initialization claim that Fixup enables stable training at the same large learning rates achieved by batch normalization, but we show that this is not true. Both ZeroInit and Fixup are not stable with very large learning rates, and consequently both are only competitive with batch normalization for small/moderate batch sizes (eg < 1000 on ImageNet).\\n\\nWe now address some of the additional points brought up in the review below.\\n\\n1. The reviewer mentions multiple times that we only provide a study of minibatch size on CIFAR-10. However we also provide experiments at a range of batch sizes between 256 and 4096 on ImageNet in tables 4 and 5. These results verify that ImageNet follows the same trends we observed on CIFAR-10.\\n\\n2. \\u201cGiven access to sufficient hardware, this will enable practitioners to dramatically reduce wallclock time of training (Goyal et al.)\\u201d: Our point is that methods like batch normalization which enable larger learning rates are particularly useful when one wishes to minimize the wallclock time of training, since one can increase both batch size and learning rate, and then parallelize computation over multiple GPUs. If large learning rates were not stable, Goyal et al. would not have been able to increase the learning rate and batch size to reduce the wall clock time.\\n\\n3. Biases are often used in conv layers, but these are usually removed when batch normalization is used. When replacing batch normalization with ZeroInit, we simply add these biases back into the network. As we show in the ablation study in table 1, these added biases only bring marginal benefits while the scalar multiplier initialized at zero is essential. We note that the simplicity of ZeroInit is a key positive contribution of our work.\\n\\n4. The reviewer asks for a proper evaluation of Fixup and ZeroInit for image classification. However, we already provide a thorough comparison of Fixup and ZeroInit on ImageNet at a range of batch sizes in tables 4 and 5 for both ResNet50-V1 and ResNet50-V2. We find that ZeroInit outperforms Fixup when the batch size is small but slightly underperforms Fixup when the batch size is large.\\n\\n5. The reviewer also asks for ImageNet results for Fixup with dropout. 
We note that in table 4, we provided ImageNet results for ZeroInit without dropout in order to enable a fair comparison to Fixup without additional regularization. As we stated in the text, ZeroInit without dropout performs similarly to both Fixup and batch normalization when the batch size is small. That said, we will run additional experiments on ImageNet with Dropout for both batch normalized networks and Fixup initialization and add these to the text.\\n\\n6. We were not aware before submission that Fixup had originally been called ZeroInit. We would be willing to change the name of the method to avoid confusion.\\n\\n7. We will add a citation to \\u201cBag of Tricks for Image Classification with Convolutional Neural Networks\\u201d. As clarified above, we did not remove weight decay from our networks, although we did confirm in an ablation study that weight decay is not required. We note that we did mention on page 6 that Goyal et al. set the scalar multiplier inside batch normalization to zero at initialization at the end of the residual branch.\"}", "{\"title\": \"Response to review\", \"comment\": \"We thank the reviewer for their comments. However their review also contains a number of misunderstandings regarding our work (which we will address when we update the manuscript). We hope the reviewer might reconsider their score if we clarify our contributions here.\\n\\nBjorck et al. and Santurkar et al. both claim that the primary benefit of batch normalization which explains its superior performance is that it improves the conditioning of the loss, enabling stable training with larger learning rates. We show empirically that this statement is false. Batch normalization does enable larger learning rates, and this explains why it is possible to efficiently train batch normalized networks with larger batch sizes. However when the batch size is small, the optimal learning rate both with and without batch normalization is also small, yet batch normalization continues to significantly increase the test accuracy and reduce the training loss.\\n\\nIn addition, we propose a simple and theoretically motivated initialization scheme, ZeroInit, which enables us to train very deep residual networks to high test accuracy without any normalization. This scheme is similar to Fixup initialization but it is significantly simpler to implement and is based on clear theoretical principles. We also demonstrate that many components of Fixup which the authors claim are essential are in fact unnecessary (most notably the rescaling of conv layers at initialization).\\n\\nThe authors of Fixup initialization also claimed that Fixup initialized networks could be trained at the same large learning rates as batch normalized networks. We show that this claim is false. Unlike batch normalization, both ZeroInit and Fixup cannot be trained with very large learning rates. Consequently, both schemes are competitive with batch normalization for small/moderate batch sizes but both underperform batch normalization when the batch size is large.\\n\\nWe believe these contributions will be very valuable to the ML community. In response to the specific negative points raised:\\n\\n1. See the discussion of our contributions above. Our paper demonstrates that a key claim of both Bjorck et al. and Santurkar et al. 
is false; stable training with large learning rates explains why batch normalized networks can be trained with large batch sizes but it does not explain why batch normalization significantly increases the test accuracy and reduces the training loss when the batch size is small. Although our results are primarily empirical, the success of ZeroInit strongly suggests that one of the key benefits of batch normalization in residual networks is to preserve gradient correlation, as proposed by Balduzzi et al..\\n\\n2. Introducing a scalar multiplier to the residual branch initialized at zero ensures that, at initialization, the signal only propagates through the skip connection and therefore the residual block computes an identity function (trivially a linear function). This ensures that the network at initialization is close to linear, preserving the gradient correlations.\\n\\n3. If we did not initialize the scalar multipliers at zero, the residual block would not compute the identity function at initialization, and therefore the gradient correlations would not be preserved. It is not necessary to initialize the biases at zero, although this is common practice. We will be happy to add an additional ablation study exploring this topic to the text.\\n\\n4. We will be happy to try to clarify this in the updated version. However, we feel that the definition of ZeroInit at the bottom of page 5 is sufficiently clear for future authors to implement.\\n\\n5. Like Fixup, ZeroInit is designed for ResNets, and it cannot be trivially extended to other networks. However it does suggest a simple guiding principle for ensuring that deep networks are trainable at initialization, namely that one should ensure that networks are randomly initialized at the boundary between linear and nonlinear functions. For instance, Xiao et al. [1] found that one can train very deep convolutional networks without batch normalization by choosing an initialization scheme with this property. We can clarify this in an updated version of the paper.\\n\\n6. As clarified above, we did not claim that batch normalization does not improve the conditioning of the loss. Batch normalization does improve the conditioning of the loss, however our results show that this does not explain why batch normalization significantly increases the test accuracy and reduces the training loss when the batch size is small.\\n\\n7. As stated in the text, in Figures 1 and 2 we use ghost batch normalization (Hoffer et al.), whereby the batch statistics are estimated over 64 examples. Consequently we cannot reduce the batch size below 64. However in Figures 3 and 4 we estimate the batch statistics over the full batch size, and we are able to reduce the batch size to 1.\\n\\n[1] Dynamical Isometry and a Mean Field Theory of CNNs, ICML 2018\"}", "{\"title\": \"Thank you very much for your response\", \"comment\": \"Thank you very much for your response. I look forward to discussing these ideas further.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The name \\\"ZeroInit\\\" is very confusing, because that is how FixUp was called initially https://openreview.net/forum?id=H1gsz30cKX , perhaps the authors should consider a different name. 
I will call it \\\"NewZeroInit\\\" in my review to avoid confusion.\\n\\nThe paper focuses on training image classification networks without batch normalization. The authors claim that the effectiveness of batch normalization, and of methods which attempt to eliminate it, should be tested over a wide range of learning rates. In experiments performed on CIFAR, they find that batch normalization is able to achieve high accuracy even with very high learning rates, in line with Goyal et al. 2017. Based on this, they propose a simplification of FixUp for image classification, in which they remove the need for progressive scaling of the initialization, and propose to remove weight decay regularization while adding dropout on the last layer. This \\\"NewZeroInit\\\" is tested on ImageNet and compares favorably to batch normalization and FixUp.\\n\\nThe closest studies are FixUp and Goyal et al. 2017, with the difference that FixUp studies both image classification ResNets and seq2seq approaches in the absence of batch normalization, and Goyal et al. show a wide range of large-scale experiments on full-scale ImageNet, whereas \\\"NewZeroInit\\\" studies the small-scale CIFAR dataset. It is thus unclear if \\\"NewZeroInit\\\" transfers to seq2seq.\\nThere is also \\\"Bag of Tricks for Image Classification with Convolutional Neural Networks\\\" by He et al. 2018 (missing citation), which evaluates a similar set of tricks on ImageNet ResNet-50 with batch normalization. In particular, they show that removing weight decay from the BN bias and setting the scaling gamma to 0 initially significantly improves the results.\\n\\nOn page 4 the authors say \\\"Given access to sufficient hardware, this will enable practitioners to dramatically reduce wallclock time of training (Goyal et al.)\\\". It is not clear what they mean, since Goyal et al. already enabled the reduction by increasing the learning rate and minibatch size on ImageNet, whereas the results the authors show are on the small CIFAR dataset.\\n\\nOn page 5 the authors mention that they introduce a bias to each convolution and classification layer, which is surprising because this is a standard way of composing a convolutional network.\\n\\nOverall, the most significant contributions of the paper are:\\n - a study of minibatch size on CIFAR\\n - removing weight decay from FixUp on ImageNet\\n\\nAlso, I am interested in the following results:\\n - a clear comparison of FixUp with \\\"NewZeroInit\\\" for image classification\\n - ImageNet ResNet-50 results with dropout regularization in the final layer\\n - ImageNet ResNet-50 results with FixUp, dropout regularization and no weight decay.\\n - (optionally) seq2seq with NewZeroInit instead of FixUp.\\n\\nWithout these results it is hard to judge the novelty and contributions of the paper, so I propose rejection.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper conducts extensive experiments to verify two practical benefits of batch normalization: i) it increases the final test accuracy and the largest stable learning rate; ii) it enables efficient training with larger batches and a larger learning rate. In addition, the authors propose a new initialization scheme, \u201cZeroInit\u201d, to train a deep ResNet to improve the test accuracy. 
My detailed comments are as follows.\\n\\nPositive points:\\n1. The experiments are sufficient. In this paper, the authors conduct extensive experiments to explore the benefits of batch normalization, and verify the effectiveness of the proposed \u201cZeroInit\u201d.\\n\\n2. The method is effective in some cases. Specifically, the proposed \u201cZeroInit\u201d outperforms batch normalization when the batch size is small, and it is competitive with batch normalization when the batch size is not too large.\\n\\nNegative points:\\n1. The importance and novelty of the empirical study should be emphasized. The practical benefits of batch normalization can also be found in other papers. For the first benefit, most studies (Bjorck et al. 2018) have found that batch normalization is able to improve the test accuracy. For the second benefit, batch normalization requires a large batch size and a large learning rate (Santurkar et al., 2018). Therefore, what is the difference between this paper and others? More critically, it is necessary to explain why batch normalization has these benefits. It would be better to provide empirical or theoretical justifications to support these claims.\\n\\n2. The motivation of the proposed \u201cZeroInit\u201d is not clear. (Balduzzi et al., 2017) states that \u201cthe correlations can be preserved by initializing deep networks close to linear functions\u201d. It is not clear how \u201cZeroInit\u201d preserves the correlations.\\n\\n3. Why initialize the scalar multiplier and biases to zero? What are the benefits of the zero initialization? Actually, the scalar multiplier and biases can be randomly initialized. When they are randomly initialized, what is the performance of the initialization? This is an important baseline to justify the effectiveness of the proposed initialization method.\\n\\n4. The technical details of \u201cZeroInit\u201d are not clear. It would be better to express the proposed initialization \u201cZeroInit\u201d in a mathematical formulation.\\n\\n5. The proposed initialization \u201cZeroInit\u201d is designed for deep ResNets. How can it be extended to other deep neural networks?\\n\\n6. This paper states that \u201cthe empirical success of batch normalization \u2026improves the conditioning of the loss landscape. However, our results conclusively demonstrate that this is not the case\u201d. Does this mean that batch normalization does not improve the conditioning of the loss landscape? However, the empirical results cannot justify this statement or explain the success of batch normalization.\\n\\n7. Some results of the figures are missing. In Figure 1, the experimental results of w/o batch norm with varying batch sizes (2^0 ~ 2^5) are missing. Similarly, Figure 2 also has missing results. Please provide more discussions about these missing results.\"}", "{\"title\": \"We initialize the additional scalar bias at zero\", \"comment\": \"Hi Antoine,\\n\\nOur apologies for our slow reply, and thank you for your interest in our work! Your recent paper is very relevant and we will add a citation to it in future versions.\\n\\nWe initialize the additional scalar biases in ResNet-V1 networks at zero. We have not studied in detail how these biases ease training. It is possible that these biases rapidly acquire large positive values, which would effectively linearize the final ReLU, thus easing signal propagation from the input to the output. 
We are not sure we entirely understand how to interpret your random walk intuition, but we will contact you to discuss further after our submission is de-anonymised.\"}", "{\"comment\": \"Hi,\\n\\nI enjoyed reading your paper. That's an interesting work.\\n\\nA closely related paper from ICML 2019 studied the influence of batch normalization and skip connections on the inductive bias of deep nets [1, 2]\\n\\nI wanted to comment as well on the introduction of the scalar bias after the merging of layers for resnets v1. When batch normalization is used, my view is that\\u00a0at initialization each channel after the merging follows a random walk iteratively thresholded at 0. At high depth, this thresholded random walk becomes positive with high probability and the ReLU becomes the identity, thus easing training. This does not happen with ZeroInit without scalar biases. Is it possible to have your opinion on this view ? How do you initialize the scalar biases ?\\n\\n[1] Characterizing Well-Behaved vs. Pathological Deep Neural Networks. ICML 2019.\\n[2] It\\u2019s Necessary to Combine Batch Norm and Skip Connections. Towards Data Science, 2019.\", \"title\": \"A closely related work and a comment\"}" ] }
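The responses in this record repeatedly state that the key component of ZeroInit is a single trainable scalar per residual branch, initialized to zero, so that every block computes the identity at initialization ("this can be implemented in a single line of code"). The PyTorch module below is a minimal sketch of that idea reconstructed from the discussion, not the authors' released code; the exact branch layout (ReLU-conv-ReLU-conv) is an assumption.

```python
# Sketch of a ZeroInit-style residual block: the branch output is scaled by a
# trainable scalar initialized to zero, so the block is the identity at init.
import torch
import torch.nn as nn

class ZeroInitResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Illustrative branch; the paper's Wide-ResNet branch may differ.
        self.branch = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=True),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=True),
        )
        # The "single line of code": a scalar multiplier initialized to zero.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # At initialization alpha == 0, so signal flows only through the skip.
        return x + self.alpha * self.branch(x)

block = ZeroInitResidualBlock(16)
x = torch.randn(2, 16, 8, 8)
assert torch.allclose(block(x), x)  # identity at initialization
```

Because each block is initially linear (trivially, the identity), a stack of such blocks starts out close to a linear function, which is the gradient-correlation argument the authors attribute to Balduzzi et al.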
BJgNJgSFPS
Building Deep Equivariant Capsule Networks
[ "Sai Raam Venkataraman", "S. Balasubramanian", "R. Raghunatha Sarma" ]
Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees. We present a variation of capsule networks that aims to remedy this. We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient. Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance. Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer. This is done using a trainable, equivariant function defined over a grid of group-transformations. Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function. As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase. We also introduce an equivariant routing mechanism based on degree-centrality. We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations. We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.
[ "Capsule networks", "equivariance" ]
Accept (Talk)
https://openreview.net/pdf?id=BJgNJgSFPS
https://openreview.net/forum?id=BJgNJgSFPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "pPaN6Fgsu", "H1lVyiP3sH", "ByxKoKw3jS", "HJe6fI0tsr", "Skgl6ZBPjB", "HJxncZrvoH", "HJxO4WBwiH", "HygxpbmHjr", "Bygaqb7BjH", "HkxsV-7rjH", "rylTAlQHsB", "HJx4P6rAFS", "HyeH-bq9FB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1576798739418, 1573841627812, 1573841312971, 1573672468573, 1573503415955, 1573503380169, 1573503279932, 1573364152136, 1573364116645, 1573364018886, 1573363925328, 1571867996396, 1571623165261 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/Authors" ], [ "ICLR.cc/2020/Conference/Paper2058/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2058/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper combine recent ideas from capsule networks and group-equivariant neural networks to form equivariant capsules, which is a great idea. The exposition is clear and the experiments provide a very interesting analysis and results. I believe this work will be very well received by the ICLR community.\", \"title\": \"Paper Decision\"}", "{\"title\": \"newest affnist results\", \"comment\": \"The latest results on affnist have been added to the response below and the paper in appendix C, Table 8.\"}", "{\"title\": \"Reply to additional comments\", \"comment\": \"We thank you for upgrading the score of our paper.\\n\\nWe have added a discussion based on your comments to appendix D and E, respectively. Briefly, it summarised as follows:\\n\\nWe provided examples of GetWeights and Agreement so as to clarify their role. Please note that a formalisation of these two concepts to cover a general case of consensus-based importance is not done. Such a formalisation does not currently exist, to the best of our knowledge. We hope the role of these functions is clarified by the examples we provide.\\n\\nWe discussed the case of Z^{n} and SO(3), as examples of discrete and continuous groups which can be used with our algorithm. Our algorithm preserves equivariance conditioned on the use of appropriate convolutions. Thus, our theoretical results hold. However, practical implementations of convolutions on continuous groups involve sampling that leads to loss of exact equivariance. Thus, while our routing algorithm preserves equivariance, sampling of continuous functions for implementation of convolutions results in a loss of this property.\"}", "{\"title\": \"Summary of responses to reviewers\", \"comment\": \"We thank the reviewers for their insightful comments. We have revised the paper and responded to their queries. We summarise the major changes to our paper below.\\n\\nSummary of major changes to our paper\\n\\n1. We have added missing definitions for group, group representation, and Pool in the appendix A as required by reviewer1. \\n2. 
We added descriptions of the GetWeights and Agreement functions in section 2, and made the reference to 'routing among capsules' clearer in section 2 as required by reviewer 1.\\n3. We specified the norm used in the paper as required by reviewer 1.\\n4. We added results of SOVNET on CIFAR100 to the appendix C as required by reviewer 1.\\n5. We added the paper [1] to our references and related work in response to reviewer 1's comment.\\n6. We added results of experiments to verify the isomorphism of the capsule graph-decomposition to the appendix C as required by reviewer 2.\\n7. We added results of experiments on AFFNIST to the appendix C as required by reviewer 2.\\n8. We added the results of the transformation-robustness experiments for the group-convolutional networks to the tables 2 to 4 as required by reviewer 2.\\n9. We added the results of a simple capsnet model with shared parameters on MNIST and FashionMNIST to the appendix C in response to reviewer 2's comment.\\n\\nWe also fixed typos in our paper.\\n\\nThe code for the experiments has been uploaded to our anonymous repository.\\n\\nReferences\\n[1] T.S. Cohen, M. Geiger, M. Weiler, \\\"A General Theory of Equivariant CNNs on Homogeneous Spaces.\\\" NeurIPS, 2019.\"}", "{\"title\": \"Continuation of reply to Reviewer #1\", \"comment\": \"C: - In the paper, the results are given for a general class of groups. However, it is not clear how these results generalize even for some popularly employed groups, such as Z^n, S_n, SO(n), SE(n) etc., with different symmetry properties, base space, and field type.\\n\\n- Please check the following paper for a detailed discussion on group equivariant CNNs with different group structures, and elaborate the theoretical results for particular groups (e.g. at least for p4m used in experiments):\\n\\nT.S. Cohen, M. Geiger, M. Weiler, A General Theory of Equivariant CNNs on Homogeneous Spaces, NeurIPS 2019\\n\\nR: Our theoretical results (Theorem 2.1 and Theorem 2.2) hold for all general groups, and for the particular group representation $L_{g}$ defined in the main paper. Thus, they are true for groups such as Z^n, S_n, SO(n), SE(n) etc. In other words, there is no specific group-dependence of our results - as long as we are able to define a group-convolution.\\n\\nThe scope of our paper and that of Cohen et al. [6] are very different. Their paper aims to build a general theory of equivariant CNNs, and they show that convolutions with equivariant kernels are the most general class of equivariant maps between feature spaces. It is to be noted that they do not report any empirical results at all.\\n\\nOur paper, in contrast, describes a means of integrating equivariant convolutions with capsule networks, thus lending equivariance guarantees to capsule networks - something that is, in general, lacking in the field of capsule networks. To supplement the theory in our work, we also perform several experiments (more than 75 experiments with over 300 values tabulated).\\n\\nOur model fits into the framework of Cohen et al. [6]; however, a description or a detailed discussion of their paper in our work is not in line with our goals, especially given the limitations in space. 
We have, however, made a mention of this paper in section 4 - where related work is discussed - and have included it in our references, as it is an important paper in the literature pertaining to equivariant convolutions.\\n\\nWe quote from [7] - the arXiv version of [6] - the total space, the base space, the stabiliser, and the category of representation for p4 and p4m convolutions:\\n\\nTotal space    Stabiliser    Base space    Category of representation\\np4             C4            Z^{2}         Regular\\np4m            D4            Z^{2}         Regular\\n\\nwhere C4 is the cyclic group of order 4 (here it corresponds to the 4 multiples of 90 degree rotations), and\\nD4 is the dihedral group of order 8 (here it corresponds to all possible compositions of two mirror reflections and the 4 multiples of 90 degree rotations).\\n\\n..........................................................................................................\\n\\nReferences\\n1. Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton. \\\"Dynamic routing between capsules.\\\" Advances in Neural Information Processing Systems, 2017.\\n2. Hinton, Geoffrey E., Sara Sabour, and Nicholas Frosst. \\\"Matrix capsules with EM routing.\\\" ICLR, 2018.\\n3. Bahadori, Mohammad Taha. \\\"Spectral capsule networks.\\\" ICLR, 2018.\\n4. Wang, Dilin, and Qiang Liu. \\\"An optimization view on dynamic routing between capsules.\\\" ICLR, 2018.\\n5. Karim Ahmed, Lorenzo Torresani. \\\"Star-Caps: Capsule Networks with Straight-Through Attentive Routing.\\\" NeurIPS, 2019.\\n6. T.S. Cohen, M. Geiger, M. Weiler, \\\"A General Theory of Equivariant CNNs on Homogeneous Spaces.\\\" NeurIPS, 2019.\\n7. Cohen, Taco, Mario Geiger, and Maurice Weiler. \\\"A general theory of equivariant CNNs on homogeneous spaces.\\\" arXiv preprint arXiv:1811.02017, 2018.\"}", "{\"title\": \"Continuation of reply to Reviewer #1\", \"comment\": \"C: - Have you performed analyses using larger datasets such as CIFAR-100 or ImageNet? It would be great to provide some results for larger datasets to explore their scalability.\\n\\nC: - How do you calculate accuracy of models? Are these numbers calculated for a single run, or for an average of multiple runs? If it is the former, please repeat the results for multiple runs, and provide average accuracy with variance/standard deviation. If it is the latter, please provide the variance/standard deviation as well.\\n\\nR: Due to the large number of experiments - over 300 values tabulated (Table 2 to Table 5, section 3) - we found it infeasible to perform several runs for the baselines as well as our architecture. However, we did train and test our SOVNET architectures multiple times, and found that the results are consistent, though the numbers are not recorded.\"}", "{\"title\": \"Reply to Reviewer #1\", \"comment\": \"Thank you for sharing your valuable feedback. Please find our responses to your comments, below. (C = Comment; R = Our Response). We have modified the paper to reflect your review, and are very interested in any further feedback from you.\\n...........................................................................................................\\n\\nC: - Please define accuracy given in tables more precisely, use dot \\\".\\\" at the end of sentences in captions.\\n\\nR: We define accuracy as the number of correctly classified test-instances divided by the total number of test-instances.
The accuracies given in the transformation-robustness experiments (Tables 2 to 4, section 3) are obtained, for each dataset and model, by first training a model on a transformed version of the train dataset, and then reporting this accuracy for several transformed versions of the test dataset.\\n\\nFurther, we have modified the captions as pointed out.\\n\\n1. Formal definition of a group\\nA tuple (G, .), where G is a non-empty set and . defines a binary operation on G, is said to form a group if the following properties are satisfied:\\n\\na. Closure: For all g1, g2 in G, g1.g2 belongs to G.\\nb. Associativity: For all g1, g2, g3 in G, (g1.g2).g3 = g1.(g2.g3).\\nc. Existence of the identity element: There exists e in G, such that for all g in G, e.g = g.e = g.\\nd. Existence of an inverse: For each g in G, there exists g^{-1} in G, such that g.g^{-1} = g^{-1}.g = e.\\n\\n2. Formal definition of a group action and group representation\\nGiven a group (G, .) and a vector space V, a group action is a function f from G x V to V satisfying the following properties:\\n\\na. For all a in V, f(e, a) = a.\\nb. For all g, h in G and for all a in V, f(h, f(g, a)) = f(h.g, a).\\n\\nA group representation is a group action by invertible linear maps. More formally, a group representation of a group (G, .), with respect to a vector space V, is a homomorphism from G to GL(V) - the set of linear, invertible maps from V to V.\\n\\n3. Definition of Pool(g)\\nConsider a one-layer GCNN-convolutional prediction network $\\Psi_{j}^{l+1}$ for a SOVNET layer $l+1$, and for the $d^{l+1}$-dimensional $j^{th}$ capsule-type. Intuitively, $Pool_{j}^{l+1}(g)$ is defined by the extent of the support of the g-transformed filter. More formally, $Pool_{j}^{l+1}(g) = \\{h \\in G: \\Psi_{j}^{l+1}(g^{-1}\\circ h) \\neq 0\\}$.\\nFor a general $L$-layer GCNN prediction-network, $Pool_{j}^{l+1}(g)$ is defined by recursively applying the above definition through all the layers of the prediction network.\\n\\n4. Description of GetWeights and Agreement\\nThe weighted-sum family of routing algorithms described in Algorithm 1 builds deeper capsules using a weighted sum of predictions made for them by shallower capsules. To ensure that the predictions are combined in a meaningful manner, different methods can be used to obtain the weights. The role of GetWeights is to represent any such mechanism.\\n\\nThe activation of a capsule, representative of the probability of existence of the object it represents, is determined by the extent of the consensus among its predictions. This is based on the routing-by-agreement principle of capsule networks. The Agreement function represents any means of evaluating such consensus.\\n............................................................................................................\"}", "{\"title\": \"Continuation of reply to Reviewer #2\", \"comment\": \"References\\n1. Sabour, Sara, Nicholas Frosst, and Geoffrey E. Hinton. \\\"Dynamic routing between capsules.\\\" Advances in Neural Information Processing Systems. 2017.\\n2. Hinton, Geoffrey E., Alex Krizhevsky, and Sida D. Wang. \\\"Transforming auto-encoders.\\\" International Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg, 2011.\\n3. Hinton, Geoffrey E., Sara Sabour, and Nicholas Frosst. \\\"Matrix capsules with EM routing.\\\" ICLR, 2018.\\n4. Lenssen, Jan Eric, Matthias Fey, and Pascal Libuschewski. \\\"Group equivariant capsule networks.\\\" Advances in Neural Information Processing Systems. 
2018.\\n5. Jeong, Taewon, Youngmin Lee, and Heeyoung Kim. \\\"Ladder Capsule Network.\\\" International Conference on Machine Learning. 2019.\\n6. Choi, Jaewoong, et al. \\\"Attention routing between capsules.\\\" Proceedings of the IEEE International Conference on Computer Vision Workshops. 2019.\\n7. Tai, Kai Sheng, Peter Bailis, and Gregory Valiant. \\\"Equivariant Transformer Networks.\\\" arXiv preprint arXiv:1901.11399 (2019).\"}", "{\"title\": \"Continuation of reply to Reviewer #2\", \"comment\": \"C: One goal for CapsuleNetworks vs GCNNs is the hope for handling different transformations and not only rotations that one can grid with group convolutions. But, the experiments only report on rotation, translation as a transformation. Reporting results by training on MNIST, testing on AFFNIST could shed light on this aspect of SOVNETs.\", \"r\": \"THe complete set of results on MNIST and FashionMNIST for the transformation-robustness experiments as in the main text have been given below. The results for the experiments on CIFAR10 will be given shortly. All the values are accuracies in percentage.\\n\\n Experiments on MNIST\\n Trained on (0,0) Trained on (2,30)\\n (0,0) (2,30) (2,60) (2,90) (2,180) (0,0) (2,30) (2,60) (2,90) (2,180) \\nGCNN 99.61, 93.96, 75.53, 58.91, 46.07 99.67, 99.46, 97.11, 84.5, 63.74\\nSOVNET 99.68, 96.15, 80.53, 64.55, 51.02 99.77, 99.70, 98.86, 90.63, 69.26\\n\\n\\n Trained on (2,60) Trained on (2,90)\\n (0,0) (2,30) (2,60) (2,90) (2,180) (0,0) (2,30) (2,60) (2,90) (2,180)\\nGCNN 99.52, 99.38, 99.37, 97.02, 74.98 89.34, 89.16, 89.13, 88.86, 75.53\\nSOVNET 99.70, 99.65, 99.63 98.56 79.59 99.68, 99.60, 99.59, 99.5, 87.76\\n\\n Trained on (2,180)\\n (0,0) (2,30) (2,60) (2,90) (2,180)\\n GCNN 87.8, 87.51, 87.47, 87.41, 87.45\\n SOVNET 98.34, 98.10, 98.11, 98.08, 98.06\\n\\n Experiments on FashionMNIST\\n Trained on (0,0) Trained on (2,30)\\n (0,0) (2,30) (2,60) (2,90) (2,180) (0,0) (2,30) (2,60) (2,90) (2,180) \\nGCNN 84.63, 56.23, 37.31, 0.2862, 21.58 92.25, 90.95, 72.17, 51.93, 37.12\\nSOVNET 94.72, 61.58, 41.01, 34.07, 27.63 94.99, 94.36, 77.19, 58.59, 43.84\\n\\n Trained on (2,60) Trained on (2,90)\\n (0,0) (2,30) (2,60) (2,90) (2,180) (0,0) (2,30) (2,60) (2,90) (2,180)\\nGCNN 90.78, 89.82, 89.67, 76.69, 49.97 90.31, 89.46, 89.42, 89.22, 64.44\\nSOVNET 94.49, 94.08, 94.20, 90.23, 73.48 94.41, 94.03, 93.93, 93.98, 91.42\\n\\n Trained on (2,180)\\n (0,0) (2,30) (2,60) (2,90) (2,180)\\nGCNN 89.7, 88.65, 88.61, 88.62, 88.6\\nSOVNET 94.11, 93.77, 93.56, 93.57, 93.60\\n\\n\\nAs can be seen in the tables, the SOVNET architecture performs better than the GCNN architecture in all the cases. We will add these results in the main text of the paper.\\n\\nWhile we would very much like to include the other aforementioned results in the main text of the paper, due to space constraints, we will add them in the appendix.\\n............................................................................................................\", \"c\": \"In the appendix there is a comparison with GCNNs on fashion MNIST which shows they have better performance than GCNNs. 
I would advise reporting GCNNs for all the experiments in the main paper.\", \"edit\": \"We have updated the accuracy on AFFNIST based on the results of our model. The code for all of these experiments will be made available in the associated GitHub repository https://github.com/AnonymousCapsuleSOVNET/SOVNET within a day.\"}", "{\"title\": \"Continuation of reply to reviewer #2\", \"comment\": \"C: The discussion on the ideal graph on page 5 is interesting. But the points made are not used later on. I expected the results to have an analysis, or at least a showcase that the resultant graphs indeed stay isomorphic if you transform the input.\\n\\nR: We have performed two experiments to verify that the capsule decomposition-graphs of the transformed and untransformed images are isomorphic.\\n\\nFor the first of these, we trained a P4-convolution-based SOVNET architecture on untransformed images of MNIST. We then considered four variations of the MNIST test-dataset - untransformed, and three versions rotated exactly by multiples of 90 degrees: 90, 180, and 270. Our experiment verifies that the mapping defined in the proof of Theorem 2 (given in the appendix, page 13, Theorem A.2) is indeed an isomorphism.\\n\\nTo this end, we considered the capsule-activations as well as the degree-scores, obtained across all the capsule-layers, for each image of all the variations of the test split. We then mapped the activations and the degree-scores for the untransformed images by the aforesaid mapping for each of the transformations. This corresponds to 'rotating' the activations and degree-scores by each transformation. We then computed the squared error of these with each of the activations and degree-scores obtained from the correspondingly transformed image, respectively. A successful verification would result in zero error (up to machine precision).\\n\\nThe results below show that this happens.\\n\\nRotation    Mean-squared error for capsule-activations    Mean-squared error for degree-scores\\n90          6.1900e-15                                    3.3087e-15\\n180         6.2821e-15                                    3.3606e-15\\n270         6.1911e-15                                    3.3138e-15\\n\\nThe second of our experiments is an empirical verification that the test-accuracies remain unchanged under transformations for which SOVNET exhibits equivariance. We use the same trained architecture as above, and verify that the accuracy remains unchanged under exact transformations of the images. We present the results below.\\n\\nRotation    Accuracy\\n0           99.52%\\n90          99.52%\\n180         99.52%\\n270         99.52%\\n\\nWe repeated the same experiments for FashionMNIST - the results are presented below. Results for CIFAR-10 will be updated shortly. We emphasize that the accuracies reported for these experiments are the result of simple architectures, whose primary aim is to verify Theorem 2, and not any limitation of the SOVNET model.\\n\\nRotation    Mean-squared error for capsule-activations    Mean-squared error for degree-scores\\n90          2.5678e-13                                    1.9576e-13\\n180         2.6306e-13                                    1.9981e-13\\n270         2.5869e-13                                    1.9662e-13\\n\\nRotation    Accuracy\\n0           92.23%\\n90          92.23%\\n180         92.23%\\n270         92.23%\\n\\nWe note that the architecture used for these experiments does not use residual blocks during the initial convolution stage and in the prediction networks. The main reason for this is that strided convolution layers (used in residual blocks) cause a loss in provable equivariance. Thus, we use only unstrided, simple convolutions for these experiments. 
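The verification procedure just described reduces to a short loop: rotate the input by an exact multiple of 90 degrees, transform the reference outputs by the matching mapping, and check that the squared error is at machine precision. The sketch below is an illustrative rendering of that check, not the authors' code; `model` and the output transform `rotate_features` are placeholders, and for a p4-equivariant SOVNET the mapping would also permute the rotation channels.

```python
# Sketch: empirical equivariance check under exact 90-degree rotations.
import torch

def check_equivariance(model, x, rotate_features, k=1):
    x_rot = torch.rot90(x, k, dims=(2, 3))    # exact 90*k degree rotation
    out_ref = rotate_features(model(x), k)    # transform reference outputs
    out_rot = model(x_rot)                    # outputs for the rotated input
    return torch.mean((out_ref - out_rot) ** 2).item()

# Toy example with a trivially equivariant model (the identity), whose
# features transform by the same spatial rotation as the input:
mse = check_equivariance(
    model=lambda t: t,
    x=torch.randn(1, 3, 32, 32),
    rotate_features=lambda t, k: torch.rot90(t, k, dims=(2, 3)),
)
print(f"mean-squared error: {mse:.2e}")  # ~0, up to machine precision
```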
\\n\\nHowever, these architectural differences aside, our architectures are still within the framework of the SOVNET model. They use P4 group-equivariant convolutional prediction-mechanisms, and degree-routing. The use of strided residual-blocks in the previous experiments was to have a mix of equivariant networks for transformation-robustness and residual-connections for better performance.\"}", "{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Reply to Reviewer 2\\n\\nThank you for sharing your valuable feedback, and acknowledging our contributions. Please find our responses to your comments, below. (C = Comment; R = Our Response). We will modify the paper to reflect your review, and are very interested in any further feedback from you.\", \"edit\": \"We have modified the paper, and uploaded the codes. The results of the GCNN experiment for CIFAR10 will be added in a couple of days.\\n.......................................................................................................................................................................................................\", \"c\": \"One assumption in CapsNets is that each part belongs to one whole. Therefore, the normalization in Alg.2 usually is division by degree^k_i. The proposed normalization formula for c_ij seems to encourage that each upper capsule only receives one part. Is this a typo or is there a justification for this?\", \"r\": \"The normalisation scheme used in our degree-routing algorithm is intentional, and not a typo. Our intuition and explanation for this follows.\\n\\nEach routing algorithm, at least in the weighted-sum family, defines a means of evaluating the relative strengths of connections between lower and upper-capsules. These are given quantitatively by the routing weights. We emphasise that the means of normalisation of these weights must be seen in the context of the method used to obtain them.\\n\\nIn methods that normalise among upper-capsules, such as \\\"dynamic routing\\\" by Sabour at. al [1], the un-normalised weights denote the similarity between a prediction for an upper-capsule, and an intermediate vector-value of that capsule. Thus, in such methods, normalisation of weights among upper-capsules, given a fixed lower-capsule, models the relative importance amongst upper-capsules for that lower-capsule. The upper-capsule with the largest similarity to the prediction made for it by the fixed lower-capsule gets the maximum normalised weight. Thus, in scenarios where routing-weights model the \\\"attention\\\" that lower capsules give to upper-capsules, normalisation among the latter is meaningful. \\n\\nThis is in contrast to our degree-routing procedure. We aim to capture and use consensus among predictions for a fixed upper-capsule so as to build agreement-based, rather than attention-based, upper-capsules. Thus, the main aim is to give larger weights to predictions (for a fixed upper-capsule) that exhibit greater consensus with respect to their peers. Thus, it is entirely possible that two predictions are close in their overall consensus behaviour, and would have similar weights, causing multiple parts to route to a single whole. One means of assigning such weights, is to consider the degree scores for each prediction (treating the predictions as being vertices of a similarity-weighted, complete graph). 
By using these scores in a weighted summation, we aim to build a deeper capsule keeping in mind the principle of routing-by-agreement as espoused in the paper \\\"Transforming autoencoders\\\" by Hinton et al. [2]. The normalisation among lower-capsules is merely a means to ensure that the weights are in the range (0,1). \\n\\nIt is to be noted that normalising across the upper-capsules instead of lower-capsules in our method would lead to comparison of degree-scores of different prediction-graphs, and would not be meaningful.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper combines CapsuleNetworks and GCNNs with a novel formulation. First they modify the CapsNet formulation by replacing the linear transformation between two capsule layers with a group convolution. Second they share the group equivariant convolution filters across all capsules of the lower layer. Third, they change the similarity metric from a lower-upper similarity into a pairwise lower similarity and aggregation which makes it keep the equivariance. Since the cij does not depend on the upper capsule anymore they only perform 1 routing iteration (no modification of the routing factors).\\n\\nOne assumption in CapsNets is that each part belongs to one whole. Therefore, the normalization in Alg.2 usually is division by degree^k_i. The proposed normalization formula for c_ij seems to encourage that each upper capsule only receives one part. Is this a typo or is there a justification for this?\\n\\nThe discussion on ideal graph on page 5 is interesting. But the points made are not used later on. I expected the results to have an analysis, or at least a showcase that the resultant graphs indeed stay isomorphic if you transform the input. \\n\\nOne goal for CapsuleNetworks vs GCNNs is the hope for handling different transformations and not only rotations that one can grid with group convolutions. But, the experiments only report on rotation and translation as transformations. Reporting results by training on MNIST, testing on AFFNIST could shed light on this aspect of SOVNETs.\\n\\nProvided that the last two points will be addressed in the rebuttal, I vote for accepting this paper since they suggest a novel formulation that brings some measure of rotation equivariance guarantee into CapsNets. Also their results suggest that there is no need for a per-capsule filter bank and several refinements to get rotation robustness (it would be interesting to check the performance of a simple capsnet with shared parameters). In the appendix there is a comparison with GCNNs on FashionMNIST which shows they have better performance than GCNNs. I would advise reporting GCNNs for all the experiments in the main paper. \\n\\n\\n------------------------------------------\\nThank you for updating and expanding the paper. The extra experiments, isomorphism analysis and their response regarding the attention vs part-whole question make the paper much stronger.
Therefore, I am increasing my score.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"In this work, a method was proposed to train capsule network by projectively encoding the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer. Thereby, the proposed method aims to improve equivariance of capsule nets with respect to translation (rotation and scaling).\", \"The proposed method is interesting and the initial results are promising. However, there are various major and minor problems with the work:\", \"There are various undefined functions and mathematical notation, such as the following:\", \"Please give formal and precise definitions of groups and group representations for readers who are not familiar with mathematical groups.\", \"What are GetWeights and Agreement used in Algorithm 1?\", \"Please define \\u201crouting among capsules\\u201d more precisely.\", \"How do you calculate Pool() more precisely?\", \"In the paper, the results are given for a general class of groups. However, it is not clear how these results generalize even for some popularly employed groups, such as Z^n, S_n, SO(n), SE(n) etc., with different symmetry properties, base space, and field type.\", \"Please check the following paper for a detailed discussion on group equivariant CNNs with different group structures, and elaborate the theoretical results for particular groups (e.g. at least for p4m used in experiments) :\", \"T.S. Cohen, M. Geiger, M. Weiler, A General Theory of Equivariant CNNs on Homogeneous Spaces, NeurIPS 2019\", \"Please define the norms used in Algorithm 2.\", \"How do you calculate accuracy of models? Are these numbers calculated for a single run, or for an average of multiple runs? If it is the former, please repeat the results for multiple runs, and provide average accuracy with variance/standard deviation. If it is the latter, please provide the variance/standard deviation as well.\", \"Have you performed analyses using larger datasets such as Cifar 100 or Imagenet? It would be great to provide some results for larger datasets to explore their scalability.\", \"Please define accuracy given in tables more precisely, use dot \\\".\\\" at the end of sentences in captions.\", \"There are several typo/grammatical errors, such as the following:\", \"-- Homegenous -> homogeneous\", \"-- for with prediction networks\", \"-- Please proof-read the paper in detail and fix the typo etc.\"], \"after_the_discussions\": \"Most of my questions were addressed and the paper was improved in the discussion period. Therefore, I increase my rating.\\n\\nHowever, some parts of the paper still need to be clarified. For instance;\\n\\n- GetWeights: To ensure that the predictions are combined in a meaningful manner, different methods can be used to obtain the weights. The role of GetWeights is to represent any such mechanism. \\n\\n-> Please define these methods in detail and more precisely, at least in the Supp. mat.\\n\\n- Agreement : The Agreement function represents any means of evaluating such consensus.\\n\\n-> This is also a very general concept, which should be more precisely defined. 
\\n\\n- Our theoretical results (Theorem 2.1 and Theorem 2.2) hold for all general groups, and the particular group representation defined in the main paper. \\n\\n-> Could you please give a concrete discussion on the generalization of these results and the proposed algorithms for discrete and continuous groups? For instance, how do these algorithms and results generalize with Z^n and SO(n)?\"}" ] }
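To make the degree-routing idea discussed above concrete, a minimal PyTorch sketch, treating the N lower-capsule predictions for one upper capsule as vertices of a similarity-weighted complete graph, might read as follows; the cosine similarity and the softmax normalisation are assumptions, and the paper's exact Agreement and normalisation schemes may differ:

import torch
import torch.nn.functional as F

def degree_route(preds):
    # preds: (N, D) tensor of lower-capsule predictions for one upper capsule.
    sim = F.cosine_similarity(preds.unsqueeze(1), preds.unsqueeze(0), dim=-1)
    sim = sim.clone()
    sim.fill_diagonal_(0.0)           # ignore self-similarity
    degree = sim.sum(dim=1)           # degree score of each prediction
    c = torch.softmax(degree, dim=0)  # weights in (0, 1) over lower capsules
    return (c.unsqueeze(1) * preds).sum(dim=0)  # agreement-weighted capsule

Because the weights are normalised over the lower capsules rather than the upper ones, two predictions with similar consensus behaviour receive similar weights, which is exactly the multiple-parts-to-one-whole behaviour the authors describe.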
rylNJlStwB
Learning to Infer User Interface Attributes from Images
[ "Philippe Schlattner", "Pavol Bielik", "Martin Vechev" ]
We present a new approach that helps developers automate the process of user interface implementation. Concretely, given an input image created by a designer (e.g., using a vector graphics editor), we learn to infer its implementation which, when rendered (e.g., on the Android platform), looks visually the same as the input image. To achieve this, we take a black box rendering engine and a set of attributes it supports (e.g., colors, border radius, shadow or text properties), use it to generate a suitable synthetic training dataset, and then train specialized neural models to predict each of the attribute values. To improve pixel-level accuracy, we also use imitation learning to train a neural policy that refines the predicted attribute values by learning to compute the similarity of the original and rendered images in their attribute space, rather than based on the difference of pixel values.
[ "user interface attributes", "images", "input image", "new", "developers", "process", "user interface implementation", "designer", "vector graphics editor", "implementation" ]
Reject
https://openreview.net/pdf?id=rylNJlStwB
https://openreview.net/forum?id=rylNJlStwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "w-BFRBRXkt", "BJgnNBH_sH", "ryetcfr_jS", "BJxQKt8LiH", "Syl3EvQNor", "rJlTAUXEjH", "r1gsYbQVoH", "HyendkXNjr", "rke2S9fVsS", "Bkgi2uRR9B", "SJlsL5NoFB", "HkeXm6OKKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739389, 1573569844499, 1573569168868, 1573443962829, 1573300019871, 1573299925088, 1573298563277, 1573298036116, 1573296708491, 1572952243169, 1571666515257, 1571552538541 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/Authors" ], [ "ICLR.cc/2020/Conference/Paper2057/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2057/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2057/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The majority of reviewers suggest rejection, pointing to concerns about design and novelty. Perhaps the most concerning part to me was the consistent lack of expertise in the applied area. This could be random bad luck draw of reviewers, but more likely the paper is not positioned well in the ICLR literature. This means that either it was submitted to the wrong venue, or that the exposition needs to be improved so that the paper is approachable by a larger part of the ICLR community. Since this is not currently true, I suggest that the authors work on a revision.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Overhead of the refinement loop\", \"comment\": \"Question: My concern has been the cost/benefit ratio: Siamese network is significantly more complicated than PixelSim (or doing nothing) but only brings marginal improvements over best prediction. We may need more evidence to show it's necessity. For example, if somehow the experiments on other UI elements showed strong improvements over the baseline.\", \"answer\": \"We agree that to show that the refinement loop is necessary it would require more evidence. The only claim we can currently support is that the refinement loop does lead to an improvement, as shown by our experiments.\\n\\nHowever, we would like to point out that the Siamese network design as well as our choice of imitation learning are made such that they incur only small overheard, both for training and the required infrastructure. This is because: \\n\\n(i) the Siamese Network reuses the already trained attribute prediction networks by combining their learned latent features (from the second to last layer) and adds learnable transformation on top of them. As discussed in our evaluation, we initialize the Siamese networks with the pretrained weights of the best attribute prediction network.\\n\\n(ii) our approach is based on generating synthetic datasets, therefore the infrastructure required to render attributes is reused.\\n\\n(iii) the choice of imitation learning means that training a policy is phrased as a sequence of supervised learning tasks. This again reuses the infrastructure and training methods used for training the attribute prediction network. 
This is in contrast to using other reinforcement learning methods (e.g., REINFORCE) which are more difficult to train and require specialized training algorithms and infrastructure for scalable training.\\n\\nThe main source of overhead comes from selecting which attribute to refine (and to which value) and determining how many iterations to perform (i.e., when to stop).\"}", "{\"title\": \"Paper Revision\", \"comment\": \"Dear reviewers,\\n\\nWe have updated our paper based on your comments and questions. The main changes include: \\n\\n1) [Abstract/Introduction]: We clarify that the main scope of our work is to explore a new domain of learning to infer user interface attributes, the challenges it contains and experimentally showing how they can be addressed for a non-trivial set of attributes. We explicitly say both in the abstract and introduction that our approach is evaluated on the Android button component. The motivation behind this choice was that: (i) it is the most common component used by existing applications, and (ii) provides high variety in the attributes (e.g., both categorical and continuous, colors, text attributes and visual attributes such as border and shadows). \\n\\n2) [Abstract] Remove the mention of vector images to avoid confusion, as the input to our approach is a rasterized image\\n\\n3) [Section 4.2] Clarify the usage of [-c, c]\\n\\n4) [Evaluation] Clarify that the color clipping is designed for solid color palettes and not for gradient colors\\n\\n5) [Appendix B] Provide details of the stopping criterion used in the refinement loop\\n\\n6) [Appendix B] Discuss the nature of errors and provide per-attribute accuracy breakdown of the refinement loop\\n\\nWe believe that our work is a useful step in developing practical tools that support a wide range of different attributes and components, beyond those considered in our work. We will release our source code, datasets as well as the learning infrastructure to support further research in this domain.\"}", "{\"title\": \"Response to Authors' response.\", \"comment\": \"First I want to thank the authors for the detailed answers. Most of the questions are answered well, I hope authors can make them clear in the paper too.\\n\\nHere are the two questions on which I'm still not 100% convinced.\", \"question_10\": \"I understand that the Button is a widely used, probably the most used, UI element, and choosing it as the object to study makes sense.\\n\\nHowever, the authors claim that this work is about generic UI elements in the title, abstract and the second paragraph in Introduction. I still think we need more cases than Android Button to prove that the method works for generic UI elements, which is a quite diverse set of things.\", \"question_14\": \"Authors argued that the Siamese network is the only similarity function that brings an improvement, which I agree with.\\n\\nMy concern has been the cost/benefit ratio: Siamese network is significantly more complicated than PixelSim (or doing nothing) but only brings marginal improvements over best prediction. We may need more evidence to show its necessity. For example, if somehow the experiments on other UI elements showed strong improvements over the baseline.\", \"last_note_to_the_area_chair\": \"I'm not working in the field of UI pixel-to-code generation. All my comments are made with my experience in generic ML research (mostly NLP and Data Mining on the Web) and real-world mobile apps development.
It could help if at least one of the reviewers has research background on this matter.\"}", "{\"title\": \"Response to Reviewer #3 (part 3)\", \"comment\": \"Q: In the equation to the end of page 5, do we need an extra outer bracket for the denominator?\", \"a\": \"Indeed, color clipping works best when the application uses a fixed color palette. We will clarify this point in our paper.\", \"q\": \"The effect of color clipping selection seems very specific to applications with a fixed color palette. While this is indeed the majority, this prior knowledge needs to be specified clearly by saying it's tailored towards such applications\"}", "{\"title\": \"Response to Reviewer #3 (part 2)\", \"comment\": \"Q: Authors mentioned the REINFORCE algorithm by Williams et al. 1992 is expensive. It could help the reader if a brief explanation of why it's expensive is provided.\", \"a\": \"Overall, adding different combinations helped to improve the network accuracy. A possible intuition behind the multiplication is that it allows the network to better capture the magnitude of the difference between the features.\", \"q\": \"In Section 4.2, second paragraph, Reviewer can understand the necessity of additive and subtractive operations, but why multiplication?\"}", "{\"title\": \"Response to Reviewer #3 (part 1)\", \"comment\": \"We thank the reviewer for the thorough comments.\\n\\nWe would like to clarify that the main scope of our work is to explore a new domain of learning to infer user interface attributes, the challenges it contains and experimentally showing how they can be addressed for a non-trivial set of attributes (including comprehensive evaluation of different design decisions, network architectures and various optimizations used throughout our work). To achieve this we have selected the Android button component as: (i) it is the most common component used by existing applications, and (ii) provides high variety in the attributes (e.g., both categorical and continuous, colors, text attributes and visual attributes such as border and shadows). To our best knowledge we are the first work to explore this domain and we will release our source code, datasets as well as the learning infrastructure to support further research in this domain.\\n\\nPlease find the answers to your questions below.\", \"q\": \"Since the rendering process could be costly, what is the speed of convergence in the attribute value adjustment iterations?\", \"a\": \"The convergence is fast and converges on average after 4-5 iterations (when starting from the predictions computed by the attribute network). This is partially because the starting predictions are already good and the refined values are selected by sampling from the learned distribution of the most likely mis-predicted values (rather than picking them at random).\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the comments and clarifying questions. We provide detailed answers below:\", \"q\": \"My only disappointment is maybe the fact that only the Android Button was considered, and it is not clear how the model would perform with other and more sophisticated Android components.\", \"a\": \"Since each component consists of a set of attributes, our main focus was to design and evaluate our approach on a wide range of attributes.
For this reason we have selected the Android button component as: (i) it is the most common component used by existing applications, and (ii) provides high variety in the attributes (e.g., both categorical and continuous, colors, text attributes and visual attributes such as border and shadows).\\n\\nHaving said that, we do agree that experimenting with other components would make our work stronger and provide experimental support that indeed our technique scales well.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the comments and clarifying questions. We provide detailed answers below:\", \"q\": \"Are the baselines strong enough? None of them seem to be from recent prior work. How about a direct comparison to some of the work listed in the second para on page 2?\", \"a\": \"The reason why we do not provide experimental comparison to the prior work (e.g., second paragraph on page 2) is because such comparison is unfortunately not possible. Even though the high level task is the same, inverting rendering engines to interpret images, the actual datasets and network architectures are specialized for the given domain. For example, it makes little sense to use an architecture specialized to predict camera angle and instead try to predict border width. To our best knowledge, there is no prior work that we can compare to in the same domain as we are the first work to explore solving the task of learning to infer user interface attributes.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an approach to infer the attribute values of an input image representing a user interface. The model first infers the most likely initial attribute values, and iteratively refines them to improve the similarity between the input image and the interface generated from the newly inferred attributes. The model is trained on synthetic datasets generated by a black box rendering engine, and generalizes well to real-world datasets. To address the issues of pixel-based metrics and mean squared error, the authors instead use the probability that two images are equal in the attribute space to define the cost between these two images.\\n\\nAlthough I'm not familiar with the problem addressed by the paper, I found the paper very clear and well written. Overall, the method is sensible and elegant, and could easily be applied to other domains. My only disappointment is maybe the fact that only the Android Button was considered, and it is not clear how the model would perform with other and more sophisticated Android components.\", \"a_few_questions_for_the_authors\": [\"How many steps do you perform in the refinement loop? This is important information, but I couldn't find it in the paper. Typically, I was surprised to see in the first row of Table 2 that the model with a random attribute initialization can reach such a high performance. But I imagine that you need many more iterations to converge if you start from random attributes than from the best prediction initialization?\", \"Also, what is the stopping criterion?
Do you decide to stop when none of the proposed attribute changes improve the pixel-level accuracy?\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an approach for reverse-engineering webpages using Siamese networks and imitation learning. While the idea of using synthetic data (which can be easily procedurally generated) to do this reverse-engineering training is very clever, prior work has exploited it also. Novel elements include the attribute refinement using imitation learning, and the authors show the effect of this step, but the improvement is small. Thus, the limited novelty and not very convincing results make me question the potential impact of this paper.\", \"some_questions\": \"a) The authors mention they cannot use a GAN-style method because all generated images are by definition true/real; how about learning whether a *pair* is real or fake? (where the pair consists of the design specification and the rendered version). \\nb) Are the baselines strong enough? None of them seem to be from recent prior work. How about a direct comparison to some of the work listed in the second para on page 2?\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"Authors proposed an algorithm to predict the attributes of GUI elements from rasterized design images. The problem is separated into two steps. The first step is to predict initial values of the attributes (border width, color, padding etc) from the image where the type of UI element and set of attributes are already known. Authors designed a typical convolutional DNN for each of the attributes. The second step is to learn a policy \\\\pi to iteratively adjust one attribute at a time until the final rendering matches the input image pixel-perfectly.\\n\\nAuthors conducted the experiment with large synthetic data set of Android buttons, and evaluate the performance with a held-out synthetic set as well as a 110 hand crafted real world buttons set sampled from apps in Google Play App Store. Several variations of the same model were compared. The result showed that network structure, padding strategy (this is a bit unexpected), background type and color selection strategy all affect the accuracy significantly. \\n\\nReviewer has concern about the scope and application value of the problem as a research paper. A number of key prior assumptions have to be made to let the algorithm work: the type of the UI element needs to be known; the list of attributes and their value ranges need to be fixed beforehand and each of the attributes demands a DNN; the refinement iteration has the actual rendering in the loop which could be costly on current App development platforms. \\n\\nFeedback questions.\\n\\n1) The abstract mentioned vector image as input but the main body only discussed rasterized images.\\n\\n2) Since the rendering process could be costly, it's useful to discuss the speed of convergence in the attribute value adjustment iterations.\\n\\n3) Reviewer is interested in the nature of the error (7.5% loss) but it's not discussed.\\n\\n4) In related work, authors mentioned the REINFORCE algorithm by Williams et al. 1992 is expensive.
It could help the reader if a brief explanation of why it's expensive is provided.\\n\\n5) In Section 3 Background, authors mentioned that the attributes are not independent of each other, which is a major challenge. Reviewer would like to see some discussion or experiment data on how this affects the process and how the current algorithm addresses it.\\n\\n6) It's a bit of a surprise that the color clipping method has a big impact on accuracy. Some examples could have helped the reviewer understand it.\\n\\n7) In Section 4.2 first paragraph, it seems that the user of the algorithm needs to set the [-c, c] clipping values manually per feature. This sounds like quite some prior knowledge and hand-tuning.\\n\\n8) In Section 4.2, second paragraph, Reviewer can understand the necessity of additive and subtractive operations, but why multiplication?\\n\\n9) In the equation to the end of page 5, do we need an extra outer bracket for the denominator? By the way, the equations should be numbered for easier reference.\\n\\n10) The task of predicting Android button attributes, while practical, seems over-simplified. Reviewer suggests at least experimenting with a set of common UI elements to prove the horizontal performance.\\n\\n11) In Section 5.1, Reviewer respects the experiment results but doesn't understand why solid color background provides the best variety but screenshots don't. May need more analysis and explanation.\\n\\n12) In Table 1, the first line for variant (C) also looks pretty good, or even better than core on the Android app store dataset.\\n\\n13) In Section 5.1, the effect of color clipping selection seems very specific to applications with a fixed color palette. While this is indeed the majority, this prior knowledge needs to be specified clearly by saying it's tailored towards such applications (or use more examples to prove that's not the case).\\n\\n14) In Table 2: Pixel Sim's performance on Best Prediction Initialization seems pretty good, and Reviewer believes this is the more practical scenario. Is a more complicated Siamese Network justified?\"}" ] }
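For concreteness, the render-in-the-loop refinement discussed above (best-prediction initialization, per-attribute proposals, and stopping when no change helps) can be sketched as follows. Here render, similarity and propose are hypothetical stand-ins for the rendering engine, the learned Siamese score and the policy's candidate changes; this is a sketch under those assumptions, not the authors' actual implementation:

def refine(image, attrs, render, similarity, propose, max_iters=10):
    # image: the designer's input; attrs: dict of predicted attribute values.
    best = similarity(image, render(attrs))
    for _ in range(max_iters):
        improved = False
        for name, value in propose(attrs):
            candidate = dict(attrs, **{name: value})
            score = similarity(image, render(candidate))
            if score > best:
                attrs, best, improved = candidate, score, True
                break
        if not improved:  # stopping criterion: no proposed change improves
            break
    return attrs

Since each iteration invokes the renderer, the reported 4-5 iteration convergence matters: the loop cost is dominated by render calls, not by the policy itself.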
B1eXygBFPH
Attacking Graph Convolutional Networks via Rewiring
[ "Yao Ma", "Suhang Wang", "Tyler Derr", "Lingfei Wu", "Jiliang Tang" ]
Graph Neural Networks (GNNs) have boosted the performance of many graph related tasks such as node classification and graph classification. Recent research shows that graph neural networks are vulnerable to adversarial attacks, which deliberately add carefully created, unnoticeable perturbations to the graph structure. Such perturbations are usually created by adding/deleting a few edges, which might be noticeable even when the number of edges modified is small. In this paper, we propose a graph rewiring operation which affects the graph in a less noticeable way compared to adding/deleting edges. We then use reinforcement learning to learn the attack strategy based on the proposed rewiring operation. Experiments on real-world graphs demonstrate the effectiveness of the proposed framework. To understand the proposed framework, we further analyze how its generated perturbations to the graph structure affect the output of the target model.
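Concretely, a single rewiring action of the kind described above, together with an adaptive per-step reward of the form discussed in the author responses below, might be sketched with networkx as follows; the exact reward definition is an assumption (the paper scales the negative reward by graph size, but its precise form may differ):

import networkx as nx

def rewire(G, v1, v2, v3):
    # One rewiring action (v1, v2, v3): remove the existing edge (v1, v2) and
    # connect v1 to v3, a node exactly 2 hops away from v1 (hence not a
    # direct neighbour).
    dists = nx.single_source_shortest_path_length(G, v1, cutoff=2)
    assert G.has_edge(v1, v2) and dists.get(v3) == 2
    G.remove_edge(v1, v2)
    G.add_edge(v1, v3)
    return G

def step_reward(attack_succeeded, G, alpha=1.0):
    # Assumed form of the adaptive negative reward: a per-step penalty scaled
    # by graph size, and a positive reward once the prediction flips.
    return 1.0 if attack_succeeded else -alpha / G.number_of_nodes()

Note that a rewiring action preserves the number of edges, which is why it tends to be less noticeable than free edge additions or deletions.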
[ "Graph Neural Networks", "Rewiring", "Adversarial Attacks" ]
Reject
https://openreview.net/pdf?id=B1eXygBFPH
https://openreview.net/forum?id=B1eXygBFPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "KylCrU_z4y", "S1ebgQN2sB", "r1lNAbNnsB", "S1gxPZ4hsr", "SklXmeN3iB", "BJgLmkVhjH", "S1gdft7NcS", "rylgT8lNqB", "BJlUEvc6KS", "Hkl2-ampKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739358, 1573827304965, 1573827020085, 1573826904459, 1573826587412, 1573826333675, 1572251920213, 1572239032141, 1571821357792, 1571794180370 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2056/Authors" ], [ "ICLR.cc/2020/Conference/Paper2056/Authors" ], [ "ICLR.cc/2020/Conference/Paper2056/Authors" ], [ "ICLR.cc/2020/Conference/Paper2056/Authors" ], [ "ICLR.cc/2020/Conference/Paper2056/Authors" ], [ "ICLR.cc/2020/Conference/Paper2056/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2056/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper2056/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2056/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for attacking graph convolutional networks, where a graph rewiring operation was introduced that affects the graph in a less noticeable way compared to adding/deleting edges. Reinforcement learning is applied to learn the attack strategy based on the proposed rewiring operation. The paper should be improved by acknowledging/comparing with previous work in a more proper way. In particular, I view the major innovation is on the rewiring operation and its analysis. The reinforcement learning formulation is similar to Dai et al (2018). This connection should be made more clear in the technical part. One issue that needs to be discussed on is that if you directly consider the triples as actions, the space will be huge. Do you apply some hierarchical treatment as suggested by Dai et al. (2018)? The review comments should be considered to further improve too.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for the valuable comments and suggestions.\", \"we_address_the_concerns_from_the_reviewer_as_follows\": \"\", \"q1\": \"In figure 3, the authors also show that the proposed method can make less noticeable changes on eigenvalue. But are these changes still noticeable compared to original one? Please also show these information.\", \"a1\": \"In Figure 3, we have compared the changes made by the rewiring attack and the random adding/deleting operation. Here, we directly provide the changes. After the rewiring attack performed by ReWatt, the eigenvalues of the Laplacian matrix change about 2.6% on average, while they change about 7.78% after random adding/deleting attack.\", \"q2\": \"2% data for testing is too few for me. The authors should increase these number. In addition, how many replication of experiments did the author do? The author should give the variance of the results and make significant test if needed.\", \"a2\": \"For the attack experiments, we have to split each dataset into three non-overlapping parts: 1) a classifier-training set to train the classifier to be attacked; 2) an attacker-training set to train the attacker; and 3) an attacker-testing set to test the performance of the trained attacker. 
To test the performance of the attack, we need to obtain a well-trained GCN; as a result, we need a large portion of each dataset (or a classifier-training set) to train the GCN algorithm. In REDDIT-MULTI-12K and REDDIT-MULTI-5K, we use $90\\%$ of the entire dataset to train the classifier. Furthermore, the remaining $10\\%$ of the dataset is used to train and test the attacker, where $80\\%$ of the remaining data is used as the attacker-training set, while $20\\%$ of the remaining data is used as the attacker-testing set. Hence, the ratio between the attacker-training and attacker-testing sets is 4:1, which suggests the attacker-training set and attacker-testing set are well balanced. Although we could use a larger portion for the attacker-testing set by reducing the classifier-training set, it affects the performance of the classifier to be attacked. In the IMDB-MULTI dataset, to have enough graphs to train and test the attacker, we compromise the performance of the classifier by using only $50\\%$ of the entire dataset to train the classifier.\\n\\nTo compare ReWatt with RL-S2V, we run these two methods on $5$ different data splits and report the average performance with variance. Specifically, for each split, we keep the classifier-training set fixed to make sure the classifier being attacked is the same over different runs. We then randomly shuffle the remaining dataset and split it into the attacker-training set and the attacker-testing set. The performance on REDDIT-MULTI-12K is as follows:\\n Split 1; Split 2; Split 3\\nRL-S2V: 0.024 (0.0063); 0.059 (0.0068); 0.0624 (0.0284)\\nReWatt: 0.2306 (0.0149); 0.233 (0.0164); 0.2338 (0.0178)\\nThe $p$-values are all smaller than $0.00001$. Hence, ReWatt significantly outperforms RL-S2V on the REDDIT-MULTI-12K dataset.\", \"q3\": \"What is the prediction accuracy of the target classifier? Did the attacker flip more correct predictions?\", \"a3\": \"We take the REDDIT-MULTI-12K dataset as an example to answer this question. The prediction accuracy of the target classifier on the original (unattacked) testing set is $43.24\\%$; after the attack, the accuracy is reduced to $32.88\\%$. According to this observation, the attacker flips more correct predictions than incorrect predictions.\"}", "{\"title\": \"Response to Official Blind Review #1--Part 2\", \"comment\": \"Thank you for the valuable comments and suggestions. We address the concerns from the reviewer as follows.\", \"q4\": \"The paper shows the change of eigenvalues under one rewiring operation. How does it change after multiple operations? In addition, the smaller change to the eigenvalues is compared with rewiring to more distant nodes or adding an edge between two distant nodes. That is, it is under a *given* $v_{fir}$ and $v_{sec}$. A different attack may select a different $v_{fir}$ and $v_{sec}$ in the first place. So it is still not clear whether rewiring leads to less noticeable changes.\", \"a4\": \"Applying multiple rewiring operations to a graph can be viewed as applying these operations one by one. So, in the worst case, the changes can be accumulated. In some specific cases, the changes made by multiple rewiring operations can be smaller than direct accumulation. For example, the two rewiring operations $(v_1,v_2,v_3)$ and $(v_1, v_3, v_4)$ can be merged into one single rewiring operation $(v_1,v_2,v_4)$. Note that the experiments in Appendix C are not based on a single rewiring operation but potentially multiple rewiring operations. So, we have empirically shown that even with multiple rewiring operations, the change to the eigenvalues is still small.
We have empirically shown in Appendix C that, with the same number of operations, ReWatt made smaller changes to the eigenvalues of the Laplacian matrix than the random adding/deleting operation.\", \"q5\": \"The experiment splits the dataset into three parts, training set, rewiring operation set, and test set. However, for those predicted incorrectly on the rewiring operation set, the success rate should not be counted. Perhaps this is already done?\", \"a5\": \"Each dataset is split into three non-overlapping parts: 1) a classifier-training set to train the classifier to be attacked; 2) the attacker-training set to train the attacker; and 3) the attacker-testing set to test the performance of the trained attacker. So, the attacker learns to perform the rewiring operation properly on the attacker-training set and then attacks the attacker-testing set by performing rewiring operations. The success rate reported in the paper is only based on the attacker-testing set.\"}", "{\"title\": \"Response to Official Blind Review #1--Part 1\", \"comment\": \"Thank you for the valuable comments and suggestions. We address the concerns from the reviewer as follows.\", \"q1\": \"It's quite surprising that ReWatt achieves higher success rate than RL-S2V (first two rows of Table 1). RL-S2V considers a properly larger set of attacks and uses Q-learning (in contrast to actor-critic in ReWatt). So is it the conclusion that actor-critic is better than Q-learning? Perhaps it will be illustrative to experiment with replacing Q-learning in RL-S2V by actor-critic. This can be implemented in the framework of ReWatt: in Eq 5, replace $p_{fir}*p_{thi}$ by $p(add/remove|e_t)$.\", \"a1\": \"We agree that RL-S2V has a larger attack space, which means the optimal solution it can achieve is as good or better than the one our method can find. However, neither method is guaranteed to always find the optimal solution in the given attack space. We list some potential reasons to explain why ReWatt can outperform RL-S2V as follows:\\n1) When performing an adding/deleting edge action in RL-S2V, it chooses two nodes sequentially. Then it decides to add an edge between two nodes if they are not connected, otherwise, the edge between them is removed. Since most graphs are very sparse, the RL-S2V algorithm is, by design, biased toward adding an edge. On the other hand, ReWatt removes an edge and then adds another edge. The adding/deleting edge operations are more balanced. \\n2) The reward design in ReWatt is different from RL-S2V. In RL-S2V, a non-zero reward is only given at the end of an attacking session. Specifically, at the end of an attacking session, a positive reward of $1$ is given if the attack succeeded, otherwise a negative reward $-1$ is given. All the intermediate steps get $0$ reward. In ReWatt, the reward is given after each action. A positive reward is given once an action leads to a successful attack. A negative reward is given for each action that does not directly lead to a successful attack, which encourages the attacker to make as few actions as possible. Furthermore, we also proposed an adaptive negative reward design, which determines the value of the negative reward according to the size of each graph. In fact, the design of this adaptive negative reward has been shown to be very effective and important to the ReWatt framework. As shown in Table 1, ReWatt-n (which is a variant of ReWatt without adaptive negative reward design) performs much worse than ReWatt.
Specifically, if we apply ReWatt-n in the same setting as RL-S2V (with fixed actions), its performance is not as good as RL-S2V in REDDIT-MULTI-12K and REDDIT-MULTI-5K datasets. The performance of ReWatt-n on REDDIT-MULTI-12K is [11.26%; 14.7%; 18.02%] while RL-S2V achieves [9.46%; 18.5%; 21.1%]. On the REDDIT-MULTI-5K, the performance of ReWatt-n is [4.49%; 5.62%; 6.74%] while RL-S2V achieves [4.49%; 16.9%; 18.0%]. Hence, the design of adaptive negative reward could be an important reason why ReWatt can perform better than RL-S2V.\\n\\nAlso, please note that RL-S2V cannot be implemented with actor-critic by simply replacing $p_{fir}*p_{thi}$ with $p(add/remove|e_t)$ in the framework of ReWatt. This is because the action of ReWatt is different from RL-S2V as described in 1). The edge $e_t$ chosen by ReWatt is an existing edge in the graph, therefore we can only delete it from the graph and cannot add it to the graph. Hence, $p(add/remove|e_t)$ cannot be performed in practice.\", \"q2\": \"The attack is specifically designed for graph classification, while the graph convolutional filter is widely used in other problems like node classification and link prediction. Can it be applied to such problems as well?\", \"a2\": \"The ReWatt framework can be applied to attack node-level tasks such as node classification and link prediction by adjusting the design of the rewards. For example, for node classification, we can design the reward based on the overall performance of the targeted classifier. Specifically, if the goal is to decrease the overall performance of a node classification classifier, a positive reward can be given when an action reduces the overall performance (evaluated on a validation set) and a negative reward can be given if an action increases the accuracy.\", \"q3\": \"In addition to RL-S2V, it will be helpful to compare with Nettack (Z\\u00fcgner et al., 2018). It employs an admissible set of perturbations, which can be adapted for the rewiring attack.\", \"a3\": \"Our work focuses on the graph-level attack, while Nettack is designed for targeted node-level attack. It is not straightforward to adapt Nettack for graph-level tasks. Hence, we didn\\u2019t compare our method with Nettack. However, we do agree that some of the constraints used in Nettack can be incorporated into our framework, which can be a promising future step to make the attack even more unnoticeable.\"}", "{\"title\": \"Response to Official Blind Review #4\", \"comment\": \"Thank you for the valuable comments and suggestions. We address the concerns from the reviewer as follows.\", \"q1\": \"The motivation for using rewiring is to make the perturbations unnoticeable. Besides presenting the theoretical results on this property of the rewiring operation, it's better to provide some empirical results (e.g., generated adversarial graphs) to prove that the rewiring operation can make the adversarial graphs unnoticeable in practice.\", \"a1\": \"We have performed empirical investigations of the rewiring operation, which can be found in Appendix C. In summary, the rewiring attack performed by ReWatt makes smaller changes to the attacked graph in terms of connectivity and the Laplacian spectrum.
Furthermore, as requested by Blind Reviewer #3, we have done some experiments to show the rewiring attack performed by ReWatt also makes small changes to the spectrum of the adjacency matrix and the distribution of edge centrality (please see the responses to Q1 and Q2 of Blind Review #3).\", \"q2\": \"In Table 1, why are the results of ReWatt better than RL-S2V? Since there are more constraints (i.e., smaller action space) in ReWatt than RL-S2V, RL-S2V could be easier to fool GCNs. The authors could explain more about the results.\", \"a2\": \"We agree that RL-S2V has a larger action space, which means the optimal solution it can achieve is as good or better than the one our method can find. However, neither method is guaranteed to always find the optimal solution in the given action space. We list some potential reasons to explain why ReWatt can outperform RL-S2V as follows:\\n1) When performing an adding/deleting edge action in RL-S2V, it chooses two nodes sequentially. Then it decides to add an edge between two nodes if they are not connected, otherwise, the edge between them is removed. Since most graphs are very sparse, the RL-S2V algorithm is, by design, biased toward adding an edge. On the other hand, ReWatt removes an edge and then adds another edge. The adding/deleting edge operations are more balanced. \\n2) The reward design in ReWatt is different from RL-S2V. In RL-S2V, a non-zero reward is only given at the end of an attacking session. Specifically, at the end of an attacking session, a positive reward of $1$ is given if the attack succeeded, otherwise a negative reward $-1$ is given. All the intermediate steps get $0$ reward. In ReWatt, the reward is given after each action. A positive reward is given once an action leads to a successful attack. A negative reward is given for each action that does not directly lead to a successful attack, which encourages the attacker to make as few actions as possible. Furthermore, we also proposed an adaptive negative reward design, which determines the value of the negative reward according to the size of each graph. In fact, the design of this adaptive negative reward has been shown to be very effective and important to the ReWatt framework. As shown in Table 1, ReWatt-n (which is a variant of ReWatt without the adaptive negative reward design) performs much worse than ReWatt. Specifically, if we apply ReWatt-n in the same setting as RL-S2V (with fixed actions), its performance is not as good as RL-S2V in REDDIT-MULTI-12K and REDDIT-MULTI-5K datasets. The performance of ReWatt-n on REDDIT-MULTI-12K is [11.26%; 14.7%; 18.02%] while RL-S2V achieves [9.46%; 18.5%; 21.1%]. On the REDDIT-MULTI-5K, the performance of ReWatt-n is [4.49%; 5.62%; 6.74%] while RL-S2V achieves [4.49%; 16.9%; 18.0%]. Hence, the design of our adaptive negative reward could be an important reason why ReWatt can perform better than RL-S2V.\", \"q3\": \"What are the differences between the proposed attack method based on reinforcement learning and the method in RL-S2V? RL-S2V is also based on reinforcement learning. The authors should clearly introduce the novelty of the proposed method as well as the contributions.\", \"a3\": \"A major contribution is that we propose to use rewiring to perform the attack. We also show that the rewiring operation is less noticeable both theoretically and empirically. On the other hand, the architecture of the reinforcement framework of ReWatt is also different from RL-S2V.
We have stated the differences in the response to Q2.\"}", "{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thank you for the valuable comments and suggestions.\\nThanks for letting us know about the existence of another interesting paper in the field of adversarial attacks in the graph domain. We have cited it accordingly in the revision. We address the key concerns mentioned by the reviewer as follows.\", \"q1\": \"There is no discussion on tracking the path capacity of the graph as measured by the largest eigenvalue of the adjacency matrix and the eigengaps between the largest in module eigenvalues of the adjacency matrix. Rewiring often affects the path capacity even if one makes sure the degree distribution is the same and restricts the rewiring to 2-hop neighbors.\", \"a1\": \"We empirically verify that both the largest eigenvalue and the spectral gap of the adjacency matrix will not change too much after the rewiring attack performed by the ReWatt framework. We take the REDDIT-MULTI-12K dataset as a representative dataset to conduct the verification experiments. Specifically, the experiments are conducted on the graphs (from the testing set) that are successfully attacked by ReWatt. Over this set of graphs, the mean of the largest eigenvalue of each initial graph (i.e., before the attack) is around 11.95. We calculate the change of the eigenvalue after the rewiring attack by comparing with the original largest eigenvalue as follows:\\n $ |\\\\lambda_{ori} - \\\\lambda_{att}|$\\nwhere $\\\\lambda_{ori}$ denotes the original largest eigenvalue and $\\\\lambda_{att}$ denotes the largest eigenvalue after the rewiring attack. We then average this change over all the graphs in the set. On average, after the rewiring attack, the largest eigenvalue of each graph changes 0.042, which is quite small given the magnitude of the largest eigenvalue is around 11.95. \\nOn the other hand, the average spectral gap over the set of graphs is 0.1449. After the rewiring attack, the average spectral gap becomes 0.1204. The average change over the set of graphs is 0.049. Hence, the change in the spectral gap is also small.\", \"q2\": \"Rewiring affects edge centrality and so one needs to show that the proposed algorithm doesn't change the distribution over edge centrality.\", \"a2\": \"We conduct verification experiments on the same set of graphs as in the response to Q1. For each graph in this set, we use the two-sample Kolmogorov-Smirnov Test to test whether the edge centrality values before and after attacking are from the same distribution. The null hypothesis of this test is that the two samples are from the same distribution. We are supposed to reject the null hypothesis when the p-value is small. When the p-value is large, we cannot reject the null hypothesis. The average p-value over all the graphs in the set is 0.568. 58% of the graphs have a $p$-value larger than 0.5. 31% of the graphs have a $p$-value smaller than 0.05, which indicates the rejection of the null hypothesis. The remaining 11% of graphs have a $p$-value between 0.05 and 0.5. So, the rewiring attack may affect the edge centrality distribution. However, empirically, for most of the graphs, the edge centrality distribution of the attacked graph is not significantly different from the original one.\", \"q3\": \"In social networks, the highest eigenvalues of the adjacency matrix are very close to each other because of all the triangles.
The paper will be stronger if it includes how the proposed method performs under various random graph models -- e.g., Gnp random graph, preferential attachment, and small-world.\", \"a3\": \"The analysis (on the Laplacian spectrum) in the paper is for general graphs but not limited to social networks. The proposed framework is designed to attack the graph classification task. However, there are no natural and meaningful labels associated with the random graphs and we cannot perform graph classification on random graphs. A possible way is to construct labels while generating these graphs. However, such synthetic labeling could introduce a great bias into the results depending on how the labels are selected. So, we do not apply the proposed framework to random graphs.\\n\\nFinally, thanks for providing the miscellaneous notes; we have updated most of them accordingly in the updated version of the paper. Due to the space limit, we do not include more information in the caption of the figures. They can be found in the text of the paper.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper addresses a real problem. Most attacks on graphs can be easily identified [1]. This paper argues that if one rewires the graph (instead of adding/deleting nodes/edges) such that the top eigenvalues of the Laplacian matrix are only slightly perturbed then the attacker can go undetected.\", \"the_paper_should_address_the_following_issues\": \"1. There is no discussion on tracking the path capacity of the graph as measured by the largest eigenvalue of the adjacency matrix and the eigengaps between the largest in module eigenvalues of the adjacency matrix. Rewiring often affects the path capacity even if one makes sure the degree distribution is the same and restricts the rewiring to 2-hop neighbors.\\n\\n2. Rewiring affects edge centrality and so one needs to show that the proposed algorithm doesn't change the distribution over edge centrality.\\n\\n3. In social networks, the highest eigenvalues of the adjacency matrix are very close to each other because of all the triangles. The paper will be stronger if it includes how the proposed method performs under various random graph models -- e.g., Gnp random graph, preferential attachment, and small-world.\", \"miscellaneous_notes\": [\"The captions for the figures should be more informative.\", \"Table 2 should list more characteristics of the graphs such as number of nodes, number of edges, exponent of the degree distribution, global clustering coefficient, average clustering coefficient, diameter, average path length.\", \"\\\"Zgner &Gnnemann\\\" is misspelled.\", \"\\\"As we can observed from the figures, ...\\\" has a typo in it.\", \"__________________________________________________\", \"[1] B. Miller, M. \\u00c7amurcu, A. Gomez, K. Chan, T. Eliassi-Rad. Improving Robustness to Attacks Against Vertex Classification.
In The 15th International Workshop on Mining and Learning with Graphs (held in conjunction with ACM SIGKDD\\u201919), Anchorage, AK, August 2019.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a new type of adversarial attack setting for graphs, namely graph rewiring operation, which deletes an edge in the graph and adds a new edge between one node of the first edge and one of its 2-hop neighbors. This new attack is proposed to make the perturbations unnoticeable compared with adding or deleting arbitrary edges. To solve this problem, a reinforcement learning based approach is proposed to learn the attack strategy in a black-box manner. Experiments conducted on several datasets prove the effectiveness of the proposed method over an existing method and baseline methods.\\n\\nOverall, this paper proposes a new adversarial setting for graphs to make the modifications unnoticeable. A reinforcement learning method is proposed to generate adversarial examples under the proposed setting. The writing is clear. However, I have several concerns about this paper as follows.\\n\\n1. The proposed graph rewiring operation is a special operation of the general adding and deleting operations (i.e., rewiring is operated as deleting an edge and adding a new edge with some constraints). The motivation of using rewiring is to make the perturbations unnoticeable. Besides presenting the theoretical results on this property of the rewiring operation, it's better to provide some empirical results (e.g., generated adversarial graphs) to prove that the rewiring operation can make the adversarial graphs unnoticeable in practice.\\n\\n2. In Table 1, why are the results of ReWatt better than RL-S2V? Since there are more constraints (i.e., smaller action space) in ReWatt than RL-S2V, RL-S2V could be easier to fool GCNs. The authors could explain more about the results.\\n\\n3. What are the differences between the proposed attack method based on reinforcement learning and the method in RL-S2V? RL-S2V is also based on reinforcement learning. The authors should clearly introduce the novelty of the proposed method as well as the contributions.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes the ReWatt method to attack graph classification models by making unnoticeable perturbations on graphs. Reinforcement learning was leveraged to find a rewiring operation a = (v1; v2; v3) at each step, which is a set of 3 nodes. In the first step, an existing edge (v1, v2) in the original graph is selected and removed. Then another node v3 that is 2-hop away from v1 and not 1-hop away is selected. Finally (v3, v1) is connected as a new edge. Some analysis shows that the rewiring operation tends to make smaller changes to the eigenvalues of the graph's Laplacian matrix compared with simply adding and deleting edges, making it difficult to detect the attacks.\\n\\nPros\\n\\n1. The rewiring operation is more unnoticeable. Small change is shown on the eigenvalues with one rewiring operation.\\n\\n2.
The proposed ReWatt method is effective in attacking the graph classification algorithm, facilitated by the policy network to pick the edges.\\n\\n3. ReWatt outperforms the RL-S2V in terms of success rate, especially when the second step in the rewiring process is not limited to 2 hops away from v1.\\n\\n4. The paper measured the relative difference between the graph embeddings in terms of L2 norm and measured the KL-divergence in probabilities.\\n\\nCons\\n\\n1. It's quite surprising that ReWatt achieves higher success rate than RL-S2V (first two rows of Table 1). RL-S2V considers a properly larger set of attacks and uses Q-learning (in contrast to actor-critic in ReWatt). So is it the conclusion that actor-critic is better than Q-learning? Perhaps it will be illustrative to experiment with replacing Q-learning in RL-S2V by actor-critic. This can be implemented in the framework of ReWatt: in Eq 5, replace $p_{fir} * p_{thi}$ by $p(add/remove | e_t)$.\\n\\n2. The attack is specifically designed for graph classification, while the graph convolutional filter is widely used in other problems like node classification and link prediction. Can it be applied to such problems as well?\\n\\n3. In addition to RL-S2V, it will be helpful to compare with Nettack (Z\\u00fcgner et al., 2018). It employs an admissible set of perturbations, which can be adapted for the rewiring attack.\\n\\n4. The paper shows the change of eigenvalues under one rewiring operation. How does it change after multiple operations? In addition, the smaller change to the eigenvalues is compared with rewiring to more distant nodes or adding an edge between two distant nodes. That is, it is under a *given* $v_{fir}$ and $v_{sec}$. A different attack may select a different $v_{fir}$ and $v_{sec}$ in the first place. So it is still not clear whether rewiring leads to less noticeable changes.\\n\\n5. The experiment splits the dataset into three parts, training set, rewiring operation set, and test set. However, for those predicted incorrectly on the rewiring operation set, the success rate should not be counted. Perhaps this is already done?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors studied the adversarial attack problem for graph classification with graph convolutional networks. After observing that traditional attacks by adding or deleting edges can change graph eigenvalues, the authors proposed to attack via a rewiring operation which makes smaller changes. Rewiring does not change the graph edge number and the average degree. Further, the authors propose an RL-based learning method to learn the policy of doing rewiring operations. Experiments show that the proposed method can make more successful attacks on social network data than baselines and previous methods.\\n\\nThe idea of using rewiring to make graph attacks is interesting and sensible. The proposed RL-based method, where the search space is constrained, can also solve the problem. However, I have a few concerns on the experiments.\\n\\n1. In figure 3, the authors also show that the proposed method can make less noticeable changes to the eigenvalues. But are these changes still noticeable compared to the original ones? Please also show this information.\\n2.
Using only 2% of the data for testing is too little in my view. The authors should increase this number. In addition, how many replications of the experiments did the authors run? The authors should report the variance of the results and perform significance tests if needed.\\n3. What is the prediction accuracy of the target classifier? Did the attacker flip more correct predictions?\"}" ] }
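The rewiring operation that these reviews interrogate is concrete enough to pin down in code. Below is a minimal illustration of one rewiring step as the reviews describe it (delete an existing edge, reconnect its first endpoint to a node exactly two hops away), not the authors' implementation; the function name `rewire_once`, the candidate construction, and the uniform random choices are hypothetical stand-ins for the learned policy network.

```python
import random
import networkx as nx

def rewire_once(g: nx.Graph, rng: random.Random) -> nx.Graph:
    """One rewiring step: drop edge (v_fir, v_sec), add edge (v_fir, v_thi)
    with v_thi exactly two hops from v_fir. The edge count and average
    degree are preserved, matching the unnoticeability argument above."""
    g = g.copy()
    v_fir, v_sec = rng.choice(list(g.edges()))          # first-step choice
    one_hop = set(g.neighbors(v_fir))
    two_hop = {w for u in one_hop for w in g.neighbors(u)}
    candidates = two_hop - one_hop - {v_fir}            # 2 hops away, not 1 hop
    if not candidates:
        return g                                        # no legal third node; skip
    v_thi = rng.choice(sorted(candidates))              # nodes assumed sortable
    g.remove_edge(v_fir, v_sec)
    g.add_edge(v_fir, v_thi)                            # v_thi was not a neighbour
    return g
```

One edge is removed and one is added per step, which is why, as Review #2 notes, the number of edges and the average degree stay unchanged.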
Hyl7ygStwB
Incorporating BERT into Neural Machine Translation
[ "Jinhua Zhu", "Yingce Xia", "Lijun Wu", "Di He", "Tao Qin", "Wengang Zhou", "Houqiang Li", "Tieyan Liu" ]
The recently proposed BERT (Devlin et al., 2019) has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks enough exploration. While BERT is more commonly used for fine-tuning than as a contextual embedding in downstream language understanding tasks, in NMT our preliminary exploration shows that using BERT as a contextual embedding is better than using it for fine-tuning. This motivates us to think about how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at https://github.com/bert-nmt/bert-nmt
[ "BERT", "Neural Machine Translation" ]
Accept (Poster)
https://openreview.net/pdf?id=Hyl7ygStwB
https://openreview.net/forum?id=Hyl7ygStwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "yxG2CveYIu", "BkxicGFioB", "H1xIHxKsjr", "rklGWyKojB", "S1epjIzD5B", "SkeX8nWv5S", "rJgB3-oRtS", "rJxBSkziYr", "HyeCZACtYr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739327, 1573782162893, 1573781566329, 1573781242051, 1572443812631, 1572441162730, 1571889580931, 1571655484881, 1571577349720 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2055/Authors" ], [ "ICLR.cc/2020/Conference/Paper2055/Authors" ], [ "ICLR.cc/2020/Conference/Paper2055/Authors" ], [ "ICLR.cc/2020/Conference/Paper2055/Authors" ], [ "~SICHENG_YU1" ], [ "ICLR.cc/2020/Conference/Paper2055/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2055/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2055/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors propose a novel way of incorporating a large pretrained language model (BERT) into neural machine translation using an extra attention model for both the NMT encoder and decoder. The paper presents thorough experimental design, with strong baselines and consistent positive results for supervised, semi-supervised and unsupervised experiments. The reviewers all mentioned lack of clarity in the writing and there was significant discussion with the authors. After improvements and clarifications, all reviewers agree that this paper would make a good contribution to ICLR and be of general use to the field.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer#2\", \"comment\": \"Thanks a lot for your valuable comments and suggestions!\\n\\n## About writing ##\\nWe have carefully revised the paper according to your suggestions. Considering that the paper is still under review (by other reviewers), we did not compress the article within eight pages, which would lead to significant changes to the organization and correspondingly additional workload for other reviewers. We will revise it after the review period.\\n\\n## About additional data ##\\nVery good point! Yes, any model leveraging BERT will indirectly benefit from additional data. The difference is that back-translation (briefly, BT) leverages the unlabeled data from the target side, while we leverage the data from source side. That is, our model is complementary to BT. In Section 5.4 of the original submission, we have already verified that our method can further improve the results of BT. \\nNote that it is usually costly to back translate a large amount of monolingual data due to the decoding process, and therefore BT usually takes much longer time for training. In contrast, we do not need to translate the unlabeled data when using BERT-fused model, because BERT is already pretrained and publicly available. The BERT module in our approach is fixed and does not need to be updated, which does not significantly increase training time. \\n\\nWe back translate 1M,2M, 5M, 15M and 25M unlabeled German wiki corpus (used for training BERT) and run BT on IWSLT\\u201914 En->De translation. The BLEU scores of above five settings are 29.42, 29.76, 29.10, 28.26 and 27.34 respectively. According to the above results, simply increasing wiki data for BT actually hurts the translation accuracy on IWSLT: 29.76 for 2M wiki data vs 27.34 for 25M wiki data. 
The highest BLEU score 29.76 of BT comes from 2M data, which is not as good as ours (30.45). This verifies the effectiveness of our approach while leveraging monolingual data. Please refer to Appendix B.5 for more detailed discussions.\\n\\n## The BERT model for each task ##\\nWe use the BERT models archived by Huggingface (https://github.com/huggingface/transformers). \\nFor IWSLT\\u201914 tasks, we choose BERT_{base} models.\\n1.\\tIWSLT\\u201914 En->{De, Es, Fr, Zh}, we choose \\u2018bert-base-uncased\\u2019.\\n2.\\tIWSLT\\u201914 De->En, we choose \\u2018bert-base-german-cased\\u2019.\\nFor WMT\\u201914 En->{Fr, De}, we choose \\u2018bert-large-uncased\\u2019, which is a BERT_{large} model.\\nFor WMT\\u201916 Ro->En, we choose \\u2018bert-base-multilingual-cased\\u2019, because there is no BERT specially trained for the Romanian. \\nFor the two unsupervised NMT tasks, we choose the XLM models (cross-lingual pretrained language models)\\n1.\\tunsupervised En->Fr, we choose \\u2018xlm-mlm-enfr1024\\u2019\\n2.\\tunsupervised En->De, we choose \\u2018xlm-mlm-enro1024\\u2019\\nAll these details are provided in Appendix D.\\n\\n## Subword tokenization ##\\nWe assume you are talking about Table 1. While using BERT to initialize the encoder of NMT, we use BERT vocabulary and tokenization; while using XLM to initialize the encoder of NMT, we use XLM vocabulary and tokenization. We also use BERT vocabulary and tokenization for standard Transformer, which leads to similar accuracy (our 28.57 vs BERT 28.18). \\n\\n## Statement ## \\nWe revise the ablation study considering all review comments. As stated in ``Training strategy\\u2019\\u2019 of Section 5.1, we first train a Transformer model until convergence, then use this model to initialize the encoder and decoder of the BERT-fused model. The BERT-encoder attention and BERT-decoder attention are randomly initialized. In the ablation study of Section 6, \\u201cTraining NMT module from scratch\\u201d means that the encoder and decoder of BERT-fused model are not initialized from a pre-trained Transformer model but randomly initialized. We now change \\u201cTraining NMT module from scratch\\u201d to \\u201cRandomly initialize the encoder/decoder of BERT-fused\\u201d.\"}", "{\"title\": \"Response to Reviewer#1\", \"comment\": \"Thanks a lot for your valuable comments and suggestions!\\n\\n1.\\tYes, when the encoder is pretrained with a BERT/XLM model, it is finetuned rather than frozen. \\n2.\\tAbout Table 1 \\u201cpretraining can decrease the performance significantly\\u201d. Indeed, we have no definitive answer to explain this observation so far. Domain mismatch is one of our conjectures. Another conjecture is that XLM uses a different codebase compared to Fairseq-pytorch. We tried our best to boost the performance of XLM on IWSLT, including tuning different dropout rates and learning rates. As shown in Table 5 in the paper, for Ro->En, our reproduced Transformer baseline using Fairseq-pytorch is 33.12, while in the XLM paper, the Transformer baseline is only 28.4 (see Table 3 in XLM paper), which is a very significant gap. Similar phenomenon is also observed in WMT\\u201914 En->De (see Appendix B1). Since Transformer baseline already achieves very high accuracy, it might be difficult for XLM to further boost the accuracy.\\n3.\\tWe simplified the algorithm in Section 4 in the updated version. 
Please kindly take a look.\", \"about_bert_decoder_attention\": \"First, by analogy with the encoder-decoder attention, we use BERT-decoder attention to allow the decoder to explicitly attend to the BERT output instead of leveraging this information implicitly/indirectly from the encoder output. Second, we have done an ablation study in Section 6 -> Study for training strategy and network architecture -> (3). If the BERT-decoder attention is removed, the performance drops from 30.45 to 29.90, which demonstrates that leveraging BERT-decoder attention is the more effective way.\\n4.\\tNote that the drop-net operation is performed independently in different layers. Thus, although the self-attention and BERT-attention never meet in the same layer, they can meet across layers, e.g., self-attention in the l-th layer and BERT attention in the (l+1)-th layer. Please see our code at line https://github.com/bert-nmt/bert-nmt/blob/75bd2120a0302c6ae413a58276a2c0759a19287c/fairseq/models/transformer.py#L1356 and line https://github.com/bert-nmt/bert-nmt/blob/75bd2120a0302c6ae413a58276a2c0759a19287c/fairseq/models/transformer.py#L1546.\\n5.\\tFollowing your suggestions, we replaced the BERT module in our algorithm with a pretrained NMT encoder (previously trained with a different random seed and without the fused architecture). On IWSLT\\u201914 En->De and De->En, this algorithm achieves 28.99 and 35.26 BLEU respectively, not as good as our method (30.45 and 36.11). This shows the advantage of BERT over a conventional encoder.\\n6.\\tFor the ensemble results, please kindly refer to \\u201c## About better comparisons ##\\u201d of Reviewer 3 and Appendix B.2.\\n7.\\tThanks for your comment on the advantage of our method with respect to the different-tokenization problem. As you suggested, we now discuss and highlight it in the paragraph before Section 4.2.\\n8.\\tThanks for pointing out the problems in the related work. We have already corrected them.\"}", "{\"title\": \"Response to Reviewer#3\", \"comment\": \"Thanks a lot for your valuable comments and suggestions!\\n\\n## About unclear explanations ##\\nWe have revised the paper according to your suggestions and uploaded a new version. Specifically, for your questions:\\n1.\\t\\\"Function cascade\\\" means that the functions are applied to the input in a cascaded way. We make it clearer in the current version: \\\"..., the input is processed by self-attention, encoder-decoder attention and BERT-decoder attention sequentially\\\" and provide a mathematical formulation. Currently, we move this part to 'Part I of Appendix B.2'.\\n2.\\tAs stated in \\\"Training strategy\\\" of Section 5.1, we first train a Transformer model until convergence, then use this model to initialize the encoder and decoder of the BERT-fused model. The parameters in BERT-encoder attention and BERT-decoder attention are randomly initialized. In the ablation study of Section 6, \\u201cTraining NMT module from scratch\\u201d means that the encoder and decoder of the BERT-fused model are not initialized from a pre-trained Transformer model but randomly initialized. We now change \\u201cTraining NMT module from scratch\\u201d to \\u201cRandomly initialize the encoder/decoder of BERT-fused\\u201d for clarity.\\n3.\\tMiculicich et al. (2018) released their code at https://github.com/idiap/HAN_NMT. We have already tried our best to tune this model but failed to achieve higher results than our baselines. Our conjecture is that (Miculicich et al. 
2018) use a different code base (OpenNMT) instead of Fairseq-Transformer, which may cause several differences in implementation. We will conduct more studies in the future.\\n\\n## About better comparisons ##\\n1.\\t\\u201cThe proposed architecture with (fixed) random vectors instead of the BERT's contextualized embedding\\u201d: We implemented this algorithm and conducted experiments on IWSLT\\u201914 En->De and IWSLT\\u201914 De->En. Such an algorithm achieved a 28.91 BLEU score for En->De and 35.00 for De->En. Indeed, this algorithm outperforms the standard Transformer, where the two BLEU scores are 28.57 and 34.64 respectively. However, its accuracy is still far behind our proposed method (30.45 and 36.11), indicating that the improvement of our model mainly comes from pretrained BERT instead of purely increasing the number of parameters. (See Table 6 and Section 6 -> Study for training strategy and network architecture -> (2) for more details.)\\n2.\\tFor ensembles: On IWSLT\\u201914 En->De, the ensemble of two, three and four standard transformer models can lead to 29.71, 30.08 and 30.18 BLEU scores respectively. Our BERT-fused model (30.45) beats all those scores.\\nFurthermore, the BERT-fused model can also benefit from ensembling. Ensembles of two, three and four BERT-fused models can lead to 31.09, 31.45 and 31.85 BLEU scores, outperforming the single BERT-fused model by up to 1.40 points. Details are reported in Appendix B.2 of the updated version.\\n3.\\tWe enriched the discussions in Section 6. Please kindly refer to the new version of our paper.\"}", "{\"title\": \"Re: Dev/Test set of IWSLT tasks\", \"comment\": \"Hi, Sicheng.\\n\\nThanks for your interest in our work.\\n\\nWe guess that you did not find the real test set. The files you used, dev2010 and tst2010, exist in the training archive. You should download the corresponding test archive at the following link:\", \"https\": \"//wit3.fbk.eu/mt.php?release=2017-01-ted-test\", \"we_re_check_our_result_of_en_fr_through_the_following_command\": \"cat $your_output_file | python sacreBLEU/sacrebleu.py -t iwslt17 -l en-fr\\n\\nand you can get\\n\\nBLEU+case.mixed+lang.en-fr+numrefs.1+smooth.exp+test.iwslt17+tok.13a+version.1.4.2 = 38.7 64.9/44.4/32.5/23.9 (BP = 1.000 ratio = 1.048 hyp_len = 28258 ref_len = 26962)\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Dev/Test set of IWSLT tasks\", \"comment\": \"I am confused about your split of the IWSLT tasks. What do you mean by 'For other tasks, we do not lowercase the words and use the official validation/test sets of the corresponding years.'? I found that in IWSLT17 there is only dev2010 and several test sets. I just tried BERT in machine translation with sacreBLEU on IWSLT EN-FR using dev2010 as the validation set and tst2010 as the test set. However, I only get sacreBLEU 30.38. Did I use the wrong set split?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper discusses a method that effectively incorporates a (large) pre-trained LM, such as BERT and XLM, for improving the performance of NMT.\\n \\nThe motivation of this paper is rather straightforward and not novel; many researchers can quickly think of such an idea of incorporating the power of the recent (rapid) development of pre-trained LMs into NMT.\\nFrom this perspective, this paper is not very exciting. 
\\nHowever, as described in the paper, we often fail to improve (and may even degrade) the performance of NMT when we straightforwardly incorporate a pre-trained LM.\\nThus, many researchers/developers might want to know a practical approach to integrating a pre-trained LM into NMT.\\nThis paper provides a straightforward but smart way to incorporate pre-trained LMs, which is not trivial in the community.\\nIn this sense, this paper might have a considerable influence on the community.\\nI was a bit surprised by the apparent effectiveness of the proposed method since I have also attempted to apply pre-trained LMs to NMT and have not obtained a good result.\\n \\n \\nExperimental results are mostly convincing; the authors conducted comprehensive and extensive experiments on many settings, such as supervised NMT with low- and high-resource settings, a semi-supervised NMT setting by back-translation, document-level MT, and unsupervised NMT.\\nThe results were also promising; the proposed method consistently outperformed conventional methods. \\nI think these results are useful for many readers.\\nMoreover, such findings also offer further insights for many researchers who aim to apply BERT to many other tasks, especially for text generation tasks.\\n \\n\\nHere are my concerns about this paper.\\n\\n1, unclear explanations\\nThe writing can be much improved. Readers might be able to guess, but several descriptions are hard to follow, or detailed explanations are missing.\\nFor example, what is the exact operation of \\\"function cascade\\\"? \\nWhat is the difference between the \\\"Training NMT module from scratch\\\" and \\\"Standard Transformer\\\" in Table 6? What is the main reason for the lower performance of (Miculicich et al. (2018)) than that of sentence-level NMT in Table 4?\\n\\n2, better comparisons\\nI think the authors need to confirm another model setting for a fairer comparison, something like \\\"The proposed architecture with (fixed) random vectors instead of the BERT's contextualized embeddings\\\".\\nThis is because we sometimes observe improved performance for the above model compared with the original one.\\nWe can interpret this improvement as the effect of increasing the weight parameters by injecting the additional random vectors into the original architecture.\\nTherefore, I think the above model setting can improve the performance of standard Transformers, which can be a preferable counterpart of the proposed method.\\nMoreover, the proposed method is closely related to model ensembling since the method utilizes two separate models.\\nTherefore, the authors should also report the results of model ensembling for better comparisons.\\n \\n3, less discussion of the experimental results\\nI found minimal discussion of the results.\\nFor example, in the ablation study, the authors only show (list) the observations from their results with no discussion.\\nThe authors should provide discussions about how and why their method (architecture) can improve the performance compared with a similar (and current de facto standard) approach, like the fine-tuning setting that can often improve most of the other NLP tasks.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper explores the use of BERT to improve Neural Machine Translation (NMT) both in supervised, 
semi-supervised and unsupervised settings. The authors first show that using BERT to initialize the encoder and/or the decoder does not bring any clear improvement, while using it as a feature extractor performs better. Based on this finding, the authors propose a new approach to integrate BERT in NMT, named BERT-fused NMT, which incorporates BERT representations from the input sequence into the encoder and decoder attention mechanisms.\", \"I am ambivalent about this paper. On the one hand, the paper presents a thorough experimental evaluation, with strong baselines (often outperforming their original implementation) and results that can be interesting from different angles, and the reported improvements are consistent. However, the paper is rather poorly written and some important details are not adequately described, which left me with some concerns and an overall negative impression as I read through the paper. More concretely:\", \"The paper is rather poorly written. There are many expressions that sound ungrammatical or otherwise unnatural to me (although I am not a native speaker myself) and, more importantly, the overall exposition of ideas is not sufficiently clear. I found the paper difficult to follow, and I was left with many doubts as I read through it. In addition, the style in which some results are presented is inappropriate for an academic paper (e.g. \\\"Obviously, our proposed BERT-fused NMT can improve the BLEU scores\\\"), although I understand that this was probably not intentional.\", \"To make things worse, the paper is 10 pages long, and according to the CFP reviewers are \\\"instructed to apply a higher standard to papers in excess of 8 pages\\\". I think that the paper could be fit in the regular 8 page limit.\", \"The pre-trained BERT models that the authors use were trained on different (and generally larger) training data than what they use for the NMT training (e.g. they all use Wikipedia). As such, the models that build on BERT are indirectly using this additional training data. How can we make sure that the reported improvements are not due to this additional data? What would happen if the same data was used for the baseline systems (e.g. through back-translation)? Also, please clearly state which pre-trained model you use for each specific experiment.\", \"The treatment of subword tokenization is not given sufficient attention and raises some concerns to me. It seems clear that the authors combine different subword tokenizations for their proposed system (i.e. BERT and the NMT encoder/decoders use a different subword vocabulary). However, it is not clear to me how this is handled in the baseline systems that use BERT for initialization only, for which a mismatch in tokenization would be problematic.\", \"I often find it difficult to understand what the authors did exactly for each of the reported systems. For instance, what is the difference between \\\"Standard transformer\\\" and \\\"Training NMT module from scratch\\\" in Table 6? 
I cannot see any, yet the difference in BLEU is 1.5.\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an approach to incorporate BERT pretrained sentence representations within an NMT architecture.\\nIt shows that simply pretraining the encoder of an NMT model with BERT does not necessarily provide gains (and can even be detrimental) and proposes instead to add a new attention mechanism, both in the encoder and in the decoder. The modification is relatively simple, but provides significant improvements in supervised and unsupervised MT, although it makes the model slower and computationally more expensive. The paper contains a lot of experiments, and a detailed ablation study.\\n\\n===\\n\\nI'm very surprised by the results in Table 1, i.e. the fact that pretraining can decrease the performance significantly. The provided explanation \\\"Our conjecture is that the XLM model is pre-trained on news data, which is out-of-domain for IWSLT dataset mainly about spoken languages\\\" is not satisfactory to me. The domain mismatch is also there in the majority of GLUE tasks, SQUAD, etc. and yet pretraining with BERT significantly improves the performance on these tasks. When the encoder is pretrained with a BERT/XLM model, I assume the encoder is not frozen, but finetuned?\\n\\nThe description of the algorithm in Section 4 could be simplified a lot, I feel. Overall, the attention in the encoder is simply replaced by two attention layers: one over the previous layer like in a standard setting, and one on top of the BERT representation. Also I don't understand why the attention over the BERT sequence is also necessary in the decoder. Shouldn't this information already be captured by the encoder output?\\n\\nThe Drop-Net Trick is interesting. But the fact that 1.0 gives the best performance (Section 6.2) is very unintuitive to me. This means that the model will never consider the setting with two attentions at training time, although this is what it does at test time.\\n\\nIn Table 6, you propose experiments with 12 and 18 layers for fair comparison, because as you mention, your model with BERT-fused has more parameters. But IWSLT is a very small dataset and it would have been surprising if using 18 layers actually helped (overfitting is much more likely in that setting). Instead, I think something like an ensemble model would be a fairer comparison. In fact, the BERT-fused is essentially an ensemble model of the encoder.\\nCould you try the following experiment on IWSLT, where you do not pretrain the BERT model with the BERT objective, but with an NMT encoder trained in a regular supervised setting (i.e. do not reload a BERT model, but an NMT encoder that you previously trained without the fused architecture)?\\n\\nOverall, I think the gains are nice, but I would really like to see the comparison I mentioned just above, and comparisons with ensemble models. The proposed model is significantly larger / slower than the baseline models considered, and I wonder if you could not achieve the same gains with ensemble models.\\n\\nSomething I like about the approach is that it is quite generic in the sense that you can provide any external sequence of vectors as input to your encoder. As a result, it is possible to leverage a model pretrained with a different tokenization. 
Tokenization is often an issue with pretraining in NLP (how do you leverage a model trained without BPE if you actually want to use BPE in your new model?). The proposed approach does not have this constraint, and I think this is something you should highlight more in the paper.\\n\\n===\", \"small_details_in_the_related_work_section\": [\"I would cite \\\"Sutskever et al, 2014\\\" for the LSTM encoder, along with \\\"Hochreiter & Schmidhuber\\\", and not only \\\"Wu et al, 2016\\\"\", \"Removing the NSP task was proposed in \\\"Lample & Conneau, 2019\\\", not in \\\"Liu et al, 2019\\\"\"]}" ] }
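Review #1's summary of the architecture is concrete enough to sketch: each encoder attention is replaced by two parallel attentions, one standard and one over the BERT output, combined via the drop-net trick (one branch sampled per layer during training, the average of both at test time, which is the rate-1.0 setting the reviewer found unintuitive). The following is a hedged PyTorch illustration of that reading, with assumed dimensions and names, not the authors' fairseq code:

```python
import random
import torch
import torch.nn as nn

class BertFusedEncoderLayer(nn.Module):
    """One encoder layer with two attention streams, per the review's
    description. Layer norm placement and sizes are assumptions."""
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads)
        self.bert_attn = nn.MultiheadAttention(d_model, n_heads)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, bert_out: torch.Tensor) -> torch.Tensor:
        # x, bert_out: (seq_len, batch, d_model); bert_out is assumed
        # precomputed by the fixed BERT module and projected to d_model.
        s, _ = self.self_attn(x, x, x)                  # standard self-attention
        b, _ = self.bert_attn(x, bert_out, bert_out)    # attend to BERT output
        if self.training:                               # drop-net at rate 1.0:
            h = s if random.random() < 0.5 else b       # sample one branch per layer
        else:
            h = 0.5 * (s + b)                           # average both at test time
        return self.norm(x + h)                         # residual + layer norm
```

Because the branch is sampled independently per layer and per forward pass, the self-attention and BERT-attention streams still mix across layers during training, which is the point the authors make in their response about the drop-net operation.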
BkgGJlBFPS
Unsupervised Hierarchical Graph Representation Learning with Variational Bayes
[ "Shashanka Ubaru", "Jie Chen" ]
Hierarchical graph representation learning is an emerging subject owing to the increasingly popular adoption of graph neural networks in machine learning and applications. Loosely speaking, work under this umbrella falls into two categories: (a) use a predefined graph hierarchy to perform pooling; and (b) learn the hierarchy for a given graph through differentiable parameterization of the coarsening process. These approaches are supervised; a predictive task with ground-truth labels is used to drive the learning. In this work, we propose an unsupervised approach, \textsc{BayesPool}, with the use of variational Bayes. It produces graph representations given a predefined hierarchy. Rather than relying on labels, the training signal comes from the evidence lower bound of encoding a graph and decoding the subsequent one in the hierarchy. Node features are treated as latent in this variational machinery, so that they are produced as a byproduct and are used in downstream tasks. We demonstrate a comprehensive set of experiments to show the usefulness of the learned representation in the context of graph classification.
[ "Hierarchical Graph Representation", "Unsupervised Graph Learning", "Variational Bayes", "Graph classification" ]
Reject
https://openreview.net/pdf?id=BkgGJlBFPS
https://openreview.net/forum?id=BkgGJlBFPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "2Gn_k1idq4", "B1xrAKUnoH", "ByxhQGLhoH", "Byx7A-82jS", "r1ekj-InsS", "SJljZmHL9S", "SJg6jDF6tr", "BJezGUojtr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739298, 1573837260853, 1573835300472, 1573835210556, 1573835158902, 1572389635319, 1571817381211, 1571694090214 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2054/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2054/Authors" ], [ "ICLR.cc/2020/Conference/Paper2054/Authors" ], [ "ICLR.cc/2020/Conference/Paper2054/Authors" ], [ "ICLR.cc/2020/Conference/Paper2054/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2054/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2054/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper presents an unsupervised method for graph representation, building upon Loukas' method for generating a sequence of gradually coarsened graphs. The contribution is an \\\"encoder-decoder\\\" architecture trained by variational inference, where the encoder produces the embedding of the nodes in the next graph of the sequence, and the decoder produces the structure of the next graph.\\n\\nOne important merit of the approach is that this unsupervised representation can be used effectively for supervised learning, with results quite competitive to the state of the art. \\n\\nHowever the reviewers were unconvinced by the novelty and positioning of the approach. The point of whether the approach should be viewed as variational Bayesian, or simply variational approximation was much debated between the reviewers and the authors. \\n\\nThe area chair encourages the authors to pursue this very promising research, and to clarify the paper; perhaps the use of \\\"encoder-decoder\\\" generated too much misunderstanding. \\nAnother graph NN paper you might be interested in is \\\"Edge Contraction Pooling for Graph NNs\\\", by Frederik Diehl.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Variational Bayes\", \"comment\": \"Again, I do not see the point in calling variational approximation variational Bayes is there is no prior on the parameters. You seem to be confusing variational approximation in e.g. EM where a complex distribution is replaced by a factored approximation with variational Bayes where the posterior distribution of the *parameters* is approximated by a factored distribution. Notice that using the Bayes rules does not turn a frequentist approach into a Bayesian one (eg the so-called naive Bayes classifier is not a Bayesian approach unless you add a prior on its parameter). And thus writing \\\"A core subject of Bayesian inference is concerned with estimating the posterior distribution p(z|x)\\\" where x is the observed data and z the latent variable is clearly misunderstanding of Bayesian inference.\"}", "{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thank you very much for raising the concerns. We address them below. Hope the response helps you reassess the contribution of the work.\", \"re\": \"When is the node representation of the coarsening sequence needed?\\n\\nThe use of node features (both those of the original graph and the learned ones of the coarse graphs) is explained in section 3.5. 
Specifically, for each graph in the sequence, the node features are pooled to form a graph embedding, such that all graph embeddings can be concatenated to form the final graph representation. A simple predictive model (e.g., MLP) is trained separately for the downstream task (e.g., graph classification). This predictive model will vary depending on the nature of the task.\"}", "{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thank you very much for the comments. We respond to them in the following.\", \"re\": \"Terminology regarding \\\"variational Bayes\\\".\\n\\nThe terminology is subject to debate but we would like to share our opinion. It is not relevant to the contribution of this paper.\\n\\nTraditionally, the motivation for variational Bayes is approximate inference. The machinery was borrowed recently for developing generative models (VAE). More interesting in these models is the generative part (still, the mathematics is the same). The inference part serves only as a tool for training. The prior therein is an assumed distribution for the latent space; it is not used for parameters. In VAE, the parameters are with respect to the encoder network and the decoder network. Unless one performs Bayesian deep learning, no priors on the network parameters exist, for simplicity. On the other hand, the latent space is the prior for VAE. This prior may be made extremely simple (such as a standard Gaussian), or slightly more expressive (such as a factored Gaussian parameterized by mean and variance), or even more complex. In this work, we find that the simple choice suffices.\"}", "{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you very much for raising the concerns. Our response is in the following. Hope the response helps you reassess the contribution of the work.\", \"re\": \"Concern (2); unsupervised representation.\\n\\nThe power of unsupervised representation comes from the fact that it is trained without knowing any downstream task. The learning uses much less information. We believe it is fair to consider that the method is successful if the performance of a downstream task is comparable with that resulting from learning with additional supervised information.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an unsupervised approach to learn a representation of graphs. The idea comes from an encoder-decoder architecture, which is common in the related literature. The paper uses a variational Bayes approach in the learning process. Thorough experiments are provided to justify the feasibility of the method.\\n\\nThis paper provides an unsupervised style of learning graph representations, which need not be coupled with a specific downstream task, so it may be more useful in general; also, the experiments themselves seem to be at least comparable to recent methods. \\n\\nHowever, I vote for rejecting this submission for the following concerns. 
\\n\\n(1) I did not find too many significant differences between this paper and [Kingma & Welling, 2014] in the design of the encoder-decoder architecture as well as the learning procedure (I am not an expert in this area so please correct me if I am wrong).\\n\\n(2) The intuition of learning the representation in an unsupervised manner is interesting and important to me, though the experiments are mostly on classification tasks. I think it would be helpful to demonstrate the representation power of the learned graph representation in tackling other tasks.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose in this paper a new unsupervised graph representation learning method. The method leverages recent advances in graph coarsening, mainly Loukas' method. The key idea of the method consists in using a reconstruction target that is not the classical one in an auto-encoder setting. More precisely, the encoder takes as input the original adjacency matrix and node features, but the decoder only aims at reconstructing the coarse adjacency matrix (obtained via Loukas' method).\\n\\nThe experimental evaluation is quite thorough and shows that the method performs quite well, especially considering it is unsupervised but is compared to supervised representation methods. It would be nice to include statistical tests to assess the significance of the differences in cases where accuracies are very close to one another. A missing part would be to explore the relevance of the learned representation for other tasks (i.e. to use a multi-task data set). Of course, as the representation is learned in an unsupervised way, one can argue that the current evaluation already provides an answer.\\n\\nOverall, I find the paper clear, but the variational Bayes part could be much clearer. In fact, I'm not sure why this is presented as variational Bayes and not only variational. I do not see any prior distribution over parameters, for instance. I understand that the recent \\\"tradition\\\" in variational auto-encoders is to use this terminology, but as a (part-time) Bayesian, this is a bit annoying.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This work proposes an unsupervised hierarchical graph representation learning method, named BayesPool. The method learns a coarsening sequence of graphs together with the corresponding node representations. The coarsening sequence is learned using the method in Loukas (2019). The node representations are learned using an encoder-decoder structure, where the encoder encodes a graph to coarsened node representations, and the decoder decodes the node representations to a coarsened graph. The adopted objective function is analogous to VAE, except that the decoder does not aim to reconstruct an identical graph. 
Experiments on graph classification are performed on 5 different datasets, and competitive accuracy is achieved.\", \"concerns\": \"The authors claim that the representation learnt in an unsupervised manner is more desirable in terms of generalization. However, they only provide very limited experimental results, which is not very convincing. Moreover, the authors also do not explain clearly when the node representation of the coarsening sequence is needed.\"}" ] }
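The downstream readout the author response describes — pool the node features of each graph in the coarsening sequence, concatenate the per-graph embeddings, and train a simple predictive model separately — reduces to a few lines. Below is a minimal sketch; mean pooling, the three-level hierarchy, and the MLP sizes are assumptions, since the response leaves the pooling operator and the predictive model open:

```python
from typing import List
import torch
import torch.nn as nn

def graph_representation(feats_per_level: List[torch.Tensor]) -> torch.Tensor:
    # feats_per_level[i]: (n_i, d) node features of the i-th coarse graph;
    # pool each level, then concatenate into one (levels * d,) vector.
    return torch.cat([h.mean(dim=0) for h in feats_per_level])

# A simple task-specific predictor trained separately (e.g. binary graph
# classification over an assumed 3-level hierarchy of 64-dim features).
classifier = nn.Sequential(
    nn.Linear(3 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
logits = classifier(graph_representation(
    [torch.randn(10, 64), torch.randn(5, 64), torch.randn(2, 64)]))
```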
SklM1xStPB
Copy That! Editing Sequences by Copying Spans
[ "Sheena Panthaplackel", "Miltiadis Allamanis", "Marc Brockschmidt" ]
Neural sequence-to-sequence models are finding increasing use in editing of documents, for example in correcting a text document or repairing source code. In this paper, we argue that existing seq2seq models (with a facility to copy single tokens) are not a natural fit for such tasks, as they have to explicitly copy each unchanged token. We present an extension of seq2seq models capable of copying entire spans of the input to the output in one step, greatly reducing the number of decisions required during inference. This extension means that there are now many ways of generating the same output, which we handle by deriving a new objective for training and a variation of beam search for inference that explicitly handle this problem. In our experiments on a range of editing tasks of natural language and source code, we show that our new model consistently outperforms simpler baselines.
[ "span copying", "sequence generation", "editing", "code repair" ]
Reject
https://openreview.net/pdf?id=SklM1xStPB
https://openreview.net/forum?id=SklM1xStPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "s-uuWDHWmG", "SkgpBHUKoS", "BkeGuNIKsB", "rJefKqf7jS", "HJgZV5fmiB", "Bkgo3FzXor", "rJxLOFzmjB", "BJxj26u45S", "H1xT0KVecH", "rklla_W3KH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739269, 1573639493508, 1573639273571, 1573231225577, 1573231144826, 1573231027093, 1573230958185, 1572273587184, 1571994068803, 1571719351618 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2053/Authors" ], [ "ICLR.cc/2020/Conference/Paper2053/Authors" ], [ "ICLR.cc/2020/Conference/Paper2053/Authors" ], [ "ICLR.cc/2020/Conference/Paper2053/Authors" ], [ "ICLR.cc/2020/Conference/Paper2053/Authors" ], [ "ICLR.cc/2020/Conference/Paper2053/Authors" ], [ "ICLR.cc/2020/Conference/Paper2053/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper2053/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2053/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes an addition to seq2seq models to allow the model to copy spans of tokens of arbitrary length in one step. The authors argue that this method is useful in editing applications where long spans of the output sequence will be exact copies of the input. Reviewers agreed that the problem is interesting and the solution technically sound. However, during the discussion phase there were concerns that the method was too incremental to warrant publication at ICLR. The work would be strengthened with a more thorough discussion of related work and additional experiments comparing with the relevant baselines as suggested by Reviewer 2.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Update\", \"comment\": \"We retrained our model without using our new marginalized objective (Eq. 2), and instead forcing the model to copy the longest possible span at each timestep, as done in Zhou et al. (2018). We find that the performance of the model significantly worsens (see updated Table 1; \\u201cforce copy longest\\u201d). We believe that this is due to the fact that the model fails to capture the spectrum of correct actions possible since at each point it learns that only one action is correct.\"}", "{\"title\": \"Update\", \"comment\": \"We ran the evaluation with a beam size of 100 (See Review #1). With a beam size of 100 the results on BFP small are as follows:\\n\\n\\u2022\\tExact Match: 17.89% (+0.2 increase from using a beam size of 20).\\n\\n\\u2022\\tMRR: 0.250 (from 0.247 when the beam size is 20)\\n\\nOverall, these suggest that the improvements are minimal. Using an alternative beam search method, that would preserve (e.g.) beams that contain a lot of copied tokens (at the cost of memory) does not seem to provide any particular advantage.\"}", "{\"title\": \"General Review Response and Paper Update (8 Nov)\", \"comment\": \"Thank you for all your thoughtful feedback and questions. We want to clarify some general points, and document the changes to a revision to the paper we have uploaded now:\\n\\n\\u2022\\tExperimental Results: Our model achieves new state-of-the-art results on the considered code-related tasks as well as on the natural language edit representation task. 
For the grammar error correction task, we compare the model to a baseline model similar to the one used in state-of-the-art works, but do not perform any of the pre-processing and pre-training steps, since these aspects are not a core novelty of our approach.\\n\\n\\u2022\\tRelated Work: We have extended our related work section with some of the works pointed out by the reviewers. This includes references to \\u201cLatent Predictor Networks for Code Generation\\u201d (Ling et al. 2016), which to our knowledge is the first to marginalize over copying actions and generation actions yielding the same results. Ours is a generalization of this marginalization strategy to spans of text.\\n\\nWe now also compare to \\u201cSequential Copying Networks\\u201d (Zhou et al. 2018), which also provides a mechanism for copying subsequences of the input, but does not present marginalization strategies over action sequences generating the same result, which are the main contribution of our paper.\\n\\n\\u2022\\tAdditional Visualizations: We have added Appendix A.1 and A.5 with additional visualizations of the span-copying mechanisms. This should hopefully provide better insights into the inner workings of our method.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions.\\n\\n\\u2022\\tEfficiency/Scalability: The computation cost of our algorithm during training is negligible. To identify all spans that can be copied from an input sequence of length $M$ in each location of a target sequence of length $N$, we use a standard dynamic programming approach [a] with cost $O(MN)$. This \\u201calignment\\u201d can be pre-computed in the data pre-processing before training. During training, the marginalization has $O(N^2)$ cost (see Fig. 2: at each of the $O(N)$ decoding steps there are at most $O(N)$ possible lookbacks). In practice, this can be parallelized efficiently on a GPU, and given that the loss is a scalar, the main computational cost still rests on the vector/matrix operations rather than on the marginalization.\\n\\nAt inference time, our beam search is not significantly slower than a more standard beam search (the only additional operation is the group_by_toks, which can be implemented using a HashMap in effectively constant time). \\n\\n\\u2022\\tWe have added a histogram of the sequence length distributions in appendix A.4. More than 60% of the copy actions copy spans longer than a single token in beam decoding on the BFP medium testset, with a median length of 16 tokens. Note that to measure this accurately, we disabled beam merging. Additionally, we added in Appendix A.1 visualizations of the span-copying attention for the example in Fig 3.\\n\\n\\u2022\\tWe have added a description of the benchmark datasets when we introduce them in the text.\\n\\n\\n[a] https://en.wikipedia.org/wiki/Longest_common_subsequence_problem\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions. We would be glad to add more details to the paper as you ask. Specifically, you mention the beam decoding section; could you let us know what information you\\u2019d have liked to see in that section that would make the algorithm and explanation more detailed and helpful?\\n\\nThank you for bringing to our attention some of the work we have not cited. We have now cited them in the paper. However, we disagree that our work is too incremental for publication with respect to those works. 
Although some parts of our model are necessarily a combination of existing components, SpanCopy is different in significant ways from those works, as follows:\\n\\n\\u2022\\tSequential Copying Networks (Zhou et al. 2018) proposes to copy spans (for text summarization tasks) by predicting the start and end of a span to copy. However, it lacks marginalization over different actions generating the sequences and instead uses teacher forcing towards the longest copyable span; and it does not adapt its inference strategy to reflect this marginalization. This marginalization is the core contribution of our paper.\\nAs discussed in Sect. 2, using this marginalization was crucial for good experimental results. We have started running experiments comparing a variant without the marginalization and simple beam decoding in our final experimental setting and will report once these have finished.\\n\\n\\u2022\\tQuickEdit (Grangier and Auli 2017) presents a machine translation method that accepts a source sentence (e.g. in German) and a guessed translation sentence (e.g. in English) that is annotated (by humans) with change markers. It then aims to improve upon the guess by generating a better translation avoiding the marked tokens. This is markedly different as (a) the model accepts as input the spans that need to be removed or retained in the guess sentence. In contrast, CopySeqSpan needs to infer this information. (b) QuickEdit does not mention having a copying mechanism, either for single tokens or for spans.\\n\\n\\u2022\\tLatent Predictor Networks (Ling et al. 2016) are one of the first papers to marginalize over a single-token copying action and a generation action yielding the same result. However, they do not consider copying spans of text, and hence only need to consider one decoding step at a time (essentially corresponding to the unnamed equation on page 2 of our submission).\\n\\n\\u2022\\tHybrid LMs (Grave et al. 2019) are similar to van Merri\\u00ebnboer et al. (2017), which we discuss in the related work section of our submission. These works also marginalize over different ways that yield the same output for language modeling (character-level, character n-gram and word-level). And although their training objective is similar to our objective in Eq. (2), they focus on language modeling without encoders, copying or span-copying, considering only a fixed set of possible outputs (words, character n-grams, characters). In contrast, here we consider arbitrary spans of the input and need to create representations of those spans.\\n\\n\\u2022\\tGhazvininejad et al. (2019), Lee et al. (2018) and the other non-autoregressive work we cited (Welleck et al., 2019; Stern et al., 2019; Gu et al., 2019) are indeed relevant but not directly related to this work, as they can also be augmented with a span-copying mechanism.\\n\\nWe have included the above in a revision of our submission.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comments and suggestions. To answer your questions:\\n\\n\\u2022\\tRe baselines: Apologies for the imprecision in our text: our baseline is a seq2seq model where the encoder is a (two-layer) biGRU and the decoder is a GRU. More precisely, all of our baselines are using the same codebase with span-copying turned off. 
We have made this explicit in the introduction of the evaluation section.\\n\\n\\u2022\\tRe biGRU architecture: The choice of (bidirectional) GRU networks was motivated by cursory early experiments in which GRU models performed similarly well as LSTM models, but were slightly faster.\\n\\n\\u2022\\tRe beam search: We are not sure we fully understand the question. The problem you seem to describe (that a low-scoring ray could eventually be completed to a high-scoring ray, but is not explored if it falls out of the beam) is a general problem of beam search, and does not seem to be influenced by our modification. The prevalence of this problem for a given decoder can be approximated by running beam search with substantially bigger beams and comparing the results (where the \\u201cgap\\u201d stems from rays that were dropped). We are running such an experiment on BFP small with a beam size of 100 and will report these results when they are finished.\\n\\n\\u2022\\tRe corner cases of beam search: We did not understand which corner cases are unclear. Could you provide more details on this?\\n\\n\\u2022\\tRe related work: We have added a comparison to \\u201cLatent Predictor Networks for Code Generation\\u201d in our latest revision, and were already citing the Levenshtein Transformer in our discussion of decoders that are not following a left-to-right decoding strategy.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new decoding mechanism that allows span-copying, which can be viewed as a generalisation of pointer networks.\\n\\nBecause action sequences of the same length can lead to sequences of different lengths, decoding becomes tricky; therefore the authors propose a variation of standard beam search that calculates a lower bound of sequence probabilities rather than the true probability of generation. This is achieved in practice by merging probabilities of sampled rays of actions generating the same sequence during the search.\\n\\nOne advantage of the proposed model is that it doesn't need to copy word by word to update sequences which need only minor changes, unlike the seq2seq model with copy actions, which, due to the way we train those models using the NLL loss, will likely assign high probabilities to the non-modified input.\", \"authors_evaluate_their_model_against_traditional_seq2seq_models_with_copy_actions_over_a_set_of_tasks\": [\"code correction tasks: two bug-fix pair (BFP) datasets of Tufano et al. (2019)\", \"grammar error correction (Bryant et al., 2017)\", \"learning edit representations\"], \"pros\": \"Overall I am in favour of this work's acceptance; it represents a neat modelling of copying sequences that integrates simply with seq2seq models, especially the transformer model.\", \"cons\": [\"One of the drawbacks of this method is the decoding strategy, although the authors present a motivated solution for that. The proposed variation of beam search based on calculating the lower bound seems ad hoc, and some corner cases are not explained in the paper (see the question below).\", \"Experiments could have been more thorough, especially in terms of architectures. 
I was disappointed to see the authors only comparing between a GRU-based seq2seq with copy actions (one baseline) and their model implemented over a biGRU seq2seq.\"], \"questions_to_authors\": [\"During inference using the proposed variation of beam search (e.g. k=5), what will happen, for example, if one ray of actions is dropped because it is not in the top 5, but this ray, if continued with future actions, would map to one of the existing top-scoring rays? Do you have a way to control that?\", \"What is the reason behind choosing the bi-gru architecture?\"], \"missing_references\": \"\", \"there_are_a_couple_of_similar_work_that_authors_might_want_to_add\": [\"Latent Predictor Networks for Code Generation https://arxiv.org/pdf/1603.06744.pdf\", \"An Operation Network for Abstractive Sentence Compression https://www.aclweb.org/anthology/C18-1091.pdf\", \"Levenshtein Transformer https://arxiv.org/pdf/1905.11006.pdf\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of editing sequences, such as natural language or source code, by copying large spans of the original sequence. A simple baseline solution to this problem is to learn a sequence to sequence neural network, which generates the edited sequence conditioned on the original one. This method can be improved by adding a copying mechanism, based on pointer networks, to copy tokens from the input. However, most of such existing approaches can only copy one input token at a time, which is a limitation when most of the input should be copied, which is the case for most editing tasks. In this paper, the authors propose a mechanism that can copy entire spans of the input instead of just individual tokens. In that case, a particular sequence can often be generated by many different actions (e.g. copying individual tokens, pairs of tokens, or the whole span). It is thus important to marginalize over all the actions that generated a particular sequence. This can be done efficiently, using dynamic programming, if the probability of an action depends on the generated tokens only, but not on the sequence of actions used to generate them. In the case of neural networks, this means that the decoder of the model takes the tokens as input, instead of the spans. To represent spans, the authors propose to use the concatenation of the hidden states corresponding to the beginning and end of the span. Then the probability of copying a span is obtained by taking the dot product between this representation and the current hidden state of the decoder, and applying the softmax. The authors evaluate the proposed approach on the following tasks: code repair, grammar error correction and edit representations (on wikipedia and c# code).\\n\\nThe paper is well written, and easy to follow, even if some sections could be a bit more detailed (for example, the section on beam search decoding). The problem studied in the paper - copying spans from the input - is interesting, and has applications in NLP or code generation. I think that the proposed solution is technically sound.\\nHowever, I have some concerns regarding the paper. First I believe that many relevant prior works are not discussed in the paper, making some technical contributions of the paper not novel. 
For example, previous methods were proposed to copy spans from the input [1], to edit existing sequences [2], or to marginalize over different generation sequences by conditioning only on the generated tokens (instead of the actions that generated the sequence) [3,4]. The body of work on iterative refinement for sequence generation is also probably relevant to this paper [5,6]. Additionally, I found the experimental section a bit weak, as most of the baselines used for comparison seem a bit weak. The proposed method is mostly compared on datasets that are relatively new, or on tasks such as grammar error correction where strong methods were excluded.\\n\\nOverall, I found the paper well written, and the proposed method to make sense. Unfortunately, I believe that the work is a bit incremental, most of the technical contributions having already been published. Since the experimental results are not very strong, I do not think the paper is good enough for publication at the ICLR conference.\\n\\n== References ==\\n\\n[1] Sequential Copying Networks, Qingyu Zhou, Nan Yang, Furu Wei, Ming Zhou, AAAI 2018.\\n[2] QuickEdit: Editing text & translations via simple delete actions, David Grangier, Michael Auli, 2017\\n[3] Latent Predictor Networks for Code Generation, Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, Phil Blunsom, ACL 2016\\n[4] Training Hybrid Language Models by Marginalizing over Segmentations, Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, Armand Joulin, ACL 2019\\n[5] Mask-Predict: Parallel Decoding of Conditional Masked Language Models, Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer, EMNLP 2019\\n[6] Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement, EMNLP 2018\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this work, the authors tackle the problem of span-based copying in sequence-based neural models. In particular, they extend the standard copying techniques of (Vinyals et al., Gulcehre et al., etc.) which only allow for single-token copy actions. Their span-based copy mechanism allows for multiple tokens to be copied at a time during decoding via a recursive formulation that defines the output sequence distribution as a marginal over the complete set of action combinations that result in the sequence being produced. The authors also propose a span-based beam decoding algorithm that scores output sequences via a sum over the probabilities of action sequences that produce the same output.\\n\\nThe authors evaluate their model on four tasks: code repair, grammar error correction, editing wikipedia, and editing code. They find that their proposed technique consistently outperforms single-token copy-based seq2seq baselines. They also show the efficacy of their proposed beam decoding mechanism and do some simple quantitative analysis showing that the model learns to copy spans longer than a single token.\\n\\n\\nIn general, I found this paper to be very clearly written with very good motivation for the proposed solution. In addition, I thought the authors did a good job of testing their model against a wide range of benchmark problems. It seems that their copy extension is a meaningful contribution. 
\\n\\nI do, however, have some questions regarding the evaluation, in particular the complexity of the baselines that were compared against. For example, the model consistently outperforms simple copy seq2seq baselines as well as the baselines with which the benchmark datasets were proposed (Tufano et al., Yin et al.). However, it does not seem that the span-based copying method is state-of-the-art. If it is not state-of-the-art, how far off the SOTA is this proposed architecture? Did the authors do any analysis whereby the span-copy mechanism was added to an existing SOTA model, and if so, did this still produce gains? It's a bit difficult to situate the exact power of this new mechanism, given that it is often only compared to a simplistic copy-seq2seq method. \\n\\nOther questions/feedback I have for the authors:\\n1) How efficient/scalable is the proposed mechanism? I would like to see a more formal treatment of the run-time of the training marginalization operation.\\n2) It would be nice to see a quantitative analysis of the distribution of copied span lengths (like some sort of histogram) for the datasets.\\n3) It would also be helpful to add some short descriptions of the benchmark datasets.\"}" ] }
SJlWyerFPS
DeepXML: Scalable & Accurate Deep Extreme Classification for Matching User Queries to Advertiser Bid Phrases
[ "Kunal Dahiya", "Anshul Mittal", "Deepak Saini", "Kushal Dave", "Himanshu Jain", "Sumeet Agarwal", "Manik Varma" ]
The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set. Unfortunately, state-of-the-art deep extreme classifiers are either not scalable or inaccurate for short text documents. This paper develops the DeepXML algorithm which addresses both limitations by introducing a novel architecture that splits training of head and tail labels. DeepXML increases accuracy by (a) learning word embeddings on head labels and transferring them through a novel residual connection to data impoverished tail labels; (b) increasing the amount of negative training data available by extending state-of-the-art negative sub-sampling techniques; and (c) re-ranking the set of predicted labels to eliminate the hardest negatives for the original classifier. All of these contributions are implemented efficiently by extending the highly scalable Slice algorithm for pretrained embeddings to learn the proposed DeepXML architecture. As a result, DeepXML could efficiently scale to problems involving millions of labels that were beyond the pale of state-of-the-art deep extreme classifiers as it could be more than 10x faster at training than XML-CNN and AttentionXML. At the same time, DeepXML was also empirically determined to be up to 19% more accurate than leading techniques for matching search engine queries to advertiser bid phrases.
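The head-to-tail embedding transfer named in this abstract can be pictured with a small sketch. This is speculative: the abstract only names the components, so the bag-of-embeddings input, the layer shapes, and the exact form of the residual connection are assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

class TailFeatures(nn.Module):
    """Hedged sketch: word embeddings learned on data-rich head labels are
    frozen and transferred to the tail network, with a trainable residual
    connection refining the representation for data-impoverished tail labels."""
    def __init__(self, head_embeddings: torch.Tensor):
        super().__init__()
        dim = head_embeddings.size(1)
        # embeddings pre-trained on head labels, kept fixed for tail training
        self.embed = nn.Embedding.from_pretrained(head_embeddings, freeze=True)
        self.residual = nn.Linear(dim, dim)   # assumed form of the residual

    def forward(self, token_ids, weights):
        # weighted bag of frozen head embeddings for a short text document
        base = (self.embed(token_ids) * weights.unsqueeze(-1)).sum(dim=1)
        return base + torch.relu(self.residual(base))
```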
[ "extreme multi label learning", "extreme classification", "deep extreme multi label learning", "deep extreme classification", "large output space" ]
Reject
https://openreview.net/pdf?id=SJlWyerFPS
https://openreview.net/forum?id=SJlWyerFPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Bf_KGoGLn", "ryg8yS8moS", "BylXPX8XjS", "B1gHAeImsH", "HyebKkIXiH", "S1e5yiS7jr", "BJe2vF789S", "SylqCgSG5H", "HyxA3eVAYr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798739241, 1573246174235, 1573245786677, 1573245132858, 1573244793142, 1573243618502, 1572383075641, 1572126929636, 1571860661704 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper2052/Authors" ], [ "ICLR.cc/2020/Conference/Paper2052/Authors" ], [ "ICLR.cc/2020/Conference/Paper2052/Authors" ], [ "ICLR.cc/2020/Conference/Paper2052/Authors" ], [ "ICLR.cc/2020/Conference/Paper2052/Authors" ], [ "ICLR.cc/2020/Conference/Paper2052/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper2052/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper2052/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a new method for extreme multi-label classification. However, this paper only combine some well known tricks, the technical contributions are too limited. And there are many problems in the experiments, such as the reproducibility, the scal of data set and the results on well-known extreme data sets and so on. The authors are encouraged to consider the reviewer's comments to revise the paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Official Blind Review #2 [Part 2/2]\", \"comment\": \"2. A. Please refer to 1 (A).\\n\\nB. In extreme classification, propensity-scored metrics are as critical, if not more so, than the vanilla metrics as far as real life applications are concerned [Accuracy subsection in Section 2 (Related work)]. DeepXML/DeepXML-RE is able to consistently outperform the current state-of-the-art approaches such as DiSMEC, Parabel, and AttentionXML on vanilla metrics, and even more so on propensity-scored metrics. Specifically, DeepXML-RE can be 2.39%, 2.21%, 4.21%, and 4.11% more accurate than Parabel in terms of PSP@1 on WikiSeeAlsoTitles-500K, WikiTitles-500K, AmazonTitles-670K, and AmazonTitles-3M respectively [Table 1; Page 7].\\n\\nC. We agree with the reviewer that Slice can scale to 100 million labels, however with pre-trained features only. As demonstrated in Table 3 [Page 8], pre-trained features with Slice classifier are unable to perform on-par with DeepXML. Unfortunately, Slice loses both accuracy and scalability if features are to be learnt alongside the classifier. Firstly, Slice needs to train the ANNS graph multiple times when features are being constantly updated. Secondly, when the features are pre-trained, the loss function decomposes over the labels and therefore Slice is able to train each of the classifier in parallel [Eq. 1 & 2 in Jain et al. 2019]. This property is no longer true when features are learnt and therefore the source of parallelism and hence efficiency goes away.\\nSpecifically, Slice with feature training [referred to as DeepXML-ANNS in Table 3] was found to be ~2.3% less accurate relative to DeepXML-RE. \\n\\n3. A & B. We would like to thank the reviewer for pointing out the unclear aspects. We have updated the paper to clearly define notation such as X\\u2019, L_h, and method names [Section 5 and Section A.6 (in the appendix)]. \\nC. [Last 2 Paragraphs on Page 4] discuss the highly relevant papers such as fasttext, Slice, etc. 
in detail, whereas the Related Work section covers the relevant approaches at a higher level. Additionally, the same paragraph presents our motivation behind the design choices. \\n\\n4. a) Please refer to the discussion of setting beta under hyper-parameters [point 1 (B) of this response].\\nb) DeepXML does excel over non-deep-learning based methods, especially on short text. However, the accuracy also depends significantly on the following factors: i) number of labels, ii) feature distribution, iii) training points, and iv) label distribution. \\nc) After carefully analyzing the label distribution of the WikiTitles-500K dataset, we observed that 3 labels, such as \\u201cliving people\\u201d, occurred in tens of thousands of documents, and it was sufficient to use multiple representatives for these three labels only. Additionally, the number of clusters was chosen empirically in order to improve recall values [Line 8; Para 2; DeepXML-h; Section 3.1]. \\nThis problem is specific to datasets where some labels are highly diverse [Para 2, DeepXML-h, Page 5] in nature, and that is the case with WikiTitles-500K only [e.g. the \\u201cLiving people\\u201d tag]. Ideally, one could cluster data points and use multiple representatives for every label in L_h. However, this leads to an increase in training time without a significant gain in accuracy.\"}", "{\"title\": \"Response to Official Blind Review #2 [Part 1/2]\", \"comment\": \"We thank the reviewer for the constructive criticism and helpful feedback. We would like to take this opportunity to address the expressed concerns:\\n\\n1. A. We would like to clarify that Q2B-3M is a highly impactful real-world application being used by millions of users around the world (though, unfortunately, personal user queries cannot be released publicly due to privacy and other concerns). When deployed in production in a live flight, DeepXML was found to benefit many users and advertisers even though our search engine already has a large ensemble of state-of-the-art techniques for this task. In particular, DeepXML added more than 67 million new good quality predictions over the ensemble. Furthermore, for 8.6% of queries, DeepXML was able to match the query to a good quality bidded keyword and show an ad whereas none of the algorithms in the production ensemble could. Simultaneously, DeepXML increased the quality of predictions by 2.9% over the ensemble. As a result, this increased revenue per thousand queries by 1.64%, which is highly significant given the overall volume. Thus, to reiterate, DeepXML can lead to large gains in performance for real-world applications impacting millions of users.\\nFurthermore, [for Q2B-3M] DeepXML used the default hyper-parameters determined from small and moderate-sized datasets, mitigating the need for any hyper-parameter tuning.\\n\\nB. As included in Table 5 (in the appendix), DeepXML takes 3 tunable hyper-parameters, namely the threshold to split labels, beta, and the learning rate for DeepXML-h. Other parameters such as the shortlist size (|s|) and the learning rate for DeepXML-t etc. have been chosen to have the same values for all datasets. Please note that only the learning rate for DeepXML-h (only 1 out of 3 tunable parameters) requires re-training the head network, i.e. DeepXML-h, in order to search for the optimal value. \\ni. The proposed approach splits labels based on frequency [Section 4.2 and Section A.2 (in the appendix)].
The frequency threshold for splitting the labels is chosen while keeping the following points in mind: i) The number of head labels must not grow beyond 200K labels. ii) Most of the words/tokens should be covered in the vocabulary for head labels. Please note that the threshold is chosen strictly based on the aforementioned criteria, and hence we train DeepXML only for the chosen threshold frequency, i.e., no re-training is required.\\nii. The beta parameter, which controls the relative weighting of the classifier score and the ANNS score, does not have any impact on the training time. Beta is chosen post-prediction in order to achieve the best precision on the validation set. The impact of beta is already covered in Fig. 3 [in the appendix].\\n\\nC. It seems that we have inadvertently conveyed the misconception that our paper just learns the features on the head and then runs Slice on these fixed features. Doing this in our experiments led to an accuracy drop of 1-2% and a 3x decrease in efficiency. Thus the focus of our paper is to learn apt feature representations and to address these limitations of Slice so as to actually increase the accuracy and improve the efficiency. In particular, the following limitations of Slice were addressed: a) Slice would have needed to re-train the ANNS graph multiple times, as the features are learnt for DeepXML-t as well. DeepXML addresses this limitation by using pre-residual features to train the ANNS and post-residual features to train the classifier. b) Slice would have needed to sample ~3x more labels, leading to a 3x increase in the cost of the classifier during training and prediction.\\n\\nDeepXML makes principled design choices which were required to achieve state-of-the-art accuracy and scalability. The design choices of splitting labels, feature representations, and classifier are motivated in the \\u201cDeepXML, FastText & Slice\\u201d subsection [Page 4] and backed by empirical results in Tables 1 and 2.\\n\\n[Continued below]\"}", "{\"title\": \"Response to Official Blind Review #1 [Part 2/2]\", \"comment\": \"4. a) In the extreme classification community, TF-IDF based BoW features are widely used. We would like to clarify that we have also used the same TF-IDF based BoW features for training/evaluating both DiSMEC/Parabel and DeepXML/DeepXML-RE. Please note that the dataset statistics are already included in Table 4 in the appendix.\\nb) Bi-gram and tri-gram features may lead to improvements in BoW based methods for certain applications. However, bi-gram and tri-gram features can also lead to an increase in training time, prediction time and model size due to the increased number of features. We had experimented with bi-grams and tri-grams and they did not lead to any significant gain in our experiments. Please refer to the following table, which demonstrates results with different numbers of bi-grams for Parabel on AmazonTitles-670K. Here, both uni-grams and bi-grams were included in the vocabulary and hence the features. It should be mentioned that here TF-IDF features are used, similar to the rest of the paper.
\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n#unigrams #bigrams P@1 PSP@1\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n66666 0 38.00 23.10\\n66666 10000 37.76 22.86\\n66666 50000 37.95 22.93\\n66666 100000 38.16 23.12\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\n5. a) DiSMEC could potentially be scaled to AmazonTitles-3M; however, it would require roughly 2 weeks to train on a machine with the following configuration: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz (64 cores).\\nb) AttentionXML could also potentially be scaled to AmazonTitles-3M; however, it would require 150+ hours on a single P40 GPU card.\\nThese training times are an order of magnitude higher than DeepXML's and are prohibitive for real-world applications. Nevertheless, as requested by the reviewer, we are running these experiments and will update the paper with the numbers as soon as they are available.\\nc) We would also like to clarify that we have used the most recent version of AttentionXML (i.e., with shallow trees). The code for this was provided by the authors.\"}", "{\"title\": \"Response to Official Blind Review #1 [Part 1/2]\", \"comment\": \"We thank the reviewer for the constructive criticism and helpful feedback. We would also like to thank the reviewer for recognizing that fewer than a handful of deep learning based approaches have ever been shown to scale to the extreme multi-label classification setting. The reviewer's main criticisms seem to be around the experiments, in which regard we would like to clarify the following:\\n\\n1. As mentioned in the abstract and contributions, the dataset and the code will be made publicly available when the paper is accepted. Furthermore, the raw text (and labels) for 3 of the datasets, namely AmazonTitles-670K, WikiTitles-500K and AmazonTitles-3M, is already publicly available at the Extreme Classification Repository. We have used \\u201ctitles\\u201d from the standard datasets in the Extreme Classification Repository, as suited for short text documents. \\n\\n2. Thank you for recognizing that the focus of the paper is on short text documents, where we clearly demonstrate that DeepXML consistently outperforms the current state-of-the-art methods. Our architecture is specifically designed for short text documents, focusing on accuracy and scalability. In particular, for real-world applications such as Q2B, predictions need to be made in milliseconds on the CPU, and therefore expensive architectures such as those employed in AttentionXML and XML-CNN are unsuitable. \\nNevertheless, for the sake of completeness and as requested by the reviewer, we have experimented with full-text datasets; the results are summarized as follows. We have added these results in Table 8 in the supplementary section as well.\\n\\nA. Amazon-670K\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nMethod PSP@1 P@1\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nDeepXML-RE 30.17 44.78\\nDiSMEC 27.8 44.7\\nParabel 25.43 44.89\\nAttentionXML 30.29 47.58\\nProXML 30.8 43.5\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\nB.
Wikipedia-500K\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nMethod PSP@1 P@1\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nDeepXML-RE 30.16 69.58\\nDiSMEC 31.2 70.2\\nParabel 26.88 68.7\\nAttentionXML 30.85 76.95\\nProXML 33.1 68.8\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\nAs we can clearly see, an LSTM/AttentionXML based architecture does provide a benefit over DeepXML for long-text datasets such as Wikipedia-500K, where a document may contain hundreds (sometimes even thousands) of words.\\nHowever, in our scenario, i.e., short text documents, DeepXML-RE can lead to a gain of 1.0-4.3% in performance (vanilla and propensity-scored precision@1) while being 33-42\\u00d7 faster at training than AttentionXML, which incorporates an attention mechanism for each label.\\n\\n3. A. In extreme classification, propensity-scored metrics are as critical, if not more so, than the vanilla metrics as far as real-life applications are concerned. DeepXML/DeepXML-RE is able to consistently outperform the current state-of-the-art approaches such as DiSMEC, Parabel, and AttentionXML on vanilla metrics and even more so on propensity-scored metrics. The results (Table 1 & Table 2) can be summarized as: i) Vanilla precision/nDCG: DeepXML-RE can be 2.91%, 2.76%, and 1.08% more accurate than Parabel in terms of precision@1 on WikiSeeAlsoTitles-500K, WikiTitles-500K, and AmazonTitles-670K respectively, while being comparable (~0.75% worse) to Parabel on AmazonTitles-3M. ii) Propensity-scored precision/nDCG: DeepXML-RE can be 2.39%, 2.21%, 4.21%, and 4.11% more accurate than Parabel in terms of PSP@1 on WikiSeeAlsoTitles-500K, WikiTitles-500K, AmazonTitles-670K, and AmazonTitles-3M respectively. iii) Additionally, results on the Q2B-3M dataset further strengthen the argument for DeepXML, where DeepXML-RE has been found to be 19% more accurate in terms of P@1 and 21.71% more accurate on PSP@1, relative to Parabel. \\nB. Regarding the utility of DeepXML, we would like to clarify that Q2B-3M is a highly impactful real-world application being used by millions of users around the world (though, unfortunately, personal user queries cannot be released publicly due to privacy and other concerns). When deployed in production in a live flight, DeepXML was found to benefit many users and advertisers even though our search engine already has a large ensemble of state-of-the-art techniques for this task. In particular, DeepXML added more than 67 million new good quality predictions over the ensemble. Furthermore, for 8.6% of queries, DeepXML was able to match the query to a good quality bidded keyword and show an ad whereas none of the algorithms in the production ensemble could. Simultaneously, DeepXML increased the quality of predictions by 2.9% over the ensemble. As a result, this increased revenue per thousand queries by 1.64%, which is highly significant given the overall volume. Thus, to reiterate, DeepXML can lead to large gains in performance for real-world applications impacting millions of users.\\n\\n[Continued below]\"}", "{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"We would like to thank the reviewer for the constructive feedback and kind words regarding the writing. We would like to take this opportunity to address the expressed concerns:\\n\\n1.
0.1L is used to assert that fewer than 10% of the labels are treated as head labels by DeepXML for large datasets such as AmazonTitles-3M and Q2B-3M. The proposed approach does indeed split labels based on frequency [Section 4.2 and Section A.2 (in the appendix)]. The frequency threshold for splitting the labels is chosen while keeping the following points in mind: i) The number of head labels must not grow beyond 200K labels. This is important for practical purposes, as the head network, i.e. DeepXML-h, is trained with a fully connected layer (classifier), and increasing the number of labels can lead to an increase in training time. For instance, DeepXML-fr, which considers all labels as head labels, can be 3-5x slower to train than DeepXML [Table 1; Page 7]. ii) Most of the words should be covered in the vocabulary for head labels. Note that word embeddings are not updated for the tail network. Hence, the network needs to learn word embeddings for all words in the vocabulary from only the head labels. Additionally, the exact numbers of head and tail labels for the various datasets are included in Table 5 in the appendix.\\n\\n2. Inference in the tail network, i.e. DeepXML-t, relies on the following components: i) the ANNS graph, which returns the indices and scores (cosine similarity) corresponding to the |s| most probable labels for a novel test point; ii) a classifier, which is evaluated (dot product) only on the most probable labels returned by the ANNS graph. Hence, the final score is a combination of the ANNS score, i.e. $\\hat{y}_{anns-t}$, and the classifier score $\\hat{y}_{clf-t}$, as included in Equation 1 in Section 3.2. This approach brings down the classifier cost to $O(d \\log(|L_t|) + d|s|)$ from $O(d|L_t|)$. Please note that $|s|$ is kept at 300 for all the datasets.\\n\\n3. The head network, i.e. DeepXML-h, is trained with a fully connected output layer. However, in order to meet the low-latency constraint of real-life applications, an ANNS structure is trained post-training (for DeepXML-h), which brings down the inference cost of the classifier to $O(d \\log(|L_h|) + d|s|)$ from $O(Nd|L_h|)$ [Line 7; Para 1; DeepXML-h; Section 3.1].\\nHowever, trivially training an ANNS leads to poor recall values on head labels, i.e., some of the true labels didn't appear in the ANNS shortlist [Para 2 in 3.1 (DeepXML-h)]. Please note that if a label (say $l_i$) doesn't appear in the shortlist (for a novel test instance $x_j$), then the classifier will not be evaluated for $l_i$ and hence the network will not be able to predict it for $x_j$. This problem mainly occurs for labels with highly diverse contexts, such as \\\"Living People\\\" in WikiTitles-500K. DeepXML tackles this problem by allowing multiple representatives for the aforementioned labels, resulting in a 5% increase in recall@300 and a 6% increase in precision@300 with a shortlist of size 300 [Line 8; Para 2 in 3.1 (DeepXML-h)].\\n\\n4. PSP@k [Jain et al. 2016] is a standard metric used in extreme classification. We have added the definitions in A.6 in the supplementary section for completeness. Please note that predicting rare tail labels accurately is much more rewarding than predicting common and obvious head labels in extreme multi-label learning [please refer to the \\\"Tail labels\\\" subsection under the Introduction (Section 1)]. Hence, PSP@k, which focuses more on tail labels, is critical to real-world applications such as matching user queries to advertiser bid phrases, i.e., Q2B-3M. \\n\\n5.
As mentioned in the abstract and contributions, the dataset and the code will be made publicly available when the paper is accepted. Furthermore, the raw text (and labels) for 3 of the datasets, namely AmazonTitles-670K, WikiTitles-500K, and AmazonTitles-3M, is already publicly available at the Extreme Classification Repository. We have used \\\"titles\\\" from the standard datasets in the Extreme Classification Repository, as suited for short text documents.\\n\\n6. Some of the acronyms such as PLT (Jasinska et al., 2016) and ANNS (Jain et al., 2019) are taken from the respective papers. We have defined all acronyms on first use. Thanks for pointing this out.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a new algorithm for extreme multi-label classification. The focus of this paper is on being able to handle short documents, with all experiments focussed on matching user queries to advertiser bid phrases. The key novelty in this paper is to split the labels into two buckets: head labels and tail labels. The model learns word embeddings on the head labels + a classifier on top of those embeddings. For the tail labels, the embeddings from the head labels are used as the input for another classifier which is trained on only the tail.\", \"my_thoughts_on_the_paper\": [\"I really like the paper write-up and its succinctness (in most sections; some comments below).\", \"Small nit: why choose head labels as 0.1L? I would have expected a more natural choice to be based on frequency?\", \"Figure 1 (and explanation in section 3.1): I understand the head setup completely (although it seems to be missing the ANNS). For the DeepXML-t part, I am not very clear from the picture nor the explanation how the ANNS feeds into the weights and how that leads to the label \\\\hat{y}_{clf-t}?\", \"Section 3.1, DeepXML-h section: the bit after \\\"Additionally ...\\\" until the \\\"DeepXML-t\\\" section is unclear. I think this needs to be explained better.\", \"I might have missed it, but is the PSP metric defined explicitly somewhere?\", \"The experiment section contains lots of baseline comparisons; unfortunately not all on publicly available datasets.\", \"The paper uses a large number of acronyms which are defined sometimes after their first use, sometimes never: i.e. PLT, ANNS.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper considers extreme multi-label learning (XML), where the label size is very large. It aims to improve the accuracy as well as the scalability of XML algorithms, especially for short text inputs. The accuracy of XML with short text inputs can be significantly improved by using deep learning representations rather than TF-IDF features. This paper proposes several tricks to handle the issue of efficiently learning both neural network parameters and classification weights for an extremely large number of labels.
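Points 2-3 of the author response above (the Response to Official Blind Review #3) describe shortlist-based tail inference precisely enough to sketch. The convex `beta` blend is an assumption about the form of their Equation 1, and `anns_index.query` is a hypothetical stand-in for the ANNS structure, not a real API.

```python
import numpy as np

def shortlist_inference(x, anns_index, classifiers, beta=0.5, s=300):
    """Hedged sketch of DeepXML-t style prediction: an ANNS query shortlists
    the |s| most probable labels with cosine scores, and per-label linear
    classifiers are evaluated only on that shortlist.
    x: (d,) document embedding; classifiers: dict label -> (d,) weight vector."""
    labels, anns_scores = anns_index.query(x, k=s)      # ~O(d log L) shortlist
    scores = {}
    for l, a in zip(labels, anns_scores):
        clf = float(np.dot(classifiers[l], x))          # O(d) per shortlisted label
        scores[l] = beta * clf + (1.0 - beta) * a       # assumed form of Eq. 1
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The stated cost then reads off directly: the logarithmic term comes from the ANNS query and the d|s| term from the shortlisted dot products, instead of d|L_t| for a dense output layer over all tail labels.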
The proposed method borrows ideas from several previous works, and is mainly based on SLICE, where a fixed pre-trained deep learning representation of the inputs is used with an ANNS graph over labels to improve scalability. The main difference is that instead of using a fixed input embedding, the proposed method learns the word embedding parameters via a set of head labels. The remaining labels are then trained using SLICE with fixed word embeddings from the learned word embedding model.\\n\\nOverall, the paper tackles the problem well, and the empirical results show improvements. However, I don't think this paper is ready for publication due to the following concerns. \\n\\n1. My main concern is that the proposed method seems to be a combination of a number of tricks. This makes the overall algorithm/model very complicated and introduces a lot of hyper-parameters, for example, the head label portion, L_h', c, beta, s, the neural network hyper-parameters, and so on. Hence, it will be hard to use in real applications. \\n\\n2. Another concern is about the experiments. \\n a. The most significant improvement of the proposed method over existing methods happens on the private dataset, Q2B-3M, which can't be reproduced. \\n b. On the public datasets, DeepXML seems to show good results on the small datasets, WikiSeeAlsoTitles-350K and WikiTitles-500K, while on the large datasets, DeepXML's performance is close to that of the existing methods. \\n c. The largest label size in the experiments is 3M. SLICE can be scaled up to 100M labels. \\n\\n3. The writing and the organization of this paper need to be improved. \\n a. Some notations are not clearly defined. For example, L_h in Line 6 and X' in Line 9 on Page 5. \\n b. Several method names are not defined. For example, DeepXML-fr, AttentionXML-l, Sliced-CDSSM, DeepXML-SW, DeepXML-f. I have to guess what they are. \\n c. The last two paragraphs on Page 4 seem to be related work, while there is a section called \\\"Related work\\\".\", \"other_minor_comments\": \"1. It seems it is not stated how Beta is set. \\n2. I am wondering if it's true that the shorter the input text is, the better the improvement DeepXML can achieve over non-deep-learning methods.\\n3. In the first paragraph of Sec 3.1, it is mentioned \\\"clustering just the top 3 labels into 300 clustering\\\". Why choose 3 and 300? Are these numbers used for all datasets?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a deep learning method for extreme classification and applies it to the application of matching user queries to bid phrases. The main idea is to learn the deep models separately for head and tail labels: since there is abundant training data for the head labels, the learnt word embeddings are transferred to the network for the tail labels, while keeping the head network fixed.\\n\\nOn the positive side, given that there are relatively few successful approaches for deep learning in extreme classification, the main contribution of the paper is towards making an attempt at this goal.\\n\\nHowever, since the paper is mostly empirical in nature and based on algorithmic implementation, the experimental evaluation does not seem quite convincing for the following reasons:\\n\\n1. Firstly, all the datasets used in the work are private and not publicly available.
This is quite in contrast to all the various works in this community, which use publicly available data and code.\\n\\n2. It is not clear why the authors did not evaluate their approach on the \\\"standard\\\" datasets from the Extreme Classification Repository http://manikvarma.org/downloads/XC/XMLRepository.html. Though it is clear that the focus of the paper is on short text classification, it is important to evaluate what happens when that is not the case. Does the method also work well on longer training/test instances, as there is no immediate reason for it not to? Or is it that other methods outperform it in that scenario?\\n\\n3. The performance of the proposed method DeepXML is not significantly better than Parabel. For instance, on two of the four datasets in Table 1, Parabel gives the same predictive performance with an order of magnitude less training time and much lower prediction time. This raises the question of the utility of the proposed approach.\\n\\n4. Related to the above is the impact of data pre-processing for different methods. DeepXML seems to use tf-idf weighted word embeddings while other methods use a simple BoW representation. It is possible that using a similar data representation, or a combination with bigrams/trigrams, might also improve the performance of Parabel and DiSMEC, since it is known from short text classification that using this information can improve performance (https://www.csie.ntu.edu.tw/~cjlin/papers/libshorttext.pdf).\\n\\n5. Lastly, it is unclear why AttentionXML and DiSMEC are shown to be non-scalable for Amazon3M when they have been evaluated on bigger versions of the same datasets in other works. Also, it might be noted that the latest version of AttentionXML can be combined with shallow trees for efficient scaling.\"}" ] }
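Finally, since PSP@k carries much of the accuracy argument in the responses above, and Review #3 asked where it is defined, here is a minimal sketch of propensity-scored precision@k in the spirit of Jain et al. (2016), the formulation the responses cite. The propensity estimates are inputs, and the best-achievable normalization is an assumption of that standard formulation rather than anything verified against this paper's appendix A.6.

```python
import numpy as np

def psp_at_k(scores, y_true, propensities, k=1):
    """Hedged sketch of PSP@k. scores: (L,) predicted label scores;
    y_true: (L,) binary relevance; propensities: (L,) estimated propensity
    p_l of each label being observed. Rare labels (small p_l) weigh more."""
    topk = np.argsort(-scores)[:k]
    gain = float(np.sum(y_true[topk] / propensities[topk])) / k
    # normalize by the best achievable top-k so the score is at most 1
    best = np.argsort(-(y_true / propensities))[:k]
    norm = float(np.sum(y_true[best] / propensities[best])) / k
    return gain / norm if norm > 0 else 0.0
```

Because a correct rare label contributes 1/p_l rather than 1, a method that only predicts common head labels scores noticeably lower, which is why the rebuttals emphasize PSP@1 alongside vanilla P@1.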