forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---
B1erJJrYPH | Optimizing Loss Landscape Connectivity via Neuron Alignment | [
"N. Joseph Tatro",
"Pin-Yu Chen",
"Payel Das",
"Igor Melnyk",
"Prasanna Sattigeri",
"Rongjie Lai"
] | The loss landscapes of deep neural networks are poorly understood due to their high nonconvexity. Empirically, the local optima of these loss functions can be connected by a simple curve in model space, along which the loss remains fairly constant. Yet, current path finding algorithms do not consider the influence of symmetry in the loss surface caused by weight permutations of the networks corresponding to the minima. We propose a framework to investigate the effect of symmetry on the landscape connectivity by directly optimizing the weight permutations of the networks being connected. By utilizing an existing neuron alignment technique, we derive an initialization for the weight permutations. Empirically, this initialization is critical for efficiently learning a simple, planar, low-loss curve between networks that successfully generalizes. Additionally, we introduce a proximal alternating minimization scheme to address whether an optimal permutation can be learned, with some provable convergence guarantees. We find that the learned parameterized curve is still a low-loss curve after permuting the weights of the endpoint models, for a subset of permutations. We also show that there is a small but steady gain in the performance of ensembles constructed from the learned curve when considering weight space symmetry. | [
"deep learning",
"optimization",
"non-convex optimization"
] | Reject | https://openreview.net/pdf?id=B1erJJrYPH | https://openreview.net/forum?id=B1erJJrYPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ovskqci4JH",
"Hkl1Cp92sH",
"rJgY7pchjS",
"H1ev9n9hoS",
"BJxF-hchiH",
"H1e-Yoqhor",
"r1gr0cc3jr",
"BJxTPBxM9S",
"SylrU09ptS",
"SJeZKdchKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724060,
1573854663385,
1573854497237,
1573854351325,
1573854208597,
1573854072542,
1573853901202,
1572107621232,
1571823181024,
1571756153378
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1466/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1466/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1466/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the loss landscape of neural networks by taking into consideration the symmetries arising from the parametrisation. Specifically, given two models $\\\\theta_1$, $\\\\theta_2$, it attempts to connect $\\\\theta_1$ with the equivalence of class of $\\\\theta_2$ generated by weight permutations.\\nReviewers found several strengths in this work, from its intuitive and simple idea to the quality of the experimental setup. However, they also found important shortcomings in the current manuscript, chief among them the lack of significance of the results. As a result, this paper unfortunately cannot be accepted in its current form. The chairs encourage the authors to revise their work by taking the reviewer feedback into consideration.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for their feedback and their time. We appreciate the reviewer sharing typos and inaccuracies with us. In our revised paper, we have corrected for those we immediately agree with. For others, we address them below.\", \"pam\": \"Please see our general comment discussing the role of PAM in this paper.\\n\\n4. We have revised the section on the background for Neuron Alignment. We have focused on increasing readability and providing references for the linear assignment problem. Our notation is consistent with Li et al. (2016) with some alterations made for the purpose of clarity.\\n\\n7. We have provided more detail in the proof in Section D.1. This mainly includes more definitions related to showing that the objective function satisfies the Kurdyka-Lojawiesics property. We believe that the proof should be much easier to follow now. \\n\\nOnce again, we would like to thank the reviewer for their time.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We would like to thank the reviewer for their feedback and their time.\\n\\n1. Please see our general comment on the motivation and contributions of this work. The other main empirical result of this paper concerned ensembling performance. Please see our general comment addressing our ensembling results.\\n\\n2. Please see our general comment discussing the role of PAM in this paper. \\n\\n3. We apologize for lack of detail in places. We now provide additional detail in the Experiment section on the methodology for learning the curves. We have fixed the lack of references to figures in the appendix. Also, we have anonymously shared the code with all reviewers and the AC in a private comment. Once again, we thank the reviewer for their time.\"}",
"{\"title\": \"On the Role of PAM\", \"comment\": \"First, we apologize that our reasoning for including PAM into this paper was not clear to the reviewers. To this end, we have revised the paper to eliminate the perceived disconnect of PAM from the rest of the paper. We have also worked to make the focus of this work more concise to the reviewer. Listening to the reviewers, we have made non-minor revisions to this work. To aid reviewers, we have highlighted key differences/additions in the new text.\\n\\nWe also include updated results in the revised version of this paper. The objective function used in PAM was modified so that the control point of the Bezier curve is a function of the permutation $P$ (See Equation 8 of the revised paper). The updated results can be found in the revised Table 2. We have introduced PAM results for the ResNet32 architecture, and a new architecture introduced in the revised paper called TinySix. We note in the paper that it was too computationally intensive to train curves using PAM for VGG16. We find that the PAM Unaligned case now outperforms the Unaligned case. In this new implementation, PAM displays a performance benefit in its own right. \\n\\nOverall, the reason for including PAM in this work is to provide a a theoretical framework for addressing the generalized optimization problem in Equation 5 through a rigorous method. Part of what makes this approach rigorous is our ability to establish convergence guarantees for a subset of \\\"nice enough\\\" neural networks. We note that establishing convergence for alternating minimization methods is typically nontrivial. The PAM Aligned case gives us an upper bound for judging the performance of the Aligned case. Since the performance of these two methods is comparable, this suggests that the aligned permutation $P_{Al}$ is already close to a locally optimal permutation. This is ideal as the aligned curves are much less expensive to compute than the PAM Aligned curves. Then we have an approximation method to solve curve finding up to symmetry for large cases like VGG16 where PAM becomes computationally infeasible. We strongly emphasize this motivation in the revised paper.\"}",
"{\"title\": \"Addressing Ensembling Performance\", \"comment\": \"The results suggest that the performance increase upon ensembling is significant for simple architectures, which correspond to TinyTen in our original experiments. To further highlight the strength of ensembling on the aligned path for underparameterized networks, we introduce the TinySix architecture to the paper. It is a a modified version of TinyTen with 4 layers removed. The results are consistent and in line with earlier literature (Ju et al. 2018), reporting more evident performance gain from ensembling for simpler architectures. It is noteworthy that the averaging technique used in the present study is the same as used in (Garipov et al. (2018)). We do forgo analyzing more complicated ensemble methods, which is a common practice even for papers with a main focus on ensembling. While the improvement may seem marginal as some of the reviewers have stated, it should be noted that the magnitude of the accuracy increase is similar to what was reported in (Garipov et al. (2018)), when their Fast Geometric Ensemble method was compared to the earlier Snapshot ensembling in (Huang et al. (2017)). This observation highlights the significance of neuron alignment for better ensembling, which is now made clear in the paper.\"}",
"{\"title\": \"Motivation and Contributions\", \"comment\": \"We address the motivation of this work in the second and third paragraphs of Introduction. The study of finding optimal curves between models, also known as mode connectivity, has been of recent interest in the deep learning community (Freeman & Bruna (2016); Garipov et al. (2018); Gotmare et al. (2018)). The parameterization of the models may still contain a weight ambiguity, i.e., the neurons in the same positions of different models but same architecture may not correspond to each other. Consequently, the curves connecting the models could fail to interpolate similar feature representations, caused by these so-called barriers, which in turn could break a critical structure in the interpolated networks. To this end, we want to know to what extent barriers between neural networks along optimal curves are truly just artifacts of symmetry. Understanding this problem could provide insight into the dynamics of training a neural network. Then explicitly, we are interested in finding low loss curves between networks up to symmetry, to better understand the loss landscape.\\n\\nWe reiterate the contributions summarized in Introduction. We formally generalize the curve finding problem to account for permutation ambiguity. We introduce a rigorous framework for solving this problem theoretically, known as proximal alternating minimization (Attouch (2010)). Establishing convergence for a nice subset of networks is a part of what makes it rigorous. We introduce neuron alignment as an inexpensive heuristic for approximating this permutation. Empirically, we show that the consideration of this alignment is critical for learning nearly flat loss curves which generalize better. PAM allows us to see that the permutation from neuron alignment is near the local optimal permutation. Lastly, we see a modest to notable improvement to ensembling, that is particularly notable for underparameterized models. We have added results from the new and updated experiments as well as additional text to clarify the significance of our results. Additionally, we now reference Appendix F in the main body of the text. This appendix gives insight into how the curve finding algorithm is related to neuron alignment.\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"We would like to thank the reviewer for their feedback and their time.\\n\\nRegarding the main qualms, please see our general comment on the motivation and contributions of this work. \\n\\nRegarding contribution 2, please see our general comment discussing PAM. \\n\\nRegarding contribution 3, we are interested in understanding the loss landscape of trained neural networks by exploring mode connectivity. In the context of this paper, training the curve can be seen as training an ensemble for a very low cost. Our results show that the mode connectivity via neuron alignment provides a method to identify models on the path that have low training loss as well as high accuracy and show a modest improvement in accuracy upon ensembling. As mode connectivity is an active research topic, we believe others could find this faster and efficient path training useful if their work does not require fixed symmetry.\\n\\nRegarding contribution 4, please see our general comment on the significance of the ensembling results.\\n\\nAgain, we thank the reviewer for their time and comments. Additionally, we appreciate the comments on the experimental methodology of this paper. We hope that we have resolved much of the misunderstanding regarding the motivation and contributions of this work.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper combines neuron alignment with mode connectivity. Specifically, it applies paths neuron alignment to the calculation of mode-connecting and empirical results show that alignment helps in finding better curves.\\nCombining neuron alignment with mode connectivity seems to be a good idea, but the message the authors want to convey is somewhat vague. Some key details are not presented clearly, and some contents seem to be irrelevant. The following are some detailed comments and questions:\\n1. One main contribution of this paper is the observation that the observation that alignment helps in finding better curves. An observation is excellent if it brings significant performance improvements in practice, or if it brings deep insights in the understanding of the field. However, for the former, the improvement in the performance is not that much; for the latter, there is hardly any insight conveyed by this paper. Therefore, this observation itself is not strong enough.\\n2. One contribution of this paper is applying proximal alternating minimization (PAM) when optimizing the parameters and proving its convergence. Nonetheless, PAM is only used in one model (TinyTen) and does not bring any improvement in the performance. It seems that there is no point in applying PAM and the contents related to PAM are all somewhat irrelevant.\\n3. Usually sufficient details help in good understandings, but in this paper, some key details are unfortunately missing. For example, in Algorithm 2, details on the optimization step is not clear: what is the optimization method the authors use other than PAM? Also, no comments are addressed on Figure 7 to Figure 10, either in the main body or in the appendix. I would like to see more explanations details. If the source code is provided, it will be better.\\nIn sum, the idea seems to be interesting, but the overall quality of the paper is still yet to be improved, and some key details need to be addressed more clearly before it can be accepted as a qualified submission.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper investigates the connection between symmetries in the neural network architectures and the loss landscape of the corresponding neural networks. In the previous works, there was shown that the two local minima of a neural network can be connected by a curve with the low validation/train loss along the curve. Despite the loss on the curve being close to the loss at the end points, there are segments of the curve on which the loss in higher than loss at the local minima. To overcome this problem, the authors proposed two-step procedure: 1. Align weights between two neural networks 2. Launch the path finding algorithm between aligned weights. In other words, the authors proposed to connect not original local minima but local minima that parametrize the same functions as the original ones, but have a different order of neurons. The authors also proposed PAM algorithm where they iteratively apply path finding algorithm and weights alignment algorithm.\\n\\nNovelty and significance. The idea to combine the symmetrization of NN architectures with path finding algorithms is new to the best of my knowledge. Experimentally, the authors showed that ensembling the midpoint and endpoints of the curve found via path finding algorithm coupled with the neural alignment algorithm delivers a better result than simple averaging of three independent networks. This is a new and significant result, since before the ensembling of points along the curve had the same quality as the ensemble of three independent networks or marginally better. \\nThe weak side of the paper is the PAM algorithm that occupies a significant part of the paper and does not deliver a significantly better result than the simple application of the neural alignment procedure before launching the path finding algorithm.\\n\\nClarity. Overall, the related work section contains all relevant references to the previous works to the best of my knowledge. The paper is well written, excluding the section Neuron Alignment that lacks notation description.\", \"the_paper_contains_several_typos_and_inaccuracies\": \"1. \\u201cHowever, We find its empirical performance is similar to standard curve finding, and we advocate the latter for practical use due to computational efficiency.\\u201d The word \\u201cWe\\u201d should start with the lowercase letter.\\n2. In the sentence \\u201cThe first question to address is how to effectively deal with the constraints in equation 6\\u201d the index i should be replaced with the index l.\\n3. Notation \\u03a0|Kl| introduced after equation 5.\\n4. In the section describing neuron alignment algorithm Lie et al. [1] used a different notation. So I would recommend to further extend this section and add all necessary notations. Also, I would recommend to add a direct link to the paper where the problem is described in matrix form.\\n5. In the Algorithm 1 \\u201cInitialize P \\u03b82 := [W\\u02c6 2 1 ,W\\u02c6 2 2 , ...,W\\u02c6 2 L ] as [W2 1 ,W2 2 , ...,W2 L ] for k \\u2208 {1, 2, ..., L \\u2212 1};\\u201d k is not used anywhere in notation.\\n6. In Figure 3, \\u201cmodel 2 aligned \\u201d sing is out of the plot box for ResNet-32 and VGG-16 architectures.\\n7. 
The appendix contains the sketch of the proof that is quite difficult to follow. I would recommend giving all necessary definitions as it is done in the [2] and extend the proof. \\n\\nOverall, it is quite an interesting paper but it contains some drawbacks.\\n[1] Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E Hopcroft. Convergent learning: Do different neural networks learn the same representations? In ICLR, 2016\\n\\n[2] H\\u0301edy Attouch, J\\u0301er\\u02c6ome Bolte, Patrick Redont, and Antoine Soubeyran. Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the kurdyka-\\u0142ojasiewicz inequality.Mathematics of Operations Research, 35(2):438\\u2013457, 2010\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Given two parameters theta_1 and theta_2 of the same architecture, the authors propose to learn a minimal loss curve between theta_1 and P theta_2, where P is a permutation matrix yielding another equivalent parameterization of the same network. The authors show that either by initializing P with neuron alignment or by optimizing P jointly with the curve, one can find a parameter-space with better accuracy and loss than by just learning a curve between theta_1 and theta_2. The authors also show that initializing P is sufficient to obtain these gains, avoiding the complexity of also optimizing for P. Furthermore, they show that ensembles across models from these curves have a very mild gain in accuracy to those of non-aligned curves.\\n\\nThe main qualm I have about this paper is about the significance of the contributions and the motivation.\\nAt the core, the authors propose to find a curve between theta_1 and P theta_2 where P comes from aligning theta_1 and theta_2 as in (Li et al. 2016.). This is lacking almost any motivation, or discussion on what problem they are trying to solve. Are they trying to find ensembles with lower error? If that is the case, well the results are evidence of a negative result in this respect, which is okay but given how ad-hoc and poorly motivated the method is to that objective it's not much of a contribution. Are they trying to better understand theoretically the loss landscape of neural networks? I don't think there's any theoretical gain in that regard from this paper either. They show that the curves between aligned networks are better, but they don't show how this relates to anything else in the published literature or open questions in the theoretical deep learning field.\\n\\nRegarding contribution 2, PAM is in the end shown to not converge to better curves than simply initializing with alignment. Also, doesn't the convergence result to a critical point also apply for standard descent methods? The convergence theorem doesn't seem to be much of a contribution in my opinion, either to the optimization or the deep learning community.\\n\\nRegarding contribution 3, I agree that better curves can be learned faster, but why is this a meaningful contribution? What problem that the deep learning or the theoretical machine learning cares about does this help solve?\\n\\nRegarding contribution 4, as the authors themselves admit, the improvement is very dim in comparison to non-aligned curves, and comparisons to other ensemble methods are not present.\\n\\nI want to acknowledge that while the motivation and contributions are in my opinion dim, I find the experimental methodology of this paper very solid. All the claims are to my extent very well validated with experiments, and the experiments are in general well designed to validate those claims. My problem is sadly not with the validity of the claims but with their significance.\"}"
]
} |
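The record above centers on weight-space permutation symmetry: reordering a layer's hidden units, together with the matching rows and columns of the adjacent weight matrices, yields a functionally identical network, and curve finding between theta_1 and P theta_2 searches over exactly this equivalence class. A minimal sketch of that invariance on a toy two-layer MLP (all names here are illustrative, not from the paper's code):

```python
# Toy demonstration of the weight permutation symmetry discussed above:
# reordering hidden units, with a consistent reordering of the adjacent
# weight matrices, leaves the network function unchanged.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)   # input -> hidden
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)    # hidden -> output

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # elementwise ReLU commutes with permutation
    return W2 @ h + b2

perm = rng.permutation(16)     # a permutation P of the 16 hidden units
W1p, b1p = W1[perm], b1[perm]  # permute rows of W1 and entries of b1 ...
W2p = W2[:, perm]              # ... and the matching columns of W2

x = rng.normal(size=8)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```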
BJe4JJBYwS | CROSS-DOMAIN CASCADED DEEP TRANSLATION | [
"Oren Katzir",
"Dani Lischinski",
"Daniel Cohen-Or"
] | In recent years we have witnessed tremendous progress in unpaired image-to-image translation methods, propelled by the emergence of DNNs and adversarial training strategies. However, most existing methods focus on transfer of style and appearance, rather than on shape translation. The latter task is challenging, due to its intricate non-local nature, which calls for additional supervision. We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features. Specifically, we leverage VGG, which is a classification network, pre-trained with large-scale semantic supervision. Our translation is performed in a cascaded, deep-to-shallow, fashion, along the deep feature hierarchy: we first translate between the deepest layers that encode the higher-level semantic content of the image, proceeding to translate the shallower layers, conditioned on the deeper ones. We show that our method is able to translate between different domains, which exhibit significantly different shapes. We evaluate our method both qualitatively and quantitatively and compare it to state-of-the-art image-to-image translation methods. Our code and trained models will be made available. | [
"computer vision",
"image translation",
"generative adversarial networks"
] | Reject | https://openreview.net/pdf?id=BJe4JJBYwS | https://openreview.net/forum?id=BJe4JJBYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CflqCeKKIW",
"ryejuILOjH",
"BklJnxazsr",
"HJxGNlaGjS",
"HJgmd1aGoH",
"SygzVJpMiS",
"BJgbvGF-cH",
"Skl0SYi6Fr",
"rJljMjEctH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724029,
1573574258996,
1573208230801,
1573208105692,
1573207914869,
1573207850258,
1572078169337,
1571825990055,
1571601170642
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1465/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1465/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1465/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper addresses image translation by extending prior models, e.g. CycleGAN, to domain pairs that have significantly different shape variations. The main technical idea is to apply the translation directly on the deep feature maps (instead of on the pixel level).\\nWhile acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns:\\n(1) ill-posed formulation of the problem and what is desirable, (2) using fine-tuned/pre-trained VGG features, (3) computational cost of the proposed approach, i.e. training a cascade of pairs of translators (one pair per layer). \\nAC can confirm that all three reviewers have read the author responses. AC suggests, in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"Dear reviewers,\\nWe implemented several of your suggestions and comments, in a revised version of the paper (uploaded), and plan to continue to address the others.\", \"we_briefly_summarize_the_changes_in_this_revised_version\": [\"Figure 1 caption explained better, as well as modifications to Figure 3 caption.\", \"Emphasis on our conceptual novelty w.r.t. other UNIT methods.\", \"Additional ablation study:\", \"FID comparison of different layers (layers 3,4,5) translations for 3 datasets in Table .1.\", \"Qualitative comparison to a fine-tuned VGG network, for the zebra and giraffe dataset (Figure 9).\", \"Qualitative comparison to AlexNet (pretrained on ImageNet), for the zebra and giraffe dataset (Figure 9).\", \"We strongly believe that we propose a conceptually novel approach for large geometric deformations image-to-image translation.\", \"If you have any further questions or suggestions, please do not hesitate to let us know.\", \"Thank you again for your time and comments,\", \"The Authors\"]}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for taking the time to review our paper and for your thoughtful suggestions and questions.\\n\\n--- cycle consistency to the deeper layers rather than at the pixel level. Are there other methods which do this ?\\nWe are not aware of any image to image translation methods, translating deep features directly or applying cycle consistency on pertained features. For completeness, as we point out in the related work, LOGAN (Yin et al.) pretrain an autoencoder, for point clouds, to create an overcomplete latent space. Then, translation is achieved between point clouds encoded by vectors in this latent space. \\nHowever, the deep features that we use, are known to capture higher level semantics, as they were extensively trained for classification tasks. The key contribution of our work is to leverage the power of these deep features for the UNIT task. \\n\\n--- No comparison to TransGaGa is provided \\nUnfortunately, the authors of TransGaGa are unable to publicly release their code, thus we are unable to compare our method to them. However, as acknowledged by the author of TransGaGa, both in the Appendix and in a private communication, it is robust to the single foreground and somewhat simple background. But, when the datasets are wild, busy and collected in unconstrained environments, TransGaGa fails.\\n\\n--- I think the paper could be improved in explaining the conceptual novelty of the paper (especially with respect to GANimorph and TransGaGa).\\nWe will try to explain this point better.\\nOur conceptual novelty, can be viewed as a transfer learning for image translation as we are translating high level semantics, encoded in the deeper layers of a pre-trained classification network, a.k.a deep features. This is in contrast to existing UNIT methods, which learn to translate the images directly. GANImorph introduces architectural changes to cycleGAN, enabling higher deformation, and, as we show, we outperform this approach. TransGaGa, on the other hand, assumes intra and inter geometry consistency across the domains. This enables TransGAGA to disentangle geometry from appearance, in an unsupervised manner, and translate both separately (but this only works in limited scenarios, as explained earlier).\\n\\n--- An ablation study should be added (in FID scores) for a single layer (3,4, or 5).\\nWe will add such an ablation in our revision.\\n\\n--- Would it be possible to not use pertained feature from VGG-19 ? \\nVGG19 is well suited for feature extraction as it gradually reduces the spatial dimension of the input while at the same time increasing the channel dimension. It has been shown in many previous works, that VGG19 extracts meaningful high level semantics, useful for style transfer, image analogy, etc. However, for the sake of completeness we can test and report the ability to use other feature extraction networks, such as AlexNet or inception network.\\n\\n--- The authors could add some text on the lack of diversity for the translations in the limitation section.\\nYes, the results are deterministic, similar to cycleGAN, this is indeed a limitation currently, common to UNIT methods. It would be indeed interesting to investigate how stochasticity may be added, as in many to many image translation methods, perhaps even to each layer translation separately.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for taking the time to review our paper and for your thoughtful suggestions and questions.\\n\\n--- Some figures are hard to understand without looking at the text\\n---The \\u201cCoarse to fine conditional translation\\u201d\\u2026 I suggest mentioning it in previous sections for easy understanding.\\nWe will revise the captions of Figure1 and Figure3 to be more self-contained. The translation process is also explained in the introduction and at the beginning of the methods, and we will stress more the coarse to fine conditional translation.\\n\\n--- As to the t-SNE visualization in Figure 9, different methods seem to use different N-D to 2-D mapping functions. This may lead to an unfair comparison.\\nAs common for domain adaptation tasks, we calculate the t-SNE based on the source, target and translated features together. \\n\\n--- Finetuning VGG on the two domains or training an auto-encoder on the two domains.\\nFine-tuning VGG features is an interesting idea, which we did try for some of the datasets. However, we noticed this to produce slightly visually inferior results. This might be attributed to fixating on the exact differences between zebra and giraffe: scale, poses, and even the background.\\nThe use of an autoencoder, while enabling self-supervised semantics extraction, we believe, will struggle to achieve high quality semantics as successfully as VGG pertained on ImageNet. If requested, we could experiment with an autoencoder based on VGG architecture and report the results.\"}",
"{\"title\": \"Additional comment\", \"comment\": \"--- \\\"clamping is potentially a harmful irreversible operation\\\" but that harmful results were not observed.\\nWe noted here that this operation is irreversible, thus, we should be careful when using it. However, for a specific domain, the inverter networks (from features to image) was able to overcome such clamping process at least visually. We can measure and report explicitly the reconstruction loss from features to image space, with and without clamping, though visually we did not notice such difference.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for taking the time to review our paper and for your thoughtful suggestions and questions.\\n\\n--- The problem itself is ill-posed. \\nGenerally UNIT (unpaired image translation) is an ill-posed task: what should a real image look like when translated to a Monet painting? One can imagine the outcome, yet there\\u2019s no precise definition. We would argue that the degree of ill-posedness depends on the domain. In the case of animal to animal translation, you would expect the result to contain a realistic looking animal with the same:\\n 1. Semantic parts of an animal (i.e. head of a zebra to head of a giraffe). That also includes translating the correct amount of instances (i.e. 2 zebras to 2 giraffes). \\n 2. Location and scale of the objects (i.e. a small zebra at the left corner of the image should translate to small giraffe at the same location). \\n 3. Pose \\n 4. Background\\nWhile we mostly succeed at 1-3, preserving the background is indeed problematic when translating deep features. However, this is common in UNIT. While shape non-deforming methods, such as cycleGAN, might not change the structures in the background, the color/style is typically changed. Shape deforming methods exhibit changes in both style and geometry of the background, see for example the recently proposed TransGaGa.\\nWhile we made some preliminary attempts to incorporate an attention mechanism, the results were unsatisfactory, and we therefore stopped pursuing this direction, as we felt it to be outside the main focus of this work.\\n\\n--- One question I had, for example: could we be getting similar results if we used the VGG bottleneck as the noise vector in an InfoGAN?\\nUsing a different architecture instead of cycleGAN for unpaired deep feature translation is indeed interesting. Could the reviewer please elaborate exactly how did he envision here the use of infoGAN? Is the noise composed of a domain-part (i.e., zero or one with p=0.5 for each) and the VGG bottleneck features instead of the \\\"traditional\\\" noise?\\nRegardless, directly inverting the bottleneck is difficult. We refer the reviewer to the results of the inversion network proposed by Dosovitskiy and Brox on AlexNet for different layers (\\\"Generating Images with Perceptual Similarity Metrics based on Deep Networks\\\"). In addition, as we show in our ablation study, the cascaded manner of our translation further improves the result achieved by the deepest layer translation only.\\n\\n---Why wasn't a final translator used for the final image, conditioned on the final \\\\tilde{b}_1? \\nWe noticed that shallow layers contribution was negligible. Thus, we omitted the use of \\\\tilde{b}_1.\\n\\n--- Is the VGG network pretrained on ImageNet? Why wasn't another task used that could be retaining more of the relevant features? eg on semantic segmentation\\nYes, the VGG was pre-trained on ImageNet, we will clarify it in our revision.\\nVGG pretrained on ImageNet is widely used for feature extraction, from perceptual similarity to cross domain correspondence. It is remarkable that a network pretrained only with image-level annotations can assist in the translation process.\\nSemantic segmentation networks require more elaborate supervision (pixel-level annotation) and allow a different kind of translation approaches, which can directly use the segmentation maps.\\n\\n --- Could this be used for networks pretrained on other datasets? 
Presumably ImageNet has information about the animals translated in this paper. Even better, could we somehow learn these features for the domain pairs automatically somehow?\\n\\nYes, different networks can be used, as the approach is generic, although, a good feature extraction network, such as VGG pretrained on ImageNet, is required for a meaningful translation. Please note that while ImageNet does contain several of the animals translated in this paper, it does *not* contain giraffe and the different types of dogs and cats presented. We believe that learning the features in a self-supervised manner, will not yield the same quality as VGG features and fine-tuning VGG on the specific domains did not yield better results. \\n\\n--- How meaningful is the FID score really in this case?\\nFID metric is still commonly used to assess how close fake and real samples are.\\nFID uses layer of inception trained on ImageNet, thus, it is closely related to our deep features translation. In a sense, we minimize it directly. \\n\\n --- How were the 10 GANs tuned\\nThe GANs were tuned manually, experimenting with several architecture (similar to all layers), and losses. Same parameters where used for all translation tasks. We found this process to be relatively simple. \\n\\n--- On p.7 it is mentioned that the number of instances is preserved, however, it should be made clear that it's preserved in some (or most if that is what was observed) of the examples.\\nIn most cases the number of instances was preserved, we will clarify it in our revision, thanks.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"* Summarize what the paper claims to do/contribute.\\nThis paper claims to extend existing image translation works, like CycleGAN, to domain pairs that are not similar in shape. It is proposed to do so by using a VGG network trained on classification (I assume on Imagenet), extracting features from the two domains and learn 5 CycleGANs to translate for each level of the feature hierarchy. At each level of the hierarchy the translation from the previous level is used to condition the translation for the current level. During inference, the final image translation is done by \\\"feature inversion\\\" (a technique proposed in Dosovitsikiy and Brox, 2016) from the final feature layer. The technique is show on example from a number of pairs of domains like Zebra-to-Elephant (and back), Giraffe-to-Zebra (and back), Dog-to-Cat (and back) and is compared with a number of baselines qualitatively and quantitatively with the FID score. \\n\\n* Clearly state your decision (accept or reject) with one or two key reasons for this choice.\\nWeak Reject.\", \"major_reasons\": [\"The problem itself, as stated in the introduction, seems ill-posed to me. One of the struggles I had while looking through the results was to understand what the images should be looking like. ie What should a zebra translated to a giraffe look like? The motivation for such a problem is also not immediately clear either.\", \"Most of the resulting images do not seem \\\"translated\\\" to me. As stated in the paper (end of p.2) \\\"one aims to transform a specific type of object without changing the background.\\\" As one can see in eg Fig. 1 the resulting translations are completely different images with the foreground object of the new domain in roughly similar poses. The background in most cases does not persist. What I suspect is actually happening here is that the high-level semantics from the first image are used as some sort of noise to generate new images from the new domain. One question I had, for example: could we be getting similar results if we used the VGG bottleneck as the noise vector in an InfoGAN? Since the VGG network is pretrained and used in the same way in both domains, I imagine we would be seeing something very similar. (and it would be def. preferrable to tuning 10 GANs!)\", \"Provide supporting arguments for the reasons for the decision.\", \"Some of the decisions made in the paper were unclear and not supported adequately. The questions (in rough order of importance) that made some of the contributions unclear to me:\", \"Why wasn't a final translator used for the final image, conditioned on the final \\\\tilde{b}_1?\", \"Is the VGG network pretrained on ImageNet? Why wasn't another task used that could be retaining more of the relevant features? eg on semantic segmentation\", \"Could this be used for networks pretrained on other datasets? Presumably ImageNet has information about the animals translated in this paper. Even better, could we somehow learn these features for the domain pairs automatically somehow?\", \"How meaningful is the FID score really in this case?\", \"How were the 10 GANs tuned?\", \"Provide additional feedback with the aim to improve the paper. 
Make it clear that these points are here to help, and not necessarily part of your decision assessment.\", \"It is mentioned on p.4 that \\\"clamping is potentially a harmful irreversible operation\\\" but that harmful results were not observed. As I was reading that I was wondering how these results would actually look like.\", \"On p. 6 it is mentioned that the number of images for 2 categories are reported in another paper. I think it'd take less space to actually report the number of images here.\", \"On p.7 it is mentioned that the number of instances is preserved, however it should be made clear that it's is perserved in some (or most if that is what was observed) of the examples.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new cascaded image-to-image translation method to address the I2I tasks where the domains have exhibit significantly different shapes. The proposed method train cycle GAN on different levels of feature extracted by pre-trained VGG and combine the futures with the AdaIN layer to keep the correct shape from the deep features.\", \"pros\": \"1. The proposed method seems to work well on different shape I2I datasets without using semantic masks compared to previous works.\\n2. The idea of cascaded translators sounds simple and reasonable which can probably benefit other related tasks. The way of applying AdaIn to combine features of different levels is also a nice trick to keep the correct shape from deep features.\\n3. The paper writing is OK, but some explanation and organization should be improved as mention in cons.\", \"cons\": \"1. Some figures are hard to understand without looking at the text. For example, in Figure 1, the caption does not explain the figure well. What does each image, the order, and the different sizes mean? As to Figure 3, the words \\u201ctop left image\\u201d, \\u201cright purple arrows\\u201d are a bit confusing.\\n2. The \\u201cCoarse to fine conditional translation\\u201d section describes the conditional translation in the shallow layers. I suggest mentioning it in previous sections for easy understanding.\\n3. As to the t-SNE visualization in Figure 9, different methods seem to use different N-D to 2-D mapping functions. This may lead to an unfair comparison.\", \"suggestions\": \"1. The authors use the pre-trained classification network VGG for feature extraction and then train dedicated translators based on these features. I wonder if the authors also tried finetuning VGG on the two domains or training an auto-encoder on the two domains. The domain-specific knowledge may help to improve the results and alleviate the limitations presented in the paper, e.g. background of the object is not preserved, missing small instances or parts of the object due to invertible VGG-19.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new method for image-to-image translation. The problem of most existing methods is that they are good in translating style (e.g from photo to Van Gogh) but do not allow for significant changes in shape (e.g. from zebra to giraffe). The authors address this by performing the translation in a cascaded fashion starting from a semantic (deep) level (fifth layer of VGG). The underlying idea is that translating at this more semantic level puts less spatial constraints on the final resulting images (making translations from zebra to giraffe possible). After the fifth layer is translated other layers are translated conditioned on the previous translation results.\\n\\nThe method is compared to other translation methods including DRIT, MUNIT, GANimorph. Both visually and quantitatively as measured by FID. The results of the proposed method are superior. No comparison to TransGaGa is provided (but I could not find code for this method).\\n\\nMy recommendation is borderline accept. The proposed method is simple. The experimental results are limited but show both visually and quantitatively superior results. Especially FID scores are much better. I think the paper could be improved in explaining the conceptual novelty of the paper (especially with respect to GANimorph and TransGaGa).\\n\\n1. I like the idea of applying the cycle consistency to the deeper layers rather than at the pixel level. Are there other methods which do this ? It could be highlighted more as part of the contribution.\\n2. An ablation study should be added (in FID scores). I would like to see the necessity of the cascade (which is in the title) confirmed: results for only translating a single layer (3,4, or 5) should be compared to translating 3,4,5 together as in proposed method. \\n3. Would it be possible to not use pretrained feature from VGG-19 ? This might also be a limitation. In principle, I guess you could train everything end-to-end, or is this impractical because of the feature inversion. \\n4.The authors could add some text on the lack of diversity for the translations in the limitation section. I understand there is no diversity and the translation is deterministic. \\n \\nIn general, I think the paper clearly explains what it does, and it also shows cases that it performs better than state-of-the-art. The paper could be much improved in its analysis of the reasons for its better performance, analyzing key aspects of its design like cycle GAN on features, pretrained VGG features and the use of cascaded generation of the final image.\"}"
]
} |
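The rebuttals in the record above describe translating between frozen VGG-19 feature maps and imposing cycle consistency at the feature level rather than on pixels. Below is a hedged sketch of that general idea, not the paper's implementation: `translate_ab`/`translate_ba` are hypothetical stand-in translators, the tapped layer indices are a common choice rather than the paper's exact ones, and the `weights=` argument assumes torchvision >= 0.13.

```python
import torch
import torchvision

# Frozen VGG-19 feature extractor, pretrained on ImageNet.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # VGG stays fixed; only the translators would train

# Indices of a few commonly tapped ReLU activations in vgg19.features.
TAPS = {3: "relu1_2", 8: "relu2_2", 17: "relu3_4", 26: "relu4_4", 35: "relu5_4"}

def vgg_features(x):
    """Run x through VGG-19, collecting activations at the tapped layers."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TAPS:
            feats[TAPS[i]] = x
    return feats

def feature_cycle_loss(x_a, translate_ab, translate_ba):
    # Cycle consistency imposed on the deepest (most semantic) features,
    # not on pixels; ImageNet input normalization is omitted for brevity.
    f_a = vgg_features(x_a)["relu5_4"]
    f_a_rec = translate_ba(translate_ab(f_a))  # A -> B -> A in feature space
    return torch.nn.functional.l1_loss(f_a_rec, f_a)

x = torch.rand(1, 3, 224, 224)
print(feature_cycle_loss(x, lambda f: f, lambda f: f))  # identity translators -> 0
```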
Hyg4kkHKwH | V1Net: A computational model of cortical horizontal connections | [
"Vijay Veerabadran",
"Virginia R. de Sa"
] | The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes. Such representations are required to be invariant to the sensory inconsistencies caused by dynamically varying lighting, local texture distortion, etc. A key architectural feature combating such environmental irregularities is ‘long-range horizontal connections’ that aid the perception of the global form of objects. In this work, we explore the introduction of such horizontal connections into standard deep convolutional networks; we present V1Net -- a novel convolutional-recurrent unit that models linear and nonlinear horizontal inhibitory and excitatory connections inspired by primate visual cortical connectivity. We introduce the Texturized Challenge -- a new benchmark to evaluate object recognition performance under perceptual noise -- which we use to evaluate V1Net against an array of carefully selected control models with/without recurrent processing. Additionally, we present results from an ablation study of V1Net demonstrating the utility of diverse neurally inspired horizontal connections for state-of-the-art AI systems on the task of object boundary detection from natural images. We also present the emergence of several biologically plausible horizontal connectivity patterns, namely center-on surround-off, association fields and border-ownership connectivity patterns in a V1Net model trained to perform boundary detection on natural images from the Berkeley Segmentation Dataset 500 (BSDS500). Our findings suggest an increased representational similarity between V1Net and biological visual systems, and highlight the importance of neurally inspired recurrent contextual processing principles for learning visual representations that are robust to perceptual noise and furthering the state-of-the-art in computer vision. | [
"Biologically plausible deep learning",
"Recurrent Neural Networks",
"Perceptual grouping",
"horizontal connections",
"visual neuroscience",
"perceptual robustness",
"Gestalt psychology"
] | Reject | https://openreview.net/pdf?id=Hyg4kkHKwH | https://openreview.net/forum?id=Hyg4kkHKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CwtCXuKewH",
"ryeD3Mo3jB",
"HkeRt1ihiH",
"S1eQOp93ir",
"HJgoWw8U9r",
"SylfsS40Fr",
"B1gvZbERKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798724000,
1573855919159,
1573855110461,
1573854571026,
1572394755248,
1571861913533,
1571860734598
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1464/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1464/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1464/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1464/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1464/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1464/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a neurally inspired model that is a variant of conv-LSTM called V1net. The reviewers had trouble gleaning the main contributions of the work. Given that it is hard to obtain state of art results in neurally inspired architectures, the bar is much higher to demonstrate that there is value in pursuing these architectures. There are not enough convincing results in the paper to show this. I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Official authors response to R2\", \"comment\": \"We thank you for taking the time in carefully reviewing our submission. We shall address your concerns with regard to our work below:\\n\\n(1) Advancement of SOTA in computer vision: We acknowledge this valid concern; upon adding data augmentation and training on larger batch sizes, we were able to improve V1Net's performance as shown in the last row of Table 1. However, as shown in our Table. 1, V1Net with ~500k parameters performs closely to several SOTA boundary detection methods with orders of magnitude more number of parameters. We also do not utilize ImageNet pretraining while training V1Net on BSDS500, we expect such common practices used by the computer vision community to boost our performance; we will include results from our experiments in this direction in a future revision.\\n\\n(2) Motivation for texturized challenge: As mentioned in our response to R1, we proposed to use FFT texturization as one way to evaluate robustness as we find evidence in previous vision science literature on how phase information in images is crucial for humans to perceive natural images [1,2] relative to amplitude/magnitude information. We do not intend to claim that FFT texturization is the objectively correct way to evaluate perceptual robustness, however, our choice of such a manipulation is explained by previous use of this manipulation in vision science literature and the robustness of humans to this manipulation.\\n\\n(3) Value of OOD generalization to stylized stimuli: In this result, we wanted to demonstrate how recurrent horizontal connections improve the ability of feedforward deep nets to generalize/domain transfer beyond the training data distribution without explicit supervision. We found it interesting that a V1Net that was not trained to predict boundaries on style-transferred stimuli could invariantly regardless predict object boundaries with a reasonable precision. We are currently working on performing this experiment with quantitative results (and comparisons with SOTA boundary detection models); we hope that these results will add more value to our OOD generalization experiment.\\n\\n(4) Novel insights about brain function: We acknowledge the lack of novel insights about brain functioning in our work; however, our results that suggest the improved robustness and boundary detection performance of a full V1Net (relative to various lesions of the horizontal connections and their nonlinearity) can be observed as a re-validation of the importance of horizontal connections to computer vision models (in addition to biological vision).\\n\\n(5) Clarity issues on the paper's objectives and contributions: Our apologies for the lack of clarity in this area of our paper. Our contribution was to test whether horizontal connections play a role in improving the perceptual robustness and early visual task performance of artificial visual representations. To test this hypothesis, we proposed V1Net, a novel model of horizontal connections that is inspired by previously proposed small-scale models (which don\\u2019t scale well to today\\u2019s computer vision benchmarks). Being a model that can be easily incorporated into currently existing DCN implementations, we hope to encourage the computer vision community to utilize this simple addition to existing DCNs for improving robustness and early visual task performance.\", \"references\": \"1. Thomson, M. G., Foster, D. H., & Summers, R. J. (2000). 
Human sensitivity to phase perturbations in natural images: a statistical framework. Perception, 29(9), 1057-1069.\\n2. Tadmor, Y., & Tolhurst, D. J. (1993). Both the phase and the amplitude spectrum may determine the appearance of natural images. Vision research, 33(1), 141-145.\"}",
"{\"title\": \"Official authors response to R3\", \"comment\": \"We thank you for taking the time in carefully reviewing our submission. We shall address your concerns with regard to our work below:\\n\\n(1) Reading clarity issues: Thanks for pointing out the clarity issues in our paper, we elaborate more on terms such as the one mentioned in your review in order to ease the paper reading. We have also made the symbol usage consistent in our uploaded revision's figures and equations. \\n\\n(2) Comparison to standard ConvLSTM: This is a nice point, thank you. We have added the standard ConvLSTM equations next to V1Net\\u2019s equations.\\n\\n(3) Clarification on colored lines in Fig. 1: Following the convention used in neuroscience literature [3], our red, dark and light blue lines correspond to linear excitation (W_exc), nonlinear shunting (W_shunt) and linear subtractive inhibition (W_inh) operations respectively. We have made this clear by explicitly mentioning the weight variable names in our figure.\\n\\n(4) Reg. related work in ML/AI on lateral connections: Thanks for pointing this out, we have added the mentioned and additional related work from the ML/AI community on learning spatial dependencies.\\n\\n(5) Implementation details + image input to RNNs: Our apologies for the missing details. Similar to [2], we use activations from the bottom layer to set X_t for all timesteps. We would like to note that while the input is static, dynamics in response are possible due to the horizontal recurrent connections that unroll in time. We have added this information and more elaborate implementation details to our revision (in Appendix A.1). \\n\\n(6) About emergent neurally plausible horizontal connections: Thanks for your interest in this line of our ongoing work. To add more detail to Fig. 6, we notice strong (sometimes oriented) excitation among kernels that detect similar features. This observation is validated by previous experimental neuroscience findings that report a similar like-to-like excitation among simple cells with similar orientation tuning in the primate visual cortex. We would also like to point out the learning of other neurally plausible connectivity structures such as center-on surround-off and border-ownership [1] cells in our V1Net model.\\n\\n(7) Quantitative analyses: We have additionally compared our model's boundary detection performance to the current state of the art boundary prediction methods along with their model parameter count. We hope you value the comparable performance of our model (w/ 500k parameters) to other SOTA models with orders of magnitude more number of parameters.\", \"references\": \"1. Hesse, J. K., & Tsao, D. Y. (2016). Consistency of border-ownership cells across artificial stimuli, natural stimuli, and stimuli with ambiguous contours. Journal of Neuroscience, 36(44), 11338-11349.\\n\\n2. Zamir, A. R., Wu, T. L., Sun, L., Shen, W. B., Shi, B. E., Malik, J., & Savarese, S. (2017). Feedback networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1308-1317).\\n\\n3. Pastore, V. P., Massobrio, P., Godjoski, A., & Martinoia, S. (2018). Identification of excitatory-inhibitory links and network topology in large-scale neuronal assemblies from multi-electrode recordings. PLoS computational biology, 14(8), e1006381.\"}",
"{\"title\": \"Official authors response to R1\", \"comment\": \"We thank you for taking the time in carefully reviewing our submission. We shall address your concerns with regard to our work below:\\n\\n(1) By stating \\u2018incorporation of V1Net within existing DCN implementations\\u2019, we mean to use V1Net as an additional layer (similar to Batch Normalization) that computes a nonlinear recurrent function of the input to each layer in the (deep) neural network. We conceptualize this as similar to normalizing the feature maps of a neural network\\u2019s layer L by taking into consideration a learned dynamic interaction between the different feature channels in that layer.\\n\\n(2) We have developed V1Net as a modification to a ConvLSTM; this is due to the success of a relevant prior model [3] that learns recurrent functions of static images, also derived from ConvLSTMs. By incorporating linear and nonlinear contextual interactions in a ConvLSTM, we believe that V1Net introduces a relatively simple technique to mix the computations of long-range horizontal connections (found in visual neuroscience literature; such as surround inhibition and excitation) into DCNs without deviating largely from standard deep learning modules that have been performing well on recent ML benchmarks. We do not claim V1Net to be the most suitable formulation of horizontal connections, however our learned neurally plausible connection structure (shown in Fig. 6.) seems to show evidence for V1Net\\u2019s similarity to biological horizontal connections.\\n\\n(3) Thanks for raising the interesting concern of model simplicity, we did not attempt to make V1Net simpler than a traditional Convolutional LSTM in our experiments. We have compared V1Net to both parameter-matched and receptive-field matched baseline ConvLSTM models on the Texturized-Challenge and BSDS500, and the results suggest that V1Net may not be computationally more simple than ConvLSTM models. These comparisons resulted in ConvLSTMs and V1Net obtaining roughly the same accuracy on the clean version of Texturized challenge across multiple random initializations, and V1Net obtains a better performance on the BSDS500 benchmark.\\n\\n\\nWe proposed to use FFT texturization as one way to evaluate robustness as we find evidence in previous vision science literature on how phase information in images is crucial for humans to perceive natural images [1,2] relative to amplitude/magnitude information. We do not intend to claim that FFT texturization is the objectively correct way to evaluate perceptual robustness, however our choice of such a manipulation is explained by previous use of this manipulation in vision science literature and the robustness of humans to this manipulation.\\n\\n(4) We acknowledge the lack of comparison to state-of-the-art models on CIFAR10, we are working on this and will add this as part of the revision in the future. \\n\\n(5) Thanks for pointing out the issue with our CIFAR results figure, we have updated this figure in our revision.\", \"references\": \"1. Thomson, M. G., Foster, D. H., & Summers, R. J. (2000). Human sensitivity to phase perturbations in natural images: a statistical framework. Perception, 29(9), 1057-1069.\\n2. Tadmor, Y., & Tolhurst, D. J. (1993). Both the phase and the amplitude spectrum may determine the appearance of natural images. Vision research, 33(1), 141-145.\\n3. Zamir, A. R., Wu, T. L., Sun, L., Shen, W. B., Shi, B. E., Malik, J., & Savarese, S. (2017). Feedback networks. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1308-1317).\"}",
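As one concrete illustration of the spectral manipulation discussed in this response, below is a small NumPy sketch that keeps an image's amplitude spectrum and randomizes its phase, per channel. Whether this matches the paper's exact FFT-texturization procedure is an assumption on our part; the function name and interface are illustrative only.

```python
import numpy as np

def phase_scramble(img, rng=np.random):
    # Keep the amplitude spectrum, randomize the phase, per channel.
    # This illustrates the idea discussed above: phase carries most of
    # the structure humans rely on [1, 2], so a phase-scrambled image
    # preserves second-order statistics but destroys object structure.
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[-1]):
        spec = np.fft.fft2(img[..., c])
        amp = np.abs(spec)
        rand_phase = np.exp(1j * rng.uniform(-np.pi, np.pi, spec.shape))
        out[..., c] = np.real(np.fft.ifft2(amp * rand_phase))
    return out
```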
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I first want to thank the authors for their proposed approach in this paper. Authors made an attempt to bridge the gap between natural (primate) vision and NN models. The paper is easy to read and understand. The authors proposed a Conv-LSTM-inspired model called V1Net. The model shows some merits in detecting the correct labels for noised inputs.\\n\\nUnfortunately, the paper lacks some critical analysis, and V1Net usability is limited in real-life. Commonly, the community expects a certain number of experiments to back a claim. Specifically I have the following questions:\\n\\n1. \\u201cV1Net can be flexibly incorporated as a module in existing implementations of DCNs\\u201d, can you elaborate how? Current architecture rarely consider using conv-lstm to solve tasks such as object detection. Not saying the current trend is correct or incorrect in doing so, but lack of experiments and details leave this claim unsupported. \\n2. The leap between horizontal connectivity and V1Net is rather unmotivated. Specifically, authors should explain why V1Net is the only (or the most suitable) way of implementing horizontal connectivity. \\n3. Why is the FFT texturization the correct way to evaluate the robustness? Specifically, could it be that V1Net being a simpler model (such as the case in Fig 4, where the performance is lower than conv-lstm for clean data) simply generalizes better to noisy input. Simpler models sometimes have the tendency to remove noise better. \\n4. Necessary comparisons are not made with at least a few state of the art models in CIFAR. As it stands the impact is limited on community who uses conv-lstm for CIFAR classification.\\n5. Figure 4 lacks the required standards of a scientific article figure. Borders on only 3 sides, and DPI seems to be low as text is pixelized.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes, inspired by the Primates brain, to add horizontal inhibitory and excitatory connections. In practice, the work proposes a variant of convolutional LSTM cells, that incorporates additional convolutions.\\n\\nOverall, the paper is hard to read, the experimental setting does not fully convince. It seems hard to reproduce the results using the information given in the paper. A lot of the method section focuses on intuition, using vague vocabulary which makes it hard to understand concretely what is done in practice. In particular, the contributions are not clear enough. The difference between the proposed approach and existing works needs to be made clearer. \\nThe idea is interesting, however, and the paper would benefit greatly from addressing those issues. \\n\\n\\n\\nMain points\\n\\n\\nThe claims seem somewhat bigger than the actual contributions. In practice, the contribution is a modification to an LSTM cell.\\n\\nThe paper is not easy to read, and mixes various terms without introducing them (e.g presynaptic activity is used to introduce the method but never introduced, not even in related work). It would be good to use standard notation and math font (e.g. small bold letter for vectors, etc) and to define notations. In Figure 1, the kronecker symbold represents convolutions but in (1), the hadamard product (*) represents convolutions. These inconsistensies make the paper harder to follow.\\n\\nThe problem of clarity extends to the experimental section, where the experiments are not clearly explained, \\n\\nThe method section could be more detailed, in particular, a mathematical comparison of LSTM vs the proposed approachh would be useful. Figure 1 is unclear: what do the red, dark blue and light blue lines correspond to mathematically?\\n\\nThe related attempts in ML and Deep learning should be reviewed ( e.g. Lateral Inhibition-Inspired Convolutional Neural Networkfor Visual Attention and Saliency Detection, AAAI 2018).\\n\\nThe experimental setting is not convincing. It seems that a simple state-of-the-art CNN architecture would do better than the proposed approaches. Comparing a ConvLSTM with a single convolution does not seem fair.\\n\\nVery little implementation details are given. The authors mention a ConvLSTM2D layer is used. However, the inputs are static images: how exactly was the experiment done? Where does the time come from since the dataset considered is static?\", \"about_emergent_neurally_plausible_horizontal_connections\": \"that section is interesting but would benefit from being more detailed and rigorous. There is no actual measure or study of the emergence of said connections.\\n\\n3.2 is misleading as it corresponds to a setting that is never used in the experimental setting. \\n\\nMost of the results are qualitative, except for Table 1. It would be useful to have quantitative comparisons on established benchmarks.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose to modify a convolutional variant of LSTM (ConvLSTM) to include horizontal connections inspired by known interactions in visual cortex: excitation, subtractive inhibition (linear) and shunting multiplicative gating (nonlinear). They evaluate their V1Net model on a new task they call the texturized challenge (spectrally perturbed CIFAR-10 images) and on contour segmentation on BSDS500 and show that their approach outperforms some baselines.\", \"strengths\": [\"The biological motivation is quite clear\", \"Architecture is simpler than that of previous related work (hGRU)\"], \"weaknesses\": [\"Not clear what the objectives/contributions are\", \"No advancement of state of the art in computer vision\", \"No novel insights about brain function\", \"Motivation of the \\\"texturized challenge\\\" is unclear\", \"Performance on BSDS500 is far from state of the art\", \"Value of the qualitative analysis on stylized ImageNet is unclear\", \"Overall, I'm not sure what the goal of this paper is. It neither presents an advance of the state of the art in any computer vision problem nor does it lead to any novel insights about the brain. The lack of a clear statement about the contributions of the paper seems to confirm this impression \\u2013 the authors don't seem to know either.\"]}"
]
} |
r1eX1yrKwB | Distribution Matching Prototypical Network for Unsupervised Domain Adaptation | [
"Lei Zhu",
"Wei Wang",
"Mei Hui Zhang",
"Beng Chin Ooi",
"Chang Yao"
] | State-of-the-art Unsupervised Domain Adaptation (UDA) methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains. Different from these methods which do not model the feature distributions explicitly, in this paper, we explore explicit feature distribution modeling for UDA. In particular, we propose Distribution Matching Prototypical Network (DMPN) to model the deep features from each domain as Gaussian mixture distributions. With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains. In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations. The first one minimizes the distances between the corresponding Gaussian component means of the source and target data. The second one minimizes the pseudo negative log likelihood of generating the target features from source feature distribution. To learn both discriminative and domain invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together. Extensive experiments are conducted over two UDA tasks. Our approach yields a large margin in the Digits Image transfer task over state-of-the-art approaches. More remarkably, DMPN obtains a mean accuracy of 81.4% on VisDA 2017 dataset. The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t hyper-parameter changes. | [
"Deep Learning",
"Unsupervised Domain Adaptation",
"Distribution Modeling"
] | Reject | https://openreview.net/pdf?id=r1eX1yrKwB | https://openreview.net/forum?id=r1eX1yrKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"8efzvgGukX",
"SklF3jHIir",
"BJxtpvB8sB",
"BJlDLqTMsH",
"r1xyE5pzsB",
"B1xu9qHGor",
"rJxzTD6bor",
"BkxKqPpbiS",
"rJgKuw6ZsS",
"S1lOfD6bir",
"BJeJePTbiS",
"BJldN2FWqH",
"SygAfV3gqH",
"HJenFJCaFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723969,
1573440433070,
1573439424555,
1573210702569,
1573210663119,
1573178000312,
1573144506395,
1573144464750,
1573144433295,
1573144335989,
1573144294729,
1572080688284,
1572025366467,
1571835780483
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1463/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1463/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1463/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1463/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1463/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper addresses the problem of unsupervised domain adaptation and proposes explicit modeling of the source and target feature distributions to aid in cross-domain alignment.\\n\\nThe reviewers all recommended rejection of this work. Though they all understood the paper\\u2019s position of explicit feature distribution modeling, there was a lack of understanding as to why this explicit modeling should be superior to the common implicit modeling done in related literature. As some reviewers raised concern that the empirical performance of the proposed approach was marginally better than competing methods, this experimental evidence alone was not sufficient justification of the explicit modeling. There was also a secondary concern about whether the two proposed loss functions were simultaneously necessary. \\n\\nOverall, after reading the reviewers and authors comments, the AC recommends this paper not be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"7) a) \\\"when L1-distance between two distributions is small, H-delta-H divergence is small\\\" is proved in [R1], but \\\"minimizing GCMM loss reduces the L1-distance\\\" and \\\"minimizing PDM loss also reduces the L1-distance\\\" are not mathematically proved in this paper. These proofs are necessary to obtain the bound of the target error in the proposed method.\\n\\n7) b) Thank you. I understand the difference.\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"1) a) Thank you for your thorough explanation. I understand that minimizing the two losses is a sufficient condition for distributional matching.\\n\\n1) b) So, using PDM loss is enough in the ideal case, and GCMM loss works like a kind of regularization for PDM loss, while it does not change the goal. \\\"Working at different levels\\\" sounds good, but it is quite better if the authors can show a good example in which minimizing PDM loss is not enough and GCMM loss helps. (reverse case is shown in the authors' response of 1-a)\\n\\n3) Sorry for typos. They should be mu^{es} and mu^{et}. Are they estimated in each mini-batch? If so, does the computed GCMM loss result in the average of the distance between the means? It seems to depend on the class priors, but it does not in Eq. (5).\"}",
"{\"title\": \"Response to Reviewer #3 Part 1\", \"comment\": \"Thank you for your response. Here we answer the questions you have mentioned:\\n\\n1). a). Ideally minimizing the two discrepancy losses is a sufficient condition for exact feature distribution matching between the source and target data but is not a necessary condition. The first thing we need to keep in mind is that the source feature distribution is not fixed during training. So if we are able to ideally minimize the two losses, then it is natural to assume that the classification loss on the labeled source data can also be ideally minimized. In that case, the source feature Gaussian mixture distribution collapse to N feature vectors, where each vector represents the Gaussian component mean for one class and all source data in the same class will have the exact same feature representation as the Gaussian component mean for that class. If the two discrepancy losses are ideally minimized, then the target feature distribution also collapses to the N feature vectors, which is an exact feature distribution matching between the source and target data. Thus ideally minimizing the two discrepancy losses ensures exact feature distribution matching between the source and target data. Combining the classification loss and the two discrepancy loss fucntions, this is exactly the objective function our method is trying to minimize. Thus, our method approaches exact feature distribution matching between the source and target data as we minimize its objective.\\n\\nSuppose we have exact feature distribution matching between the source and target data. Then the GCMM loss is ideally minimized, which is 0. If we assume the source feature distribution does not collapse to the N component means, then PDM loss can be further minimized until the target feature distribution collapse into the N Gaussian component mean vectors. Thus, exact feature distribution matching does not prove the two discrepancy loss functions are ideally minimized. However, there is no need to worry here, as keep minimizing the PDM loss maintains the decision boundary. Our method, which keeps minimizing the PDM loss, will not result in accuracy drop for the target data.\\n\\nb). No, theoretically by only minimizing the PDM loss can achieve exact feature distribution matching. As shown in our experiment results, DMPN_PDM already performs quite well. \\n\\nThe reason why we need both is that they work at different levels. GCMM brings the entire target Gaussian component closer to the source Gaussian component and PDM generates target features closer to the source feature distribution. GCMM works at the class level and PDM works at the sample level. In some sense, they complement each other. In the extreme case when each class has only one data point, GCMM loss reduces to PDM loss, thus they are not conflicting each other but only working at different levels to achieve the same goal.\\n\\nThe optimal point for GCMM loss, is when the Gaussian component means of the target features coincide with the source feature Gaussian component means. Thus minimizing the GCMM loss, pulls the target Gaussian component means towards the source Gaussian component means, which helps to decrease the PDM loss. Similarly, minimizing the PDM loss ensures the embedding function to generate the target features closer to the source Gaussian component means, which in turn minimize the GCMM loss. 
Therefore, minimizing the two discrepancy losses boost each other and will not result in bad behavior for optimization.\\n\\n3). No, we do not keep global statistical estimators for mu_s and mu_t like batch-normalization layer. The distribution parameters for source data is learned automatically by back-propagation, which includes mu_s. We do not need global mu_t for the target data. We estimate mu_t in each mini-batch.\\n\\n6). If the co-variate shift assumption does not hold, we may assume uniform class prior when the label shift is mild. We cannot assume uniform class prior when the label shift is severe, as severe label shift will degrade the performance of the transfered model in the target domain [1] by a lot.\"}",
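To make the per-mini-batch estimation described in this response concrete, here is a minimal PyTorch sketch of the two discrepancy losses. It assumes unit-variance Gaussian components and a uniform class prior, and all names are illustrative rather than taken from the authors' code.

```python
import torch

def gcmm_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    # Squared Euclidean distance between corresponding per-class feature
    # means, estimated from the current mini-batch; classes absent from
    # either domain's batch are skipped, as the authors describe above.
    total, counted = 0.0, 0
    for c in range(num_classes):
        s_mask = src_labels == c
        t_mask = tgt_pseudo == c
        if s_mask.any() and t_mask.any():
            mu_s = src_feats[s_mask].mean(dim=0)
            mu_t = tgt_feats[t_mask].mean(dim=0)
            total = total + (mu_s - mu_t).pow(2).sum()
            counted += 1
    return total / max(counted, 1)

def pdm_loss(tgt_feats, src_means):
    # Pseudo negative log-likelihood of target features under the source
    # Gaussian mixture; unit-variance components and a uniform class prior
    # are assumed here to keep the sketch short.
    sq_dists = torch.cdist(tgt_feats, src_means).pow(2)  # [N, C]
    log_probs = -0.5 * sq_dists                          # up to an additive constant
    return -torch.logsumexp(log_probs, dim=1).mean()
```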
"{\"title\": \"Response to Reviewer #3 Part 2\", \"comment\": \"7). a). First, let us set the ground, we can have different measurements of distribution discrepancy. H-delta-H divergence is one measurement for distribution discrepancy and the larger the distribution discrepancy the larger the H-delta-H divergence.\\n\\nIn our paper, we model the feature distribution as Gaussian mixture. \\nGCMM is a measurement of the distribution discrepancy between two Gaussian mixture distributions. It measures the distance between the coresponding Gaussian component means of two Gaussian mixture distributions. If two Gaussian mixture distributions have quite different Gaussian component means, then GCMM will be large.\\n\\nPDM is another measurement of the distribution discrepancy between two distributions. It measures the negative log likelihood of data points from P1 on P2. The larger the distribution discrepancy, the larger the PDM loss.\\n\\nNow, we have three measurements for distribution discrepancy. Intuitively, these measurements are all positively related to distribution discrepancy. Thus, reducing two of them will also reduce the third one. Thus we claim \\\"The H-delta-H divergence is small when the two distribution discrepancy is small.\\\".\\n\\nTo make it more formal or make it more clearer. Let us introduce the forth measurement, the L1-distance bewteen two distributions as the default one.\\n\\nNow, clearly when L1-distance between two distributions is small, H-delta-H divergence is small;\\nminimizing GCMM loss reduces the L1-distance between two Gaussian mixtures, as the two distributtions are moved closer;\\nminimizing PDM loss also reduces the L1-distance between two Gaussian mixtures, as data points from one distribution are moved closer to another distribution;\\nThus, we can conclude, \\\"The H-delta-H divergence is small when the two distribution discrepancy is small.\\\"\\n\\nSorry if there are too many words to read, we just want to make the question clearer.\\n\\nThe above is our full justification why minimizing GCMM and PDM reduces H-delta-H divergence. \\n\\nOur clarification of the relationship between the two distribution discrepancy losses and the H-delta-H divergence is updated in the paper as \\\"In DMPN, we minimize the first term through minimizing the domain discrepancy losses, as H-delta-H is small when the source features and target features have similar distribution and minimizing the domain discrepancy losses makes the source and target feature to distribute similarly.\\\"\\n\\nThanks.\\n\\nb). Our justification of using noisy target labels is different from the justification in [2]. Our justification is derived based on the theory in Supervised Domain Adaptation (Lemma 4 in [3]), while Saito et al.'s is based on Unsupervised Domain Adaptation (Theorem 2 in [3]). As we are trying to justify using noisy labeled target data, so deriving the justification from Supervised Domain Adaptation is more proper.\\n\\n\\n\\n\\n[1] Lipton, Z. C., Wang, Y. X., & Smola, A. (2018). Detecting and correcting for label shift with black box predictors. arXiv preprint arXiv:1802.03916.\\n[2] Saito, K., Ushiku, Y., & Harada, T. (2017, August). Asymmetric tri-training for unsupervised domain adaptation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 2988-2997). JMLR. org.\\n[3] Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Vaughan, J. W. (2010). A theory of learning from different domains. 
Machine learning, 79(1-2), 151-175.\"}",
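For reference, the bound from Ben-David et al. [3] that this exchange keeps returning to can be stated as follows. This is a known result restated for convenience (not new material); epsilon_S and epsilon_T denote source and target risks of a hypothesis h:

```latex
\epsilon_T(h) \;\leq\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h \in \mathcal{H}} \big[\epsilon_S(h) + \epsilon_T(h)\big].
```

Under this bound, any loss that provably shrinks a divergence dominating the H-delta-H term (such as the L1 distance invoked in the response above) tightens the second term, which is precisely the missing link the reviewer asks the authors to establish formally.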
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response.\\n\\n2) 4) 5) 8) 9) make sense to me. Especially, thank the authors for the additional experimental result on the Office-Home dataset. \\n\\n1) I got the intention of the authors, but I am not yet convinced with how the losses work. I want to confirm the following points one by one.\\n - Are the source and target distributions perfectly matched if and only if the both losses are ideally minimized?\\n - Can it be achieved only when we use both losses?\\n -- If yes, what is the difference between the two losses at the optimal point?\\n -- If no, why do we need both? If it affects the optimization dynamics (not the goal of the optimization), could you explain it intuitively?\\n\\n3) I want to confirm my understanding on how to calculate GCMM loss. Since mu_s and mu_t are the averaged features over the all data of the respective domain, I was thinking that they are initialized after the pre-training and are updated by using each mini-batch like stats in the batch-normalization layer. Is it correct? If so, when a certain class does not appear in pseudo-labeled target data after the pre-training, and we cannot calculate GCMM loss due to lack of the initial mu of that class.\\n\\n6) Assuming covariate shift does not support why we can assume the uniform class prior at the target domain. But, anyway, I understand that this problem is remained in the future works.\\n\\n7) I am asking what is \\\"the two distribution discrepancy\\\" in the proposed method. For example, adversarial training in the original GAN minimizes JS divergence. Since JS divergence can bound L1 distance that can bound the H-delta-H divergence [R1], using the adversarial training for domain adaptation is meaningful from the perspective of Ben David's paper. The statement in 3.5 justifies that we can use noisy target labels, not the proposed losses, because the authors do not clarify the relationship between the proposed losses and H-delta-H divergence. And, the justification of using the noisy target labels seems to be already shown in [R2].\\n\\n[R1] Domain Adaptation: Learning Bounds and Algorithms, COLT 2009\\n[R2] Asymmetric Tri-training for Unsupervised Domain Adaptation, ICML 2017\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for reviewing our paper and your appreciation of our idea. Here we answer your concerns and clarify some of the weak points you mentioned:\\n\\n1). These two losses actually serve with different purposes when we design them. The GCMM loss brings the two distribution closer via minimizing the corresponding Gaussian Component means of the source and target data. And the PDM loss shapes the target feature distribution similar as the source feature distribution via minimizing the likelihood of generating the target feature from the source feature distribution. In this sense, they complement each other, to match the target feature distribution to be exactly like the source feature distribution. Furthermore, these two losses also reduce distribution discrepancy at different levels, GCMM reduces distribution discrepancy at the class-level and PDM reduces distribution discrepancy at the sample-level, thus in this sense, they also complete each other for domain adaptation.\\n\\n2). We want to clarify here. Our method does not learn the distribution parameters for the target data. We learn the distribution parameters of the source data. We use the empirically calculated distribution parameter estimator of the source and target data to minimize the distribution discrepancy loss function. Thus, we cannot \\\"calculate a standard divergence between source and target data distributions by using the parameters of GMs.\\\" For the GCMM loss, our method minimizes the euclidean distance between the corresponding Gaussian Component means of the source and target data for each class. PDM loss minimizes the likelihood of generating the target feature with the source feature distribution.\\n\\n3). We will ignore data from that class in the batch in that training iteration. As training data are sampled randomly in each iteration, and in the end, all data updates the model.\\n\\n4). Yes, you are correct. We forgot to average the term when writing the paper. We have corrected it in the revised paper. Thanks for pointing this out.\\n\\n5). We have added a sensitivity experiment on the confidence threshold. The results are in the appendix of the revised paper. Here is the summary:\", \"confidence_threshold\": \"0.6, 0.7, 0.8, 0.9\", \"mean_accuracy\": \"81.3, 81.4, 81.4, 81.5\\n\\nThe results show that our method is also robust against confidence threshold.\\nFor our proposed probability based weighting mechanism, as there is no hyper-parameter in there, so there is no need to provide sensitivity analysis on it.\\n\\n6). We know p(c) in the source domain, as it has labels. We do not know p(c) in the target domain, but we can estimate it. In this paper, we assume p(c) is uniform, as we focus on co-variate shift in this paper. Our work can be easily augmented to work for label shift too, once we estimate the target label distribution. However, we leave this as future work.\\n\\n7). The H-delta-H divergence is small when the two distribution discrepancy is small. As GCMM loss brings the two distribution closer, and PDM loss shapes the two distribution to be alike, the source and target feature distribution discrepancy will be smaller. Thus H-delta-H becomes smaller as we minimize GCMM loss and PDM loss. We have updated the paper on this part to make it clearer. Thanks for indicating this.\\n\\n8). We have added an experiment on the Office-Home dataset in the appendix of our paper. 
Our paper performs the best in all the transfer tasks in Office-Home compared to state-of-the-art UDA methods, showing that it also works for this more challenging dataset. \\n\\n9). Thanks for pointing out some of our typos, we have made the changes in our revised paper.\"}",
"{\"title\": \"Response to Reviewer #2 Part 1\", \"comment\": \"Thanks for reviewing our paper. You rejected our paper based on two reasons: \\\"This paper should be rejected because (1) the novelty of the main idea is marginal, and (2) the performance gain over the baseline methods is also marginal.\\\". We know it is difficult to argue about the novelty part, as different people have different tastes, however we want to have a try.\\n\\nAs a researcher in the area of domain adaptation, you and us all agree on the importance of this area and have read a lot of great works and come across tons of ideas in this area. But in all of these works and ideas, as far as we are concerned, none of them thought about modeling the feature distribution for domain adaptation though it facilitates us to better measure the distribution discrepancy across domains. This is the gap in the area of domain adaptation we are trying to fill with this work. You mentioned \\\"the novelty of the main idea is marginal\\\", so we want to ask which work do you have in mind that generates our idea as a marginal when you claim that? If you have, please provide us the example. Thanks very much.\\n\\nYou mentioned \\\"Pan et al.[1] already proposed the idea of transferring the knowledge from the source to the target using the prototype of each class.\\\", however, we want to clarify again that our work is not about applying prototypical network for domain adaptation, the main idea in our work is to model the feature distribution for domain adaptation, which is a new methodology for domain adaptation. Thus, inspired from Wan et al.'s [2] work, we model the feature distribution as Gaussian Mixture. In the Related Works section, we cite Snell et al.'s [3] paper, showing that learning prototypical network is equivalent to modeling feature distribution as exponential density. This statement shows the only connection between our work and Pan et al.'s. However, the equivalence expressed in the statement is only true for training a model in a single domain. Our work is way different from Pan et al.'s work in the setting of domain adaptation.\\nFirst, we base on different ideas. While Pan et al. propose a novel idea to remold Prototypical Network (PN) for domain adaptation, as stated in their paper, our work is based on the idea that almost all existing domain adaptation methods are minimizing the feature distribution discrepancy for effective knowledge transfer from source domain to target domain, however none of them explicitly models the feature distribution though intuitively it facilitates the measuring of distribution discrepancy, thus minimizing the measurement reduces the discrepancy. \\nSecond, the two works propose different distribution discrepancy loss functions. While Pan et al. proposes multi-granular distribution discrepancy loss functions at both class-level and sample-level. Our work proposes two novel distribution discrepancy loss function based on probability, one is Gaussian Component Mean Matching and one is Pseudo Distribution Matching. The two distribution discrepancy loss functions work at different aspects and complement each other, where GCMM brings the two distribution closer, PDM shapes the two distribution alike. The two distribution discrepancy loss functions also work at different levels. GCMM reduces domain discrepancy at class level, while PDM reduces domain discrepancy at sample level. 
We all know that discrepancy loss functions play the central role in a domain adaptation method and devising new domain discrepancy loss functions for domain adaptation is an active research area [4,5,6]. Thus, researchers in the area of domain adaptation would not ignore the two novel discrepancy loss functions we put forward. Furthermore, the idea that modeling the feature distribution enables us to propose new distribution discrepancy loss functions inspires further exploration in this direction to device more novel distribution discrepancy measures.\\n\\nYou further mentioned \\\"It is required to explain why explicit modeling performs better than implicit modeling of prototypes by theory or practice.\\\". As we are exploring the direction of modeling feature distribution for domain adaptation, we do not have much theory to back it up currently. Indeed, modeling the feature distribution as Gaussian Mixture enables us to propose two novel domain discrepancy loss functions. One is Gaussian Component Mean Matching (GCMM) and one is Pseudo Distribution Matching (PDM). For GCMM, Pan et. al. have proposed a similar one, which they called general purpose domain adaptation, but theirs is more complicated and does not inherit a probability interpretation.\"}",
"{\"title\": \"Response to Reviewer #2 Part 2\", \"comment\": \"For the second reason of rejection, we do not agree as well. For the digits image transfer tasks, state-of-the-art results are already quite high, all above 92%, thus a 1~3% of accuracy increase should be considered as significant. Our method has improved on transfer M->U by 2.6% and on transfer S->M by 3.8% compared to the second best. Taking the results in context, it is not fair to consider these improvements as marginal. For VisDA-2017 dataset, our method improved from the second best by 1%, having a accuracy results of 81.4%. And it is 1.4% lower than [7], which won the first place in the VisDA-2017 competition. Thus, our improvement of 1% in this task should not be considered as marginal either.\\n\\nFor some further questions, you mentioned \\\"Why the authors don't use the estimated covariance matrix to measure the distance in eq.5?\\\" Yes, we have tried that to come up with a correlation distance similar as Deep Coral [5], however it does not perform well, so we do not report it in the paper.\\n\\nFor your question \\\"The paper should show the sensitivity of ways to determine the weights. What happens if values of 0.1 and 0.9 are changed in (pi-0.1)/0.9 on page 6\\\". The value 0.1, and 0.9 are set based on probability, because we have 10 classes for the digits image transfer, so a random prediction would have probability 0.1. If we directly use the probability based weighting, then a random prediction would also contribute to the training, we do not want that to happen, so we weight with (pi-0.1)/0.9, thus random predictions will have weight 0 and the perfect prediction has weight 1. So if the task has n classes, our method will weight the samples by (pi-1/n)/(1-1/n). There is really no reason why we want to tune these two parameters, as we are weighting the data points by probability.\\n\\nAs suggested by one of the reviewers, we have provided further sensitivity analysis of our method on the confidence threshold. Our default confidence threshold value is set to be 0.8. And we have experiment it with some other confidence values, 0.6, 0.7 and 0.9. The experiment results are on the revised paper. Please check the results in Figure 4 in the Appendix. The results show our method is also robust against confidence threshold value. \\n\\nAs also suggested by one of the reviewers, we have provided further experiment on the more challenging Office-Home dataset. Our method performs the best in all transfer tasks than state-of-the-art UDA methods. The results are in our revised paper (Table 3 in the Appendix), you are welcomed to check that out.\\n\\nWe argue that our work has been severely undervalued by the reviewers. If you think our argument is invalid in some aspects, please indicate. Thanks.\\n\\n\\n[1] Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, and Tao Mei. Transferrable proto-typical networks for unsupervised domain adaptation. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2239\\u20132247, 2019.\\n[2] Weitao Wan, Yuanyi Zhong, Tianpeng Li, and Jiansheng Chen. Rethinking feature distributionfor loss functions in image classification.2018 IEEE/CVF Conference on Computer Vision andPattern Recognition, pp. 9117\\u20139126, 2018.\\n[3] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. InAdvances in Neural Information Processing Systems, pp. 4077\\u20134087, 2017.\\n[4] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. 
Learning transferable featureswith deep adaptation networks.arXiv preprint arXiv:1502.02791, 2015.\\n[5] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. InEuropean Conference on Computer Vision, pp. 443\\u2013450. Springer, 2016.\\n[6] Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschl \\u0308ager, and SusanneSaminger-Platz. Central moment discrepancy (cmd) for domain-invariant representation learn-ing.arXiv preprint arXiv:1702.08811, 2017.\\n[7] Geoffrey French, Michal Mackiewicz, and Mark H. Fisher. Self-ensembling for visual domainadaptation. InICLR, 2018.\"}",
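The weighting rule defended in this response is simple enough to state in a few lines of code. Below is a minimal PyTorch sketch assuming softmax outputs and the 0.8 confidence threshold mentioned by the authors; the function name is illustrative, not from the paper's code.

```python
import torch

def pseudo_label_weights(probs, threshold=0.8):
    # probs: [N, C] softmax outputs on target data; C = number of classes.
    num_classes = probs.size(1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold  # default confidence threshold of 0.8
    # Rescale confidences so a random prediction (p = 1/C) gets weight 0
    # and a perfectly confident one gets weight 1: (p - 1/C) / (1 - 1/C).
    weights = (conf - 1.0 / num_classes) / (1.0 - 1.0 / num_classes)
    return labels[keep], weights[keep]
```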
"{\"title\": \"Response to Reviewer #1 Part 1\", \"comment\": \"Thanks for reviewing our paper. Here are some points that are not fair to our work based on the weaknesses you have mentioned and we want to argue about them:\\n\\n1). There is some connection between our work and the work in [1]. As we have stated in the Related Works section, in [2], \\\"learning PN is equivalent to performing mixture density estimation on the deep features with an exponential density\\\". Thus, modeling the feature distribution as Gaussian Mixture, which is a type of exponential density, is equivalent to learn a Prototypical Network. This statement induces some connection between our work and the work in [1]. However, this equivalence is only true when learning a model in a single domain. Our work is way different from Pan et al.'s work in the setting of domain adaptation. \\n\\nFirst, we are based on different ideas. While Pan et al. propose a novel idea to remold PN for domain adaptation, as stated in their paper, our work is based on the idea that almost all existing domain adaptation methods are minimizing the feature distribution discrepancy for effective knowledge transfer from source domain to target domain, but none of them explicitly models the feature distribution though intuitively it facilitates the measuring of distribution discrepancy. \\n\\nSecond, the two works propose different distribution discrepancy loss functions. While Pan et al. propose multi-granular distribution discrepancy loss functions at both class-level and sample-level, our work proposes two new distribution discrepancy loss functions based on probability, one is Gaussian Component Mean Matching (GCMM) and one is Pseudo Distribution Matching (PDM). These two discrepancy loss functions work at different aspects and complement each other, where GCMM brings the two distributions closer, while PDM shapes the two distributions alike. One central component of a domain adaptation method is the distribution discrepancy loss function, as most domain adaptation methods follow a similar framework to minimize the distribution discrepancy loss function together with a classification loss function for knowledge transfer. Thus, you may think our work is very similar to Pan et al.'s. This is because almost all domain adaptation methods follow this similar framework. We do not agree with the claim that \\\"the primary difference between our work and Pan et al.'s is a loss term incentivizing a Gaussian mixture distribution over features.\\\" Due to the central role distribution discrepancy loss functions play in a domain adaptation method, devising new distribution discrepancy loss functions is an active research area in domain adaptation [3,4,5]. Please do not ignore the two novel discrepancy loss functions we propose based on our feature distribution modeling. \\n\\nThird, in terms of training algorithm, our method learns the distribution parameters automatically, while the Pan et al.'s work needs to manually calculate the prototypes for assigning pseudo labels. \\n\\nFinally, our proposed method fills the important gap in the area of domain adaptation, and to the best of our knowledge, no existing UDA methods have tried to model the feature distribution for domain adaptation.\\n\\n2). For the digits image transfer tasks, state-of-the-art results are already quite high, all above 92%, thus a 1~3% of accuracy increase should be considered as significant. There is not much room for a new method to make a huge improvement. 
If we treat the \\\"Train-on-target\\\" accuracy as the upper bound, the difference of accuracy between the second best results and the upper bound is quite limited, being 5.2%, 2.6%, 6.3% for the transfer M->U, U->M, S->M respectively. For VisDA-2017 dataset, our method improved from the second best by 1%, having a accuracy results of 81.4%. \\nAnd it is 1.4% lower than [6], which won the first place in the VisDA-2017 competition. Thus, although this improvement is not huge, but it should not be considered as marginal.\"}",
"{\"title\": \"Response to Reviewer #1 Part 2\", \"comment\": \"3). For your advice of adding more explicit comparison between our work and [1,7], we have added some comparisons in the paper, hopefully it will make the paper clearer.\\n\\nAs suggested by one of the reviewers, we have provided further sensitivity analysis of our method on the confidence threshold. Our default confidence threshold value is set to be 0.8. And we have experimented it with some other confidence values, 0.6, 0.7 and 0.9. The experiment results are on the revised paper. Please check the results in Figure 4 in the Appendix. The results show our method is also robust against confidence threshold value. \\n\\nAs also suggested by one of the reviewers, we have provided further experiment on the more challenging Office-Home dataset. Our method performs the best in all transfer tasks than state-of-the-art UDA methods. The results are in our revised paper (Table 3 in the Appendix), you are welcomed to check that out.\\n\\nFinally, after our clarifications, we hope you have a better understanding of our work and give a more fair grade to our work. Thanks.\\n\\n[1] Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, and Tao Mei. Transferrable proto-typical networks for unsupervised domain adaptation. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2239\\u20132247, 2019.\\n[2] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. InAdvances in Neural Information Processing Systems, pp. 4077\\u20134087, 2017.\\n[3] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. Learning transferable featureswith deep adaptation networks.arXiv preprint arXiv:1502.02791, 2015.\\n[4] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. InEuropean Conference on Computer Vision, pp. 443\\u2013450. Springer, 2016.\\n[5] Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschl \\u0308ager, and SusanneSaminger-Platz. Central moment discrepancy (cmd) for domain-invariant representation learn-ing.arXiv preprint arXiv:1702.08811, 2017.\\n[6] Geoffrey French, Michal Mackiewicz, and Mark H. Fisher. Self-ensembling for visual domainadaptation. InICLR, 2018.\\n[7] Weitao Wan, Yuanyi Zhong, Tianpeng Li, and Jiansheng Chen. Rethinking feature distributionfor loss functions in image classification.2018 IEEE/CVF Conference on Computer Vision andPattern Recognition, pp. 9117\\u20139126, 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces Distribution Matching Prototypical Network (DMPN) for Unsupervised Domain Adaptation (UDA). The proposed method explicitly models the feature distribution as a Gaussian mixture model in both source and target domains. Then the method aligns the target distribution with the source distribution by minimizing losses, which are called Gaussian Component Mean Matching (GCMM) and Pseudo Distribution Matching (PDM).\\n\\nThis paper should be rejected because (1) the novelty of the main idea is marginal, and (2) the performance gain over the baseline methods is also marginal.\\n\\nPan et al. already proposed the idea of transferring the knowledge from the source to the target using the prototype of each class. It is required to explain why explicit modeling performs better than implicit modeling of prototypes by theory or practice.\\n\\nIn table 2, the proposed method seems better than TPN, but in the appendix, by comparing then in each category, the proposed method wins six categories, whereas TPN also wins six categories. Therefore, it is hard to say the proposed DMPN is more effective than another method.\\n\\nEach prototype is modeled using a mean and a covariance matrix. Why the authors don't use the estimated covariance matrix to measure the distance in eq.5?\\n\\nBecause the proposed method uses pseudo-labeling for the target domain, it seems that the weights to determine unreliable examples are crucial. The paper should show the sensitivity of ways to determine the weights. What happens if values of 0.1 and 0.9 are changed in (pi-0.1)/0.9 on page 6?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper develops a new method for adapting models trained on labeled data from some source domain to unlabeled data in a target domain. The authors accomplish this by adapting a technique from [1] and [2] enforcing that the deep features learned during training approximately follow a Gaussian mixture distribution. With the learned features in this form, the authors ensure domain adaptation by minimizing the discrepancy between the distributions arising from the source and target datasets.\", \"strengths\": [\"The paper's experiments show an improvement in the model's performance relative to past work, utilizing a large number of comparison models.\", \"The use of explicit distributional information within the learned representations seems like a good fit for the task at hand, and the authors' experiments back this up.\"], \"weaknesses\": \"- The proposed method for unsupervised domain adaptation is very similar to the prototypical networks approach in [3], with the primary difference being a loss term incentivizing a Gaussian mixture distribution over features.\\n - While the authors achieve improved performance over [3], the gains in classification accuracy on the target dataset aren't especially huge (~1-3%).\\n - The paper is a bit hard to follow, and would be improved by giving a more explicit comparison of the methods used here to past work, especially [1] and [3].\\n\\n\\n[1] Weitao Wan, Yuanyi Zhong, Tianpeng Li, and Jiansheng Chen. Rethinking feature distribution for loss functions in image classification. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9117\\u20139126, 2018.\\n\\n[2] Hong-Ming Yang, Xu-Yao Zhang, Fangying Yin, and Chenglin Liu. Robust classification with convolutional prototype learning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3474\\u20133482, 2018.\\n\\n[3] Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, and Tao Mei. Transferrable prototypical networks for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2239\\u20132247, 2019.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"<Paper summary>\\nThe authors proposed Distribution Matching Prototypical Network (DMPN) for unsupervised domain adaptation. DMPN extracts features from the input data and models them as Gaussian mixture distributions. By explicitly modeling the distributions that the features follow, the discrepancy between the distribution of source data and that of target data can be easily evaluated. DMPN is trained by jointly minimizing two kinds of loss, which are classification loss on the source data and domain discrepancy loss that is calculated via the explicit models. Experimental results on two popular benchmark datasets validate the advantage of DMPN over other state-of-the-art methods. \\n\\n<Review summary>\\nThe proposed method seems simple but empirically performs well. The paper is well written and easy to follow, so we can maybe easily implement it. However, I have several concerns mainly about the details and theories of the proposed method, which makes my score a bit lower than the border line. Given clarifications in an author response, I would be willing to increase the score.\\n\\n<Details>\\n* Strength\\n + The motivation of using ProtoNet for domain adaptation seems reasonable.\\n + The proposed method performs well in the experiments.\\n + The paper, especially the experiment section, is well written and easy to follow.\\n\\n\\n* Weakness and concerns\\n - Several points on the proposed loss (GCMM and PDM) are not sufficiently discussed.\\n -- Why do we need two kinds of loss? These losses seem to play almost same role. Since PDM loss corresponds to target-side log likelihood regularization term (Eq. (3)), I wonder if we really need GCMM loss. \\n -- Since the authors explicitly model the feature distributions by Gaussian mixtures (GMs), it might be possible to calculate a standard divergence between source and target data distributions by using the parameters of GMs. Compared with such a straightforward approach, the proposed method seems to be ad-hoc and is not theoretically validated. What term of divergence (or distance) does it minimize?\\n -- When a certain class does not appear in pseudo-labeled target data, how can we calculate GCMM loss? (specifically, \\\\mu^{et}_c)\\n -- Are Eq. (3) and Eq. (6) correct? These are defined as total loss, not average, over each domain. It means that the scale of the coefficients for these terms changes according to the number of training data, but the sensitivity analysis in Fig. 2 does not show such effect.\\n -- Since the proposed losses heavily depend on the pseudo labels on the target data, it should be important to carefully set a proper threshold for the confidence. Is the proposed method sensitive against the change of this threshold? If so, how can we tune it?\\n -- How can we know p(c) in advance?\\n\\n - The theory shown in 3.5 is not sufficiently validated. \\n -- The authors state ````we minimize the first term through minimizing the domain discrepancy losses,\\\" but it is not sufficiently supported, because the relationship between the proposed losses and H-delta-H divergence is not clear. 
\\n\\n - I am concerned about whether the proposed method works well with harder datasets such as Office-Home dataset, because each class data are modeled by a simple Gaussian distribution in the proposed method. \\n\\n\\n* Minor concerns that do not have an impact on the score\\n - Using both f^s_i and F(x^s_i; \\\\theta) is confusing.\\n - Typo in Eq. (7): PMD -> PDM\"}"
]
} |
H1lQJ1HYwS | Deep amortized clustering | [
"Juho Lee",
"Yoonho Lee",
"Yee Whye Teh"
] | We propose a \textit{deep amortized clustering} (DAC), a neural architecture which learns to cluster datasets efficiently using a few forward passes. DAC implicitly learns what makes a cluster, how to group data points into clusters, and how to count the number of clusters in datasets. DAC is meta-learned using labelled datasets for training, a process distinct from traditional clustering algorithms which usually require hand-specified prior knowledge about cluster shapes/structures. We empirically show, on both synthetic and image data, that DAC can efficiently and accurately cluster new datasets coming from the same distribution used to generate training datasets. | [
"clustering",
"amortized inference",
"meta learning",
"deep learning"
] | Reject | https://openreview.net/pdf?id=H1lQJ1HYwS | https://openreview.net/forum?id=H1lQJ1HYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0WCcfVjiy9",
"S1gUbYntiB",
"H1gs9okQor",
"Ske6diyQsB",
"BklQVikXoS",
"Syx3iqyXor",
"HkersYSG9B",
"HygfUNzxcB",
"r1gwMlFp_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723940,
1573665022459,
1573219218529,
1573219189481,
1573219115108,
1573218979837,
1572129180620,
1571984457661,
1570766862663
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1462/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1462/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1462/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1462/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1462/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces a new clustering method, which builds upon the work introduced by Lee et al, 2019 - contextual information across different dataset samples is gathered with a transformer, and then used to predict the cluster label for a given sample. All reviewers agree the writing should be improved and clarified. The novelty is also on the low side, given the previous work by Lee et al. Experiments should be more convincing.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"My Concerns\", \"comment\": \"The fragmented clusters look really bad considering that these clusters themselves are so well separated but still authors argue that splitting these tight clusters should not be called failure. I do not agree with this claim.\\n\\nTraditional learning algorithm, of course, is all based on the assumptuion that raining and testing data have similar distribtuion. This, however, can be reasonably achieved;\\n\\nIn the proposed paper, it is much harder to practically guarantee that the training and testing data have the same clustering structure. Note that not only clustering structure similarity can be more difficult to handle than ``distribution similarity'', but in practice without clustering the testing data , how can you make sure the training data has a similar clustering structure? this is a chicken-and-egg problem which is not feasible\"}",
"{\"title\": \"Response to the review\", \"comment\": \"[Clusters only meaningful in global context] Please note that our network takes the entire set as an input for each step of filtering. Our system filters out a single cluster at a time, taking into account the topology of the whole dataset. Although we cannot describe what is going on under the hood of DAC (interpreting any neural network is a very challenging problem), our intuition is that its self-attention layers learn to identify clusters by comparing nearby data points while also taking global context into account. Rather than interpreting how our model extracts cluster information, we demonstrated through experiments that its learned strategy can successfully cluster previously unseen datasets in many different settings.\\n\\n[Anchor points] We agree with your point that bad anchor points may harm clustering, in the case of synthetic 2D data. Empirically, we found that the anchor point method is very effective in high-dimensional image datasets; in the paper, we apply anchored filtering only to image data. Our intuition for this is that since the anchor method selects a cluster for the model, it accelerates training in the early stages where the model must learn low-level features from scratch. Additionally, anchoring forces the network to not be biased towards always identifying a specific cluster (e.g. leftmost, darkest\\u2026).\\n\\n[Visualization] Please refer to our response to R2.\\n\\n[Are the learned clustering rules useful?] Think of the warped Gaussian datasets (sec 5.2) for instance. If we naively apply MoG in this case, the algorithm would completely fail because of the wrong assumption of the cluster shapes. On the other hand, DAC makes no assumption on the shapes of the clusters. DAC succeeded in learning the densities of clusters along with a strategy for assigning data points to those clusters, based on only the partition structures of the training datasets. I think this demonstrates that our argument is valid. \\nAs we clearly stated, our algorithm assumes that the datasets to be clustered are similar to the datasets seen during training. The model will fail If this assumption does not hold. However, please note that any machine learning algorithm based on empirical risk minimization would fail in this case. Adapting to change in dataset distribution is, in general, a very challenging open question and is beyond the scope of our paper.\"}",
"{\"title\": \"Response to the review\", \"comment\": \"[Novelty] We argue that our paper is not a mere application of the Set Transformer proposed in Lee et al., 2019. The core contribution of this paper is a framework for training a flexible and robust amortized clustering model. The Set Transformer is simply the architectural backbone we used to implement our framework, much like how one might use a standard ResNet backbone to verify a new idea in computer vision. We summarize our contribution as follows:\\n1. A new learning framework that decomposes the clustering problem into a sequence of filtering problems along with a learning objective to achieve this. This is completely different from the amortized clustering objective used in Lee et al., 2019, where they try to maximize the overall likelihood of datasets.\\n2. A way to apply the amortized clustering system beyond simple parametric families. We illustrated two examples (mixture of MAFs and mixture of neural statisticians) and demonstrated their effectiveness. In contrast, the clustering model of (Lee et al., 2019) is restricted to a mixture of Gaussians model and thus cannot be used on e.g. miniImagenet.\\n\\n[Parallel processing] The parallelization of our model is fairly simple, and our provided code is implemented in this way. Assume we are given D datasets, each with N images of size (H, W, C). The input to our DAC model would be a 5-dimensional tensor with shape [D, N, H, W, C]. After each step of filtering, we simply apply the appropriate filtering mask to each of the D datasets and feed the resulting 5-dimensional tensor through the network again for the next step. This whole process is easily parallelized on modern deep learning libraries (we used PyTorch). In contrast, the neural clustering process requires sequential sampling that iterates over datapoints, which cannot be parallelized. The number of forward passes required for each dataset is O(N) for the Neural Clustering Process (NCP) and O(K) for DAC. This allowed us to consider tasks with up to N=3000 2D points or N=100 images, while such large problems would be very time-consuming for NCP.\\n\\n[Fragmented clusters] While we agree that DAC did not find the \\u201cperfect\\u201d clustering on those examples, we think it is rather harsh to say the algorithm failed because of the fragmented cluster. Such fragmented clusters are commonly found in MoG-based clustering methods, due to the diagonal covariance assumption and the structure of the loss. We further point out that the DAC models used to generate Figs 1, 4 were trained on datasets with at most 4 clusters and 1000 data points, and were tested on a dataset with far more clusters and data points. Figs 1, 4 demonstrate that DAC is capable of generalizing to such drastically different datasets, which we attribute to our iterative filtering procedure. Results in Table 1 show that DAC was significantly better than previous methods in this particular task.\\n\\n[Uncertainty in cluster assignments] A simple way to accomplish this in DAC, which we considered in early experiments, is to use the assignment probability (instead of hard assignment masks) as input to the next filtering step. We excluded this from the manuscript as it did not increase clustering accuracy. As stated on page 8, we believe such uncertainty-aware amortized clustering is an important and interesting research topic, and we plan to investigate this problem in more detail in future work.\"}",
"{\"title\": \"Response to the review\", \"comment\": \"1. The main takeaway from our experiment is, 1) DAC trained in the way described in the paper can actually \\u201camortize\\u201d the clustering procedure so that it can accurately and efficiently cluster new datasets. 2) DAC can adapt to different numbers of clusters in datasets and can infer various cluster distributions. 3) Even though DAC is an amortized method, its clustering accuracy is comparable to or sometimes better than other clustering methods trained on datasets from scratch.\\n\\n2. The parameters to be learned are the parameters of the set-input neural networks (ISAB, PMA, MABs, \\u2026). We will make this clearer in the paper. The inference procedure after the training is the following. We simply feed the entire dataset into the network, which outputs \\\\theta and. \\\\theta and m are cluster parameters and mask, respectively (see eq 6). We repeat this procedure until either all data points are selected by a mask or m does not select any of the remaining points.\\n\\n3. Clusters are color-coded along with the density contour plots. In Figure 3, each datapoint is colored according to which cluster it was assigned to, and the contour plot depicts the density of each cluster. We appreciate the suggestion to separate the table into groups of methods.\\n\\n4. We showed in fig 1, 3 and Table 1 that DAC can generalize to datasets having different sizes and numbers of clusters. We are not sure whether DAC would generalize when the hyperparameters of cluster generating distribution changes, but such a setup violates the core assumption underlying our method, which is that the train- and test- datasets are generated through the same process.\\n\\n5. Thanks for your suggestion; we will think about a title that is proportional to the contribution of our method.\"}",
"{\"title\": \"Overall response to the reviews\", \"comment\": \"Overall, we acknowledge that our explanation of our method along with our choice of notation could confuse readers, especially those who are not familiar with this line of research. We will improve the overall presentation of our manuscript to be more clear and self-contained. Please refer to the individual comments below.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"[Overview]\\n\\nIn this paper, the authors proposed a new clustering method called deep amortized clustering (DAC). Inspired by Lee et al 2019, the authors exploited a transformer to gather the contextual information across different dataset points and then predict the cluster label for each data point. The main difference from Lee et al is that the proposed DAC sequentially estimate the cluster labels for the data points and thus more flexible to estimate the number of clusters in the whole dataset. Based on the proposed DAC method, the authors evaluated the performance on both unsupervised clustering and supervised clustering tasks. it turns out the proposed method has achieved better or comparable performance to previous work on various datasets while hold less computational cost. \\n\\n[Pros]\\n\\n1. In this paper, the authors extended the clustering method in Lee et al to a new method called Deep Amortized Clustering (DAC) for data clustering. In this new method, the number of clusters can be unknown at the beginning and the model itself will sequentially cluster the data points into different groups until are data points have been assigned to some clusters. This is an interesting method in that it does not need to specify the number of clusters at the beginning, and thus become more flexible.\\n\\n2. To achieve the DAC, the authors proposed two losses, one is Minimum Loss Filtering (MLF) and one is Anchor Filtering to cope with either multi-gaussian-distributed data points or even harder datasets. Meanwhile, the authors also proposed to estimate the density P(x; \\\\theta) in the case that the distribution is not knowing in prior.\\n\\n3. The authors evaluated the proposed method on both synthetic dataset and realistic dataset. On the synthetic dataset, the proposed method is compared with VBDPM and ACT-ST, two methods that can cope with dataset with unknown number of clusters. On the realistic datasets, the authors evaluated on the EMNIST which is of non-MoG distribution. Besides, the authors further evaluated the method on MiniImageNet features and Omniglot dataset, and showcased comparable performance to previous methods but much shorter running time.\\n\\n[Cons]\\n\\n1. Overall, the paper is poorly written and organized. First, the notations in the method section are hard to follow. There are a number of notations which are all capital characters, either representing a function or a method. Second, the whole training process and inference process of the proposed method is not clear to me. How the model is trained on the training set, what are the learnable parameters in the proposed model and what are the settings for the hyper-parameters, etc. Third, it is hard to get the takeaway messages from the experiment sections. The experimental settings for each subsection are not very clearly explained, and the analysis on the experimental results are also vague.\\n\\n2. In the method, the authors proposed Minimum Loss Filtering (MLF) for clustering with the loss function in Eq(5). During training, the authors use some training data with ground-truth labels to optimize the loss function. However, it is not clear which parameters will be learned in the optimization. 
Also, after the training, what the exact inference procedure should be is also not clear to me. Overall, it is really hard to me to follow this section on the filtering process. The authors should definitely describe the process more clearly.\\n\\n3. The experimental results shown in the paper are hard to interpret. First, the setting for each experiment is not clear to me. In Figure 3, it is hard to understand the figures clearly. In table 2 and table 3, some of the clustering methods are deterministic, such as K-means, Spectral clustering, DEC. However, some other clustering methods are learning to clustering methods, such as KCL and MCL. Putting all of the numbers in the same table is confusing and make it hard to compare. I would suggest the authors make a clear distinction between different methods: 1) deep clustering methods which directly cluster on top of the test set; 2) learning to clustering method which learn some parameters on training set and then generalize to test set; 3) amortized clustering, which also learn some parameters on training and then test on the test set with just one forward pass. Splitting the testing results into three group will be helpful to the readers to understand the paper and the proposed method.\\n\\n4. Another missed part in the model is the ablation study. How sensitive the model is to different training set, e.g., different training dataset size, different number of training clusters, and different hyper-parameters, etc. Without these information, it is hard to know how well robust the proposed DAC method can generalize.\\n \\n5. Finally, the proposed method was built upon Lee at al 2019, to extend the previous method to a sequential clustering problem. I think a title \\\"deep amortized clustering\\\" is a bit misleading and exaggerated on the proposed method.\\n\\n[Summary]\\n\\nIn this paper, the authors proposed a new method called Deep Amortized Clustering (DAC) for amortized clustering. Unlike the previous work Lee et al, the proposed DAC sequentially filter the data points from the whole set and construct the clusters gradually. This is a meaningful method in that it can be applied to those data without explicit number of clusters. However, the presentation of the method and experiments make it hard to follow, and thus hard to capture the contributions of the proposed method, and also its capacity. As also mentioned above, I would highly suggest the authors revise the paper so that it can present better the method and the experimental section.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe paper presents an amortized clustering method, called DAC, which is a neural architecture that allows efficient data clustering using a few forward passes. The proposed method is essentially based on the idea behind set-input neural networks [1], which consists of modeling the interaction between instances within a given dataset. Compared with the previous work [1], the main difference is that DAC does not need to specify the number of clusters, as in the case of Bayesian nonparametrics, making it more flexible for clustering complex datasets. It is empirically shown that DAC can efficiently and accurately cluster new datasets coming from the same distribution for both synthetic and image data.\", \"strengths\": \"Overall, I think the paper is well written and the relationship to previous works is well described. The empirical results seem promising, especially in terms of computational efficiency. The authors conduct some experiments on relatively large datasets, such as miniImageNet and tiereImageNet, which is indeed crucial for the practical applications of the proposed model.\", \"weaknesses\": [\"I think this is a good paper, but my major concern is the limited theoretical contribution, given the fact that this work is mainly based on set-input neural networks introduced in Ref. [1]. I would like the authors to clarify a bit more the novelty of the paper.\", \"The authors claim that DAC can process data points in parallel while Ref. [2] uses a sequential sampling procedure. However, there does not seem to be sufficient details on how to parallelize the proposed algorithm.\", \"As shown in Figs. 1 and 4, it seems that some clusters are split into two or three fragments. I think this simply means the failure of the proposed method on synthetic data.\", \"As also mentioned in the discussion on page 8, it would be important to consider uncertainties in cluster assignments, as already done in Ref. [2]. I would recommend the authors to provide some insight on how to take the cluster assignment uncertainty into account within current model.\", \"At the moment, I recommend a weak reject as the technical contribution of the paper seems rather limited, but I could be open to increasing my score if my concerns are addressed.\"], \"references\": \"[1] J. Lee, Y. Lee, J. Kim, A. R. Kosiorek, S. Choi, and Y. W. Teh. Set transformer: a framework for attention-based permutation-invariant neural networks. In Proceedings of International Conference on Machine Learning, 2019.\\n[2] A. Pakman, Y. Wang, C. Mitelut, J. Lee, and L. Paninski. Discrete neural processes. ArXiv:1901.00409, 2019.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed a deep amortized clustering framework which learns to cluster data efficiently based on the combination of set transformer and amortized clustering. The main motivation is to learn clustering rules from labeled data sets and generalize to new data sets, so as to avoid manually defining clustering criterion.\\n\\nNot an expert in this domain, I feel that the paper is not easy to read and many intuitions readers might be interested in are not explained very well. For example, the authors mentioned that the method proceeds sequentially, and each step identifies one cluster (the easiest cluster). However, in practical situations clusters might be meaningful only when considered in a global context, and / or under a certain scale, and more discussions are needed on how the proposed method achieves these goals. Also, how the set transformer extracts information useful for clustering is very unclear and needs more elaborations. \\n\\nThe authors used anchor points in harder problems, where the anchor points are uniformly sampled from the input data. One concern is that random sampling may lead to fluctuations in the learning process as well as very close anchor points which can be harmful for clustering. \\n\\nThe visualization of identified clusters seems a bit misleading. Some very compact clusters seem to be split into halfs (or with fragments of different colors) and does this indicate failed clustering on these simple data sets?\\n\\nFinally, whether useful rules can be learned for clustering from labeled data is still quite open and authors may want to give some convincing examples of such rules'' for which existing clustering criterion will fail but with learned rules it can be resolved. It looks to be that the result has to do with the clustering structures of the labeled data and how can one be sure that the training data have a similar clustering structure with the to-be-clustered-data? Without answering this basic concerns, the proposed method may be hard to be accepted.\"}"
]
} |
HygXkJHtvB | Using Objective Bayesian Methods to Determine the Optimal Degree of Curvature within the Loss Landscape | [
"Devon Jarvis",
"Richard Klein",
"Benjamin Rosman"
] | The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator for the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system and in general the trend of the model towards wide areas in the landscape reflects the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior, we instead propose a different determinant of the optimal width within the parameter landscape, one determined solely by the curvature of the landscape. In doing so, we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data. | [
"Objective Bayes",
"Information Geometry",
"Artificial Neural Networks"
] | Reject | https://openreview.net/pdf?id=HygXkJHtvB | https://openreview.net/forum?id=HygXkJHtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"WqbKpPMPUf",
"Byl-b3VFir",
"SkgtRj4KjS",
"rylBzFNKoH",
"Byl2A8VtjH",
"SJgEDU4YoB",
"B1xXYr4YjH",
"SJgulr4KsH",
"HJx3a8J0FB",
"HJlP-ZkhFr",
"SJxwSVtcKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723911,
1573633017410,
1573632976808,
1573632269298,
1573631699910,
1573631579565,
1573631354793,
1573631215533,
1571841732268,
1571709182753,
1571619903121
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1461/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1461/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1461/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There has been significant discussion in the literature on the effect of the properties of the curvature of minima on generalization in deep learning. This paper aims to shed some light on that discussion through the lens of theoretical analysis and the use of a Bayesian Jeffrey's prior. It seems clear that the reviewers appreciated the work and found the analysis insightful. However, a major issue cited by the reviewers is a lack of compelling empirical evidence that the claims of the paper are true. The authors run experiments on very small networks and reviewers felt that the results of these experiments were unlikely to extrapolate to large scale modern models and problems. One reviewer was concerned about the quality of the exposition in terms of the writing and language and care in terminology. Unfortunately, this paper falls below the bar for acceptance, but it seems likely that stronger empirical results and a careful treatment of the writing would make this a much stronger paper for future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 2 of 2)\", \"comment\": \"Q8) On the other hand, as the authors used the spectrum properties of the Fisher information matrix, there are some recent works by Amari which can be cited.\\n\\nA8) Based on your suggestion, we found the paper \\\"Pathological spectra of the Fisher information metric and its variants in deep neural networks\\\" by Karakida, Akaho and Amari [2]. This is very exciting work which we will now include in this paper. Thank you for this helpful recommendation. Unfortunately this paper was only uploaded to ArXiv on the 14th October 2019 (after this conference's deadline) and as a result we could not include it in our original submission. None the less, this is a welcome opportunity to further contextualize our work in this paper.\\n\\n[1] Zhang, Yao, et al. \\\"Energy\\u2013entropy competition and the effectiveness of stochastic gradient descent in machine learning.\\\" Molecular Physics 116.21-22 (2018): 3214-3223.\\n[2] Karakida, Ryo, Shotaro Akaho, and Shun-ichi Amari. \\\"Pathological spectra of the Fisher information metric and its variants in deep neural networks.\\\" arXiv preprint arXiv:1910.05992 (2019).\"}",
"{\"title\": \"Response to Official Blind Review #1 (Part 1 of 2)\", \"comment\": \"We thank the reviewer for their constructive feedback.\\n\\nQ1) On the point of the poor writing quality.\\n\\nA1) We apologize for this and are working hard to improve the literary standard of the paper.\\n\\nQ2) In eq.(1), the authors equate the Fisher information matrix (which is an expected Hessian) to the Hessian matrix, this is subject to conditions which must be clearly given right before/after the equation.\\n\\nA2) Thank you for pointing this out. We will correct this error in the updated version.\\n\\nQ3) In the first equation in A.1, what is the subindex \\\"j\\\", \\\"Utilizing Laplace Approximation of the integral\\\": such approximations have conditions that must be clearly stated.\\\" and \\\"It is not clear how one can get the last approximation in page 12 from the previous equations.\\\"\\n\\nA3) The merits of the Laplace Approximation are discussed in [1]. We are adapting the discussion from [1] for the updated version of Appendix A. We are in the process of improving the general quality of Appendix A, including the discussion on the assumptions of the derivation and making the link between certain steps in the derivation more explicit. Thank you for the feedback and assistance in improving this Appendix.\\n\\nQ4) As a theoreiritical contirbution, the authors did not manage to converge to some simple and clear statements (theorems or equvalent).\\n\\nA4) We believe Reviewer #2 provides an excellent summary statement, and one which we have included in the paper. Namely, \\\"The authors provide theoretical arguments and claim that there exists an optimal width beyond which generalization can be poor\\\".\\n\\nQ5) It is hard to observe anything new, given the poor writing and organization.\\n\\nA5) We apologise if this was unclear and have made this clearer throughout the paper. In addition, we point to the last paragraph of the Introduction beginning at the bottom of Page 1 where we outline what we perceive to be our 3 main contributions. In summary:\\n1) We reflect that a correlation exists between energy and entropy as opposed to a competition or trade-off as was first presented in [1].\\n2) We reflect that an optimal level of curvature exists within the\\nlandscape which does not necessarily occur at the point in the landscape with the least curvature. We provide the novel perspective that the propensity of the model to find points of minimal curvature is a direct result of the model's propensity to overfit the training data.\\n3) We show that at the point in the landscape which corresponds to the Jeffreys prior the test error of the model reaches its minimum value and at this point the dimension of principal curvature of the model is at its maximum entropy. In doing so we also reflect the noise invariance of the dimension of principal curvature.\\n\\nQ6) The first 4 pages are mainly introductions of previous works.\\n\\nA6) We acknowledge that our work does rely heavily on past work and provides a detailed exposition of these past works, however, we view this as being necessary as we utilize a number of different field in this work. Namely Objective Bayes statistic, Information Theory, Differential Geometry and Machine Learning. We believe it necessary to not only provide sufficient background information for each field separately but also to illustrate the necessary overlap of the different concepts in these field for the full impact of this work to be seen. 
For example, a reader who is aware of the Jeffreys prior from an Objective Bayesian perspective may be unaware of its use as a right Haar measure in Differential Geometry. Thus, we aim to reflect the key fact that the Fisher Information, and as a result the Jeffreys Prior, is the commonality between the fields and guides our argument from the Objective Bayesian perspective of Section 3 to the Information Geometry perspective in Sections 4 and 5. We are, however, working at reducing the excess information in the paper, such as the overlap between the MDL property and Bias-Variance Dilemma, and restructuring aspects of our arguments to ensure that our contributions are clearer.\\n\\nQ7) The authors used information geometry and minimum description length to explain the generalization of deep learning. This is a small area. It is hard to miss closely related works by simple searching. Instead, the authors only cited Rissanen (1978).\\n\\nA7) Given the fact that the Minimum Description Length Principle can be equally phrased in light of the Bias-Variance trade-off which is also discussed in this work we see this as an opportunity to reduce the length of this work closer to 8 pages in line with the Conference standards and will, thus, rephrase our argument more in term of the Bias-Variance trade-off.\"}",
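For reference, the Laplace approximation invoked in Q3/A3 above has the following standard form, valid under the usual regularity conditions (a unique interior maximizer $\hat{\theta}$ of $f$ with negative-definite Hessian, and large $n$):

```latex
\int e^{n f(\theta)}\,\mathrm{d}\theta \;\approx\;
e^{n f(\hat{\theta})}\left(\frac{2\pi}{n}\right)^{d/2}
\det\!\bigl(-\nabla^2 f(\hat{\theta})\bigr)^{-1/2},
\qquad \hat{\theta} = \arg\max_{\theta} f(\theta),
```

where $d$ is the dimension of $\theta$; these are exactly the conditions the reviewer asks to be stated explicitly.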
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"We thank the reviewer for their constructive feedback.\\n\\nQ1) The authors should discuss the architecture design choices used for the synthetic data-generating model.\\n\\nA1) In light of the requests of Reviewer #3 we have updated our experimental procedure to be more general and utilize larger networks with more variance in their design. Please see the general comment on our updated experimental procedure above, titled \\\"General Comment on Updated Experimental Procedure\\\", as this was also raised by another reviewer. We will be more explicit about our design choices in the updated version of the paper and agree that this requires more discussion. Our aim, however, with the generating model was to create a complicated function for the training network to model. As a result we began with non-linear sigmoidal layers to create a complex function. The linear layer in the output on the generating model was then used to obtain the scalar output for the regression task. In our updated experimental procedure we utilize more general and larger generating networks. We also discuss this aspect in the general comment on our updated experimental procedure. \\n\\nQ2) Why are the last 3 layers of the larger model comprise of linear mappings?\\n\\nA1) This was merely to over-parametrize the model. Naturally any consecutive linear layers can equally be compressed into a single layer, however, the addition of more linear layers does increase the expressive power of the model and aids in overfitting. The impact of this design decision on the loss landscape is evident in the work on alpha-scaling [1] in which it is shown that by placing more weight on one layer of linear parameters while proportionally decreasing the weight on the following linear layer it is possible to move to an area in the landscape with different width but without changing the model behaviour. This is a direct result of the linear layers being over-parametrized and when parameter weight is spread over more linear layers wider landscapes will occur. In line with our work, we believe the wider areas to overfit more and, thus, the inclusion of more linear layers will help enforce that the training network overfits the training data.\\n\\nQ3) Fig 1 is not clear. What does n=23 signify in the caption?\\n\\nA3) n represents the number of datapoints, separate trainings run, in generating the figure. We will include this in the caption.\\n\\nQ4) More discussion is needed to describe \\\"intersection of the likelihood values\\\", \\\"Difference in update Step\\\" and \\\"density is placed around 0\\\" in section 5.\\n\\nA4) Thank you for pointing this out. We will expand on these points in the paper. We have elaborated on these points in another general comment above: \\\"General Comment on Previous Experimental Procedure\\\". In summary, however, we use the phrase \\\"intersection of the likelihood values\\\" to express the point at which the training network has the same error as the true data generating network on the noisy training data. We believe this to be the point at which the Jeffreys Prior parametrization is found in the loss landscape. To test our assertion that the Jeffreys Prior parametrization provides the optimal test performance we observe the number of parameter updates between where the Jeffreys Prior parametrization is found and where the minimum test error is found. We referred to this as the \\\"Difference in update step\\\". 
We then plot a histogram and kernel-density estimation (KDE) of the difference in update step from repeated trainings. We found a significant portion of the KDE was situated around the difference in update step of $0$ and said that \\\"the density is placed around 0\\\".\\n\\n[1] Dinh, Laurent, et al. \\\"Sharp minima can generalize for deep nets.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\"}",
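A sketch of how the \"difference in update step\" statistic and its KDE described in A4 could be computed; the step arrays here are hypothetical stand-ins for the per-run training logs, not the authors' data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-run logs: the update step at which the training network's
# likelihood first matches the true network's (the Jeffreys prior
# parametrization) and the update step at which the test error is minimal.
jeffreys_step = np.array([1200, 1350, 1180, 1420, 1290, 1260])
min_test_step = np.array([1210, 1300, 1185, 1500, 1275, 1255])

diff = min_test_step - jeffreys_step          # "difference in update step"
kde = gaussian_kde(diff)                      # kernel-density estimate
grid = np.linspace(diff.min() - 100, diff.max() + 100, 200)
density = kde(grid)                           # mass near 0 supports the claim
print(f"mean difference: {diff.mean():.1f} update steps")
```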
"{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"We thank the reviewer for their constructive feedback, and kind words.\\n\\nQ1) In Figure 1 values outside the optical cluster at 0.0 appear nonetheless. I am not sure how to judge the amount of spread I see, and what effect they have on the performance of the network.\\n\\nA1) The topic of Figure 1 and our experimental procedure was raised also by another reviewer and, thus, we answered this in a general comment: \\\"General Comment on Previous Experimental Procedure\\\". Please see this above, however, in summary this is due to the fact that networks will learn both true signal from the data as well as noise simultaneously while training. This distorts the results of our experiments when a significant degree is modelling early in training.\\n\\nQ2) In general I would like to see experiments on datasets and with architectures that are at least somewhat close to what people use in practice (at least in terms of the size of the task and the capacity of the net).\\n\\nA2) We are updating our experimental procedure to be more general and include larger models. We have made a general comment describing the new procedure above, titled: \\\"General Comment on Updated Experimental Procedure\\\". Thank you for this suggestion. We would value any further feedback or suggestions you may have for the new procedure.\"}",
"{\"title\": \"General Comment on Updated Experimental Procedure\", \"comment\": \"We are grateful for the time taken by the reviewers in helping to improve the quality of this paper and our work. Further, we acknowledge that the experimental networks in the original submission were too small and as a result we are obtaining experimental results on larger network architectures for Figure 1, with more variety in depth, width and the activation function used. We will update Figure 1 with these new results in the updated version. The new procedure is as follows. We create a randomly generated True network with depth between $5$ and $15$ layers. The widths of the model layers are randomly sampled between $5$ and $100$ neurons. The layers are sorted in descending order of width (so we have no encoder-type layers). We then prepend the $100$ neuron input layer and append the $1$ neuron output layer. At the moment all layers except the last are sigmoidal. This is the model used to *generate* data. We then randomly initialize our training network. The number of layers in this network is randomly chosen from the range of $[True Network Size+5, 25]$ to ensure we obtain a sufficiently large network to overfit the data. The widths of this network's layers are sampled from the range of $[True Networks Smallest Layer, 100]$. This is again to ensure the model is over-parametrized. The True networks parameter values as well as the Training networks initial values are sampled uniformly from $[-1.0, 1.0]$ with a random positional bias added to the True network parameters in the range of $[-0.5, 0.5]$. This bias is to ensure the Training network starts with a significant degree of error. Finally, we utilize randomly sampled values between $[0.0, 1.0]$ as input to the models, with a training batch size of $50$ datapoints and a test batch size of $500$. This data is input to the True network and we obtain the corresponding data labels as output. Lastly we add Gaussian noise to the Training data only (while the Test data remains clean) with a mean of $0$ and variance of $0.2$. The Training network is then trained to model the True network using this data and we observe the points where their likelihoods are equal and where the test error is minimized.\"}",
"{\"title\": \"General Comment on Previous Experimental Procedure (Part 2 of 2)\", \"comment\": \"We are in the process of running experiments on considerably larger networks, as requested by both Reviewer #2 and Reviewer #3, the details of which have been posted in another general comment below. While our new results are consistent with those provided in the original work, it appears the problems highlighted above were as a result of the training model being under-parametrized to fully learn the true signal and the noise. Thus, a trade-off occurred. The deeper models appear to be giving better results. The use of synthetic data is, however, still necessary for our experimental procedure. As stated above, we need to determine the point where the model variance is equal to the true distribution variance on training data (point where the likelihoods are equal). In all real-world datasets such ground truth information is not obtainable. Thus, the synthetic data afforded us the ability to precisely determine where the intersection of the likelihoods occurred, as well as the point where a minimum was reach in the error on the very large test set.\", \"figure_2_and_figure_3\": \"These experiments relying on the calculation of the Hessian matrix, however, are only computationally feasible (at least by our hardware constraints) on the smaller network. This is due to the Hessian of the network being an exceptionally large matrix for even a small network. Naturally there are techniques which aim to mitigate the size of the Hessian. These are, however, approximations and we believe the trade-off of using a smaller network to calculate the full and precise Hessian for our results as being worth-while.\"}",
"{\"title\": \"General Comment on Previous Experimental Procedure (Part 1 of 2)\", \"comment\": \"We are grateful for the time taken by the reviewers in helping to improve the quality of this paper and our work. The most common concern raised was that our experimental procedure was not sufficiently exhaustive to provide a compelling case for our theoretical arguments. We agree with these comments and in some cases the concerns raised with our experimental procedure are necessary and we aim to clarify their necessity in this general comment. We do, however, acknowledge that a more general and realistic experimental procedure was required. We have, thus, updated our experimental procedure and discuss this in the second general comment below.\", \"figure_1\": \"The most common concerns appear to be as a result of Figure 1. Thus, we will provide a brief summary of this Figure and clarify some terminology, such as \\\"the intersection of the likelihoods\\\", which were raised by Reviewer #2. The purpose of Figure 1 is to reflect the number of training steps between where the network achieves its minimum test set error and where the Maximum A Posteriori parametrization for the network using the Jeffreys prior is found (for brevity we will call this the Jeffreys prior parametrization). From the discussion on Page 8, and in particular using Equation 10, we reflect that the Jeffreys prior parametrization will result in the likelihood of the training neural network generating the training data and the likelihood of the true data distribution generating the training data being equal. In other words the Jeffreys prior results in a model with the same training error or variance as the true data distribution.\\n\\nThere are, however, two means by which the training network may reduce its error and increase its likelihood. Firstly it can model the true signal in the data, and secondly it can model the noise in the data. In reality, while training, the model will learn both noise and signal simultaneously. Conceptually if the model were to learn pure signal only it would model the true distribution identically, achieve a test error of $0$ (as no noise was added to this data) and obtain a training error equal to that of the true distribution. The model would then proceed to reduce training error by modelling the only information remaining, namely the noise and we would then see an increase in the test error. Unfortunately a normal training procedure is not as separable and as a result the network will model noise before learning all of the true signal in the data. Depending on the relative quantity of noise to signal being modeled two cases will occur. Case 1: The quantity of noise learned will dominate the true signal learned. In this case we will see the test error reach its minimum prior to the likelihoods intersecting. Case 2: The degree of true signal being learned dominates the degree of noise being learned. In this case the test error will continue decreasing slowly and, since true signal remains to be learned when the likelihoods intersect, the test error will be minimized after the likelihoods intersect. In summary, the Jeffreys prior parametrization equates the model and true distribution likelihoods. The fact that modelling the noise also increases the model likelihood distorts the point where the minimum test error is found. 
In cases where high quantities of noise are learned early in the training we observe the points outside the optical cluster at 0.0 (as well as an observably higher test error) as observed by Reviewer #2. We believe the symmetry of the density estimation of Figure 1 to be the main highlighting factor reflecting the fact that equating the likelihoods and using the Jeffreys prior parametrization results in minimum test error. In essence the difficulty of separating noise from true signal in data is the problem of overfitting, and one we aim to address in future work. We do, however, believe a contribution of this work to be the novel theoretical placement of where the minimum test error can be obtained in the loss landscape.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper conjectures that the so called Jeffreys prior over the parameters of a neural network is the prior leading to the best generalization performance. The authors test this conjecture on an artificial task using a small neural network, and investigate the sensitivity of the results to noise.\\n\\nI like the general idea of the paper and appreciate the very detailed exposition placing it in the context of other works. In particular, I enjoyed the summary showing the sometimes conflicting evidence for better generalization in either broader or sharper minima, and how it relates to the Jeffreys prior.\\n\\nHowever, as I understood the paper, the main claim in page 5 Equation 4 \\u201cThus we conjecture that a correct prior for a model would be:\\u201d is an *assertion* that Jeffreys prior is the correct prior to use over the parameter space of neural networks. While it is a possibility, the amount of empirical evidence presented does not (at least to me) provide strong enough justification.\\n\\nOn page 7, you say \\u201cThis model was a neural network composed of one, 5 neuron, hidden layer which utilized a sigmoid activation function in its hidden layer and a linear activation in its scalar output layer.\\u201c, describing your experiment. I don't think this experiment is sufficiently large to convince me.\\n\\nFurthermore, in Figure 1 values outside the optical cluster at 0.0 appear nonetheless. I am not sure how to judge the amount of spread I see, and what effect they have on the performance of the network.\\n\\nIn general I would like to see experiments on datasets and with architectures that are at least somewhat close to what people use in practice (at least in terms of the size of the task and the capacity of the net). That would give me more confidence that your conjecture is true. While I appreciate your detailed theoretical exposition, I think the amount of empirical evidence you provide is insufficient to back the claims. Considering the explicit instruction to judge papers exceeding 8 pages with a higher standard, I believe that the lack of a greater amount of empirical evidence is a significant deficiency of your otherwise very interesting work.\\n\\nI encourage you to expand this paper and resubmit to another venue -- I believe it has a great potential.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper argues that the widest minimum in the loss landscape is not the best in terms of generalization. The authors provide theoretical arguments and claim that there exists an optimal width beyond which generalization can be poor. Synthetic simulations are presented to support these claims.\\n\\nThe authors employ Fisher Information to characterize the optimal width or the curvature around the minimum. The fact that the determinant of the Fisher Information Matrix is invariant to parametrization, under certain conditions, serves as the motivation to design an objective Bayesian prior called Jeffrey's prior. \\n\\nThe motivation and the theoretical arguments are interesting, but the paper lacks in presentation and sufficient empirical evidence is also lacking to get fully convinced by the claims. \\n\\nThe authors should discuss the architecture design choices used for the synthetic data-generating model. Why are the last 3 layers of the larger model comprise of linear mappings?\\n\\nFig 1 is not clear. What does n=23 signify in the caption? More discussion is needed to describe \\\"intersection of the likelihood values\\\", \\\"Difference in update Step\\\" and \\\"density is placed around 0\\\" in section 5.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper targets at a deep learning theory contribution based on information geometry. This contribution is tightly based on Zhang et al. (2018) and explains the generalization of deep learning from a Bayesian perspective. The main contribution the authors claimed is an optimal degree of curvature exist which gives the best generalization guarantees, which is in contrast to the commonly perceived \\\"the wider the better\\\".\\n\\nFirst of all, the writing (including language etc) is of poor quality, to the extent that the submission is very difficult to read and can be rejected merely based on this, with unusual expressions, missing punctuations, super long sentenses, and wongly used words. The reviewer won't list example here because they are everywhere.\\n\\nWhat is even worse is the conceptral errors and defected derivations. For example, in eq.(1), the authors equate the Fisher information matrix (which is an expected Hessian) to the Hessian matrix, this is subject to conditions which must be clearly given right before/after the equation. As their results are largely based on the correctness of eq.(2), let's examine the derivations in appendix A.1. In the first equation in A.1, what is the subindex \\\"j\\\"? \\\"Utilizing Laplace Approximation of the integral\\\": such approximations have conditions that must be clearly stated. It is not clear how one can get the last approximation in page 12 from the previous equations. In summary, their eq.(2) is a loose approximation which is subject to a set of conditions (that are not given), and the derivation is of poor quality.\\n\\nAs a theoreiritical contirbution, the authors did not manage to converge to some simple and clear statements (theorems or equvalent). Instead, the contribution is largely *explanatory*. It is hard to observe anything new, given the poor writing and organization. The first 4 pages are mainly introductions of previous works.\\n\\nThe authors used information geometry and minimum description length to explain the generalization of deep learning. This is a small area. It is hard to miss closely related works by simple searching. Instead, the authors only cited Rissanen (1978). On the other hand, as the authors used the spectrum properties of the Fisher information matrix, there are some recent works by Amari which can be cited.\"}"
]
} |
ByxGkySKwH | Towards neural networks that provably know when they don't know | [
"Alexander Meinke",
"Matthias Hein"
] | It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data. Thus, ReLU networks do not know when they don't know. However, this is a highly important property in safety-critical applications. In the context of out-of-distribution detection (OOD), there have been a number of proposals to mitigate this problem, but none of them provides any mathematical guarantees. In this paper we propose a new approach to OOD which overcomes both problems. Our approach can be used with ReLU networks and provides provably low confidence predictions far away from the training data as well as the first certificates for low confidence predictions in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting whereas our model can guarantee its performance while retaining state-of-the-art OOD performance. | [
"relu networks",
"towards neural networks",
"training data",
"low confidence predictions",
"predictions",
"important property",
"safety critical applications",
"context",
"detection",
"ood"
] | Accept (Poster) | https://openreview.net/pdf?id=ByxGkySKwH | https://openreview.net/forum?id=ByxGkySKwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Dtn5Jqm5F",
"ByxzUJ1siS",
"rJlSam-ciH",
"ryxadQZ5jH",
"H1gtSXbqiS",
"Sye38MQptH",
"BygIxqKcYr",
"BkxIutEqtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723880,
1573740361939,
1573684157115,
1573684085052,
1573684033498,
1571791443875,
1571621357563,
1571600750060
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1460/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1460/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1460/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1460/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1460/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper tackles the problem of confidence on neural network predictions for out-of-distribution (OOD) samples. The authors propose an approach for training neural networks such that the OOD prediction is uniform across classes. The approach requires samples from in- and out-of distribution and relies on a mixture of Gaussians for modelling the distributions, allowing to obtain theoretical guarantees on detecting OOD samples (unlike existing techniques).\\n\\nThe main concerns of the reviewers have been addressed during the rebuttal. If this approach does not outperform state-of-the-art in practice, providing such theoretical guarantees is an important contribution.\\n\\nAll reviewers agree that this paper should be accepted. I therefore recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to authors\", \"comment\": \"The authors have addressed my mainn concerns in full and I will update my score accordingly.\"}",
"{\"title\": \"Reply to Review #3\", \"comment\": \"We appreciate the helpful feedback from the reviewer and for pointing out relevant references. We now discuss these references in the related work in the introduction.\\n\\nThe suggested paper by Liu et al, \\u201cOpen Category Detection with PAC Guarantees\\u201d also yields guarantees for OOD detection but of a completely different kind than discussed in our paper. They provide guarantees on the generalization of the employed empirical performance measures, while we certify low confidence of a classifier on whole volume. We are thankful for bringing up\\nHendrycks, Gimpel \\u201cEarly methods for detecting adversarial samples\\u201d as they have employed a similar metric to the one we use in our Gaussian mixture model (which we now cite there). \\n\\nAs requested we have now included CIFAR100 in our evaluation (please see our enlarged result tables in the paper). The results are similar to the other datasets, in the sense that Outlier exposure and our CCU are the best methods for OOD detection and CCU again succeeds in the worst case scenario of an attack on an OOD system with adversarial noise. All other methods except ACET fail with AUC values below 20%. \\n\\nPlease note that due to a change in Pytorch we had to retrain all models in order to run additional analysis. Thus there are slight changes in the reported numbers for all methods but there are no changes in the overall picture.\"}",
"{\"title\": \"Reply to Review #2\", \"comment\": \"Thanks a lot for the encouraging comments and for the interesting questions. We address all mentioned concerns:\\n\\n``What is the novelty of the method at a high level?\\u2019\\u2019\\n\\nWe are not aware that any other paper on OOD detection has written it in a Bayesian framework as we do modeling explicitly p(x|i) and p(x|o) and thus having an expression of p(y|x) in terms of p(y|x,i) and p(y|x,o). This then allows us to derive our CCU optimization framework as maximum likelihood estimation in this particular model. Most other approaches are more ad-hoc by enforcing e.g. low confidence on the out-distribution data. However, the main novelty compared to any other OOD method is that our approach allows to certify whole volumes to have low confidence and thus being able to certify that this volume will be identified by the classifier as ``out-distribution\\u201d. This is done by using a density estimator for p(x|i) and p(x|o) which one can control (this is the reason for the ``simple\\u2019\\u2019 Gaussian mixture models for p(x|i) and p(x|o)) . Moreover, we bring up the challenging worst case evaluation of adversarial noise which we think should become standard in OOD detection. For the use in safety-critical systems the average case in terms of empirical evaluation on ``out-of-distribution\\u2019\\u2019 datasets is in our opinion not sufficient. Thus, we believe that our provable guarantees for the certification of low confidence over whole volumes and provably low confidence far away from the training data constitute important steps towards a ``general certification\\u2019\\u2019 of neural networks.\\nWe have changed the introduction to highlight our contributions more clearly.\\n\\n\\n``What is the performance of ACET when trained on adversarial uniform noise instead of adversarial tiny image dataset\\u201d\\n\\nWe have added this comparison in Appendix F. In fact we did in the beginning train ACET using adversarial noise as in the original paper but we found out that the performance of ACET on OOD detection improves when training on adversarial tiny images. Moreover, we wanted to have all methods to be trained using the same information on the out-distribution so that one has a fair comparison. In short the results in Appendix F show that OOD detection of the ACET trained on\\nadversarial tiny images is better for OOD detection but even for the worst case evaluation on adversarial noise. The reason is that the attack model is an l_2-ball with respect to a metric which is adapted to ``natural images\\u2019\\u2019 (we use the covariance of the training data) and not the l_infty attack\\nwhich ACET does during training. As our ACET model is trained using tiny images it is more adapted to these natural images.\\n\\n\\n``How do you ensure that the radius is not too large such that it has images that the model should actually be confident on (close to in distribution samples)?\\u2019\\u2019\\n\\nThanks for this question \\u2013 we provide a detailed analysis in Appendix E. Indeed for all our 200 certified balls for each dataset we checked if they contain images from the training or test set and this does not happen on any of the datasets. Thus we think that our certification procedure works very reliably.\\n\\n\\n``A flip-side is how sensitive the performance is to the score chosen for choosing the radius for computing the valid set of images for the adversary? It\\u2019s possible that the 11% threshold is too high? 
What\\u2019s the minimum confidence of CCU on the test images?\\u2019\\u2019\\nWe also have analyzed this in Appendix E. For MNIST, FMNIST and SVHN no test images resp. a tiny fraction (less than 0.1%) gets less than 11% confidence. For Cifar100 1.3% of the test images get less than 1.1% confidence, and for Cifar10 5.3% less than 11%. \\nHowever, one has to note that the certified bound of 11% confidence is an upper bound which is not tight in practical settings. Thus the actual maximal confidence over all the 200 certified balls of CCU which we found using PGD was never larger than the minimal confidence achieved on the test set (this is why CCU always has 100% AUC). However, in order to address the possibility that this could be an artifact of PGD not finding the global maximum on our certified balls, now we also report a lower bound on the AUC assuming that the maximal confidence of 11% (resp. 1.1%) would be attained by CCU in all the 200 certified balls. Note that even this theoretical lower bound still outperforms the other models\\u2019 empirical AUC.\"}",
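For readers following the Bayesian framing in the replies above, here is a minimal sketch of how the combined confidence could be computed from the two GMM log-densities. The function name, the equal prior, and the uniform choice for p(y|x,o) are our illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def ccu_confidence(logits, log_px_in, log_px_out, prior_in=0.5, n_classes=10):
    """Sketch of the total-probability combination p(y|x) = p(y|x,i)p(i|x) + p(y|x,o)p(o|x).

    logits:      classifier logits for one input x (softmax gives p(y|x,i))
    log_px_in:   log density of x under the in-distribution GMM, log p(x|i)
    log_px_out:  log density of x under the out-distribution GMM, log p(x|o)
    """
    p_y_given_in = np.exp(logits - logits.max())
    p_y_given_in /= p_y_given_in.sum()
    # p(i|x) via Bayes' rule, computed in log space for numerical stability
    log_in = log_px_in + np.log(prior_in)
    log_out = log_px_out + np.log(1.0 - prior_in)
    p_in = 1.0 / (1.0 + np.exp(log_out - log_in))
    # assumption: a uniform predictive p(y|x,o) = 1/K far from the data
    p_y = p_y_given_in * p_in + (1.0 - p_in) / n_classes
    return p_y.max()  # maximum confidence, the score thresholded for OOD detection
```

Because p(o|x) tends to 1 far from the training data, the returned confidence decays towards 1/K there, which is what the certification argument exploits.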
"{\"title\": \"Reply to Review #1\", \"comment\": \"We thank the reviewer for suggesting the additional papers ``Evidential Deep Learning\\u2019\\u2019 (EDL) and ``Deep Ensembles\\u2019\\u2019 (DE) for comparison. It is fair to say that ``Deep Ensembles\\u2019\\u2019 has not been proposed for OOD detection but for uncertainty evaluation similar to MC-Dropout. We have included EDL and DE in our evaluation (please see Table 1 and 2 in the main paper). As pointed out by the reviewer, both techniques outperform MC-Dropout. DE performs better than EDL regarding OOD detection but both methods are not competitive with outlier exposure and our CCU \\u2013 this holds for all datasets but with particularly large differences on SVHN, CIFAR10 and CIFAR100 . EDL and DE fail regarding the worst case evaluation on adversarial noise. For EDL the AUC is zero on all datasets, meaning that for all 200 certified balls for each dataset the maximal confidence achieved in the ball around uniform noise is higher than the confidence of all test samples. Likewise DE has an AUC of below 8% on all datasets in this worst-case setting.\\n\\n``I would have liked to see precision and recall of the OOD detection task in addition to AUC\\u2019\\u2019\\n\\nIn Appendix G we have added a table with the AUPR, the area under the precision-recall curve,\\nas discussed in Hendrycks, Gimpel ``A baseline for detecting misclassified and out-of-distribution examples in neural networks\\u2019\\u2019, ICLR 2017. We did not add individual ROC curves or precision-recall curves as this would result in 28 plots each with 10 curves (note that we compare 10 different methods) which becomes quite cluttered. However, if the reviewers and/or the area chair indicate that they would like to see these plots, we are happy to include them in the final version.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper uses a generative model to assign anomaly scores. By its construction, it can provide provable performance guarantees. Experiments do not make unreasonable assumptions such as the ability to peak at the test data, unlike much previous work.\\nMy primary concern is that they should show performance on CIFAR-100 not just CIFAR-10, and I certainly hope these experiments will be included during the rebuttal. Overall experimentation is thorough and competently executed, and the proposed technique is sufficiently novel.\", \"small_comments\": \"> adv OOD detection with uniform ball perturbed\\nThis is a good way of formulating adversarial OOD detection.\\n\\nA possibly related work is _Early Methods for Detecting Adversarial Images_ (2016) since it uses covariance matrix information for detecting adversarial examples. This paper should cite _Open Category Detection with PAC Guarantees_ by Liu et al. (ICML 2018) since this also involved provable guarantees for OOD detection.\", \"update\": \"my concerns are addressed but my sentiment is still that this is a 6.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper provides a method to train neural networks with guarantees on outputting probabilities/scores close to uniform on inputs that are out of domain.\\nPrecisely, on some point x_0, we can obtain the largest radius such that on inputs in the ball around x_0 the classifier outputs some predetermined maximum score. \\nThe paper combines a simple generative model (mixture of Gaussian) for modeling in-distribution vs. out of distribution. The simplicity allows to obtain guarantees on the probability of an input being considered out of class. The model output p(y | x) critically uses the probability of being out of distribution. Hence guarantees on being detected as out of distribution translate to guarantees on p(y|x) being close to uniform.\", \"decision\": \"I vote for accepting the paper. The paper has some clear strengths. However, I have some concerns regarding experiments and comparison to previous work. It would be great if the authors could clarify.\", \"strengths\": \"The paper\\u2019s methodolgy is clearly written. The modeling is clear and sound. Overall, this idea is a promising approach to obtain networks that are provably under-confident far from training examples. The training cost for this approach is comparable to standard training, and the approach seems scalable and broadly applicable in general.\\n\\nThe experimental evaluation is also clearly described. Worst-case evaluation of OOD (out of domain) performance seems novel and the gains not this objective using the proposed approach of this paper are interesting and promising.\", \"concerns\": \"While the paper explains the proposed method well, the description of previous work and relation to previous work is inadequate. After spending some hours reading the cited paper, I am still confused about what\\u2019s the novelty of this work at a high level.\\nThis work uses p(x|i) and p(x|o) in the computation of p(y|x) during inference, where i and o are in distribution and out of distribution respectively. This is crucial to obtain guarantees one performance. However, how does this compare to other previous work that also uses some kind of generative modeling to model in/out distribution? A bunch of papers are cited in the introduction as doing this, but the relationship to the proposed work is unclear.\\n\\n\\u2014 I have a major experimental concern: When comparing against ACET, the baseline of performing adversarial style training on random noise inputs seems more appropriate since it\\u2019s closer to the evaluation metric (which picks random noise as out of domain and not 80M Tiny Images). What does the performance of this ACET baseline look like?\\n\\n\\u2014 During evaluation, how do you ensure that the radius is not too large such that it has images that the model should actually be confident on (close to in distribution samples). A flip-side is how sensitive the performance is to the score chosen for choosing the radius for computing the valid set of images for the adversary? It\\u2019s possible that the 11% threshold is too high? What\\u2019s the minimum confidence of CCU on the test images?\", \"minor_comments_on_writing\": \"\\u2014K_i, K_0 are not defined when writing the expression for GMM\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors present a novel approach for OOD detection; in particular their approach comes with worst-case guarantees without compromising on performance.\\n\\nThe manuscript is clearly written and I only have some concerns regarding the evaluation. \\nFirst, while the authors include with MCD an uncertainty-aware training approach, I miss more state-of-the-art methods with substantially better OOD performance, including Evidential Deep Learning (Sensoy et al, NeurIPS 2018) and Deep Ensembles. In particular a comparison to EDL would be interesting, since a similar entropy-encouraging term in the loss function is used during training, resulting is maximum entropy for OOD samples. \\nSecond, I would have liked to see precision and recall of the OOD detection task in addition to AUC, allowing a more meaningful/complete comparison between the approaches.\"}"
]
} |
Sklf1yrYDr | BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning | [
"Yeming Wen",
"Dustin Tran",
"Jimmy Ba"
] |
Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble’s cost for both training and testing increases linearly with the number of networks, which quickly becomes untenable.
In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles. BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member. Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch. Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and out-of-distribution tasks, BatchEnsemble yields accuracy and uncertainties competitive with typical ensembles; the speedup at test time is 3X and the memory reduction is 3X at an ensemble of size 4. We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having much lower computational and memory costs. We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet, which involves 100 sequential learning tasks. | [
"deep learning",
"ensembles"
] | Accept (Poster) | https://openreview.net/pdf?id=Sklf1yrYDr | https://openreview.net/forum?id=Sklf1yrYDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"wgiXjKhqhGf",
"8TaTqJMR1X",
"csloubzHZ",
"qYM_YwEhon",
"r1eWgJYijB",
"r1giCa_isr",
"BygEHAlcoS",
"Syl-w72Ssr",
"HJxBdw3msH",
"Syg4ew37iH",
"S1eCXLhXor",
"B1eTCr27jS",
"HkeAWH3XoH",
"r1xd6xjpFH",
"SJxoU252KH",
"rkljAl1hKS"
],
"note_type": [
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1585708841038,
1581399831812,
1581390469999,
1576798723851,
1573781224550,
1573780946935,
1573682747881,
1573401433033,
1573271405087,
1573271276009,
1573271077944,
1573270996537,
1573270789961,
1571823807954,
1571757139368,
1571709139101
],
"note_signatures": [
[
"~Ke_Alexander_Wang1"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"~Ziyu_Wang2"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1459/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1459/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1459/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Similar to Cheung et al., 2019\", \"comment\": \"Hi,\\n\\nThis's method is very similar to Cheung et al. 2019's paper: https://papers.nips.cc/paper/9269-superposition-of-many-models-into-one.pdf which also proposes sharing weights among the ensemble in a similar way and demonstrated learning up to 50 different sequential learning tasks. It might be good to add it as a reference.\"}",
"{\"title\": \"Thanks for pointing out this related work!\", \"comment\": \"Hi,\\n\\nThank you for leaving a comment on this! We will add a reference in the related work section for completeness!\\n\\nBest,\\nYeming\"}",
"{\"title\": \"Related work on scalable ensembles for uncertainty quantification\", \"comment\": \"Dear authors,\\n\\nCongratulations on having your paper accepted! It's very nice to see uncertainty-aware learning methods improving performance on modern DL tasks, and the application on continual learning is particularly interesting.\\n\\nIn the following work we have also studied ensemble-like method for uncertainty estimation, with a similar focus of maintaining the training scheme for existing network architectures. I hope you will find the reference helpful.\\n\\nZ. Wang, T. Ren, J. Zhu, and B. Zhang. Function Space Particle Optimization for Bayesian Neural Networks. In ICLR, 2019. https://arxiv.org/abs/1902.09754\\n\\nBest,\\nZiyu Wang\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposed an improved ensemble method called BatchEnsemble, where the weight matrix is decomposed as the element-wise product of a shared weigth metrix and a rank-one matrix for each member. The effectiveness of the proposed methods has been verified by experiments on a list of various tasks including image classification, machine translation, lifelong learning and uncertainty modeling. The idea is simple and easy to follow. Although some reviewers thought it lacks of in-deep analysis, I would like to see it being accepted so the community can benefit from it.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"More on the two groups of BN interpretation.\", \"comment\": \"Besides the argument in the initial response, we want to add in the lifelong learning experiments, we don't need to vectorize the BatchEnsemble computation because only one set of fast weight is involved in each task. Thus, our framework can naturally extend to higher rank perturbation which adds more expressiveness to our model although we found it is not necessary for the lifelong learning task in this paper. Such perturbation cannot be interpreted as two groups of BN (one of them has no BN statistics).\"}",
"{\"title\": \"Summary of revision, including new experiments\", \"comment\": \"The revision includes:\\n1. An additional experiments about neural network calibration on CIFAR-10 corruption dataset in Appendix D. Figure 7 shows that BatchEnsemble achieves the best trade-off among accuracy, calibration, computational and memory costs. Moreover, it shows that BatchEnsemble is orthogonal to existing ensemble methods such as Dropout ensemble. Combining dropout ensemble and BatchEnsemble leads to better calibration (as pointed out by reviewer #3). It is even competitive to naive ensemble while has 4x less memory cost.\\n\\n2. We added the result of MC-drop on Transformer in Table 1. Transformer single model already heavily uses dropout as regularization so dropout ensemble doesn't lead to better perplexity during testing. We didn't have time to finish the naive ensemble experiments but we don't think this degrades the quality because naive ensemble is supposed to be an upper bound. We have naive ensemble in every other experiments if it is feasible.\\n\\n3. As reviewer # 1 asked, we compared BatchEnsemble to naive ensemble of small models so that they share the same memory budget in Table 5, Appendix F.\\n\\n4. We fixed a number of typos.\"}",
"{\"title\": \"Combining BatchEnsemble with dropout leads to better calibration.\", \"comment\": \"We would like to thank the reviewer for the positive feedback. We add one more experiment on CIFAR-10 corruption dataset in the appendix D in revision. Figure 7 shows that BatchEnsemble achieves the best trade-off among memory, testing cost, accuracy and calibration. Moreover, it shows that combining BatchEnsemble and dropout leads to even better performance. It is even competitive to naive ensemble while maintaining single model memory cost. It is an evidence that BatchEnsemble is orthogonal to current ensemble method such as dropout ensemble. It also answers the question that how the method can be used as a basis for future work.\", \"about_the_two_main_concerns\": \"(i) Split-CIFAR100 is considered as a challenging lifelong learning task in previous published papers. We also extend our method to Split-ImageNet which we think is even harder because of the large number of sequential tasks. Our method is an inspiration for applying memory efficient ensemble in lifelong learning task. We think it is fair to leave [(i) developing harder lifelong learning benchmark (ii) extending our framework to it] to future work.\\n(ii) BatchEnsemble achieves better performance than single model and dropout ensemble over all experiments we showed (except the machine translation one where we are still working on). Given the tidy memory overhead BatchEnsemble introduces, we don't think it is fair to compare BatchEnsemble to naive ensemble which is supposed to be an upper bound for all memory efficient methods.\"}",
"{\"title\": \"Satisfied with responses - keeping score\", \"comment\": \"Thank you for your replies to my questions. I'm satisfied overall and keep my weak accept score - my main concerns are still (i) the limitations of the method for lifelong learning when extended to more complex/unrelated tasks and (ii) the mixed results vs the baselines in the experiments conducted.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your careful and insightful feedback.\\n\\n-> Q: I have a concern regarding ensembling. Do I understand correctly that in Figure 1 the method achieves almost constant test time cost only in the case of one device parallelization? If yes, then Figure 1 is slightly misleading and the description of this figure should be improved.\\n\\nFigure 1 is supposed to help understanding BatchEnsemble in the matrix element-wise multiplication view. For efficient computation, we use the vectorized computation. We will make this clear in the revision.\\n\\n-> Q: In the classification section the authors compare their approach only with MC-dropout. I would recommend adding other ensembling methods that have small memory footprint, and can be better than MC-dropout. The same is true for machine translation section. \\n\\nTreeNet is an ensemble method that is more memory expensive than MC-dropout and BatchEnsemble. Which layers should be shared among ensemble members in deep network such as ResNet-32 is still unknown and need extra effort to discover. We will include this work in discussion in the revision and run experiments that compares to it if time permitted.\\n\\n-> Q: The authors emphasize that their approach is faster than consequently training independent models. However, since these models are independent, it is possible to train them in parallel on multiple devices. The restriction to one device during training seems in general a bit artificial.\\n\\nWe don\\u2019t restrict to one device (in fact, experiments (4.1, 4.2) use multiple devices). Rather, we show that BatchEnsemble can exploit parallel training using both axes (within a device as well as across devices) and this depends on the user. In the extreme setting benefiting naive ensembles where all parallelism is done across devices, note that the memory cost for parameters still remains smaller for BatchEnsemble (assuming a parameter server).\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your careful and insightful feedback. We first answer some questions without extra experiments.\\n\\n-> Q: Issues with BatchEnsemble as a lifelong learning method. When applied to lifelong learning, the slow weights are only tuned for the first task - as acknowledged by the authors, this means that forward transfer is only possible from the 1st task to all subsequent tasks and, more concerningly, it could severely limit the expressiveness of the ensemble for subsequent tasks that can only make a rank-1 adjustment to each layer. On the split-cifar and split-imagenet tasks this interestingly does not seem to be an issue, but one could imagine that it could be for tasks that differ more.\\n\\nWe have a potential solution to make forward transfer possible as mentioned in section 3.3. On the benchmark we consider (Split-CIFAR100 is already considered a hard lifelong learning dataset), BatchEnsemble shows promising performance. We\\u2019re not aware of existing challenging lifelong learning datasets that necessitate a more complicated solution. So we plan to implement the potential solution in harder lifelong learning tasks as future work.\\n\\n-> Q: Was the task split and order randomised for each run in Figure 3a and 3b? Would be interesting to know if the choice of first task matter for performance. Also, did the authors try not training the slow weights at all for the lifelong learning experiments? This would show how much the transfer from the first task helps the subsequent ones.\\n\\nIt is randomised and the choice of first task affects the performance. The experiments are running with 5 random seeds and the confidence interval is plotted in Figure 3. We tried not to train the slow weights (just train the fast weights) for the first task. The model severely underfits so learning a reasonable slow weight is crucial for better performance in lifelong learning.\\n\\n-> Q: In Figure 3b, it\\u2019s strange that EWC has a similar/ slightly higher forgetting than a vanilla neural network - do the authors have an explanation for this? Was the regularisation coefficient tuned for EWC?\", \"answer\": \"A typo in the forgetting measure (it should be 0.12 for PNN). We will fix it in the revision.\\n\\n-> Q: The proposed solution for enabling transfer beyond the 1st task is to enable lateral weights from features from previous tasks, as in progressive neural networks, but this would undermine the parameter efficiency of the model.\\n\\nWe agree that the naive implementation of lateral weights would undermine the parameter efficiency although it is still more efficient than PNN. Assuming some similarity among tasks, maybe we just need sparse lateral connection to previous tasks and maintain parameter efficiency. It is an interesting direction for future work.\\n\\n-> Q: Machine translation experiments: BatchEnsemble on the attention layers of a transformer speeds up training in machine translation, but has little effect on final performance of the model versus a single transformer. Were any measures taken to equalise the number of parameters in the single transformer versus the ensemble method? Was a naive ensemble trained on the machine translation tasks for comparison?\\n\\n\\nWe showed some improvement on perplexity over single model. However, with multiple ensemble members, it has the potential to make calibrated predictions in machine translation. 
There is no standard uncertainty benchmark in machine translation so we didn\\u2019t include it in the experiment section. We slightly increased the number of units in the FC (given only 2% extra parameters budget) which has no gain over single model. We are running naive ensemble experiments now and plan to add it in the revision.\\n\\n-> Q: Image classification experiments: It is hard to fairly compare the BatchEnsemble performance to the single model performance here given the 50% extra training iterations, but its encouraging that BatchEnsemble outperforms MC-dropout and comes close to the performance of a naive ensemble.\\n\\nIncreasing the training iterations for single model has no further improvement. If we increase the batch size (which takes more advantage of modern hardware), we don\\u2019t increase the number of training iterations.\\n\\n-> Q: How can the method be used as a basis for future work? It would be good to see some discussion of whether and how BatchEnsemble could be combined with other neural network ensemble methods.\\n\\nWe believe BatchEnsemble is orthogonal to other ensemble methods such as SWA, Snapshot ensemble, dropout-ensemble. One can potentially combine these methods with BatchEnsemble. We will add more discussion in the revision. We also have experiments results that combining BatchEnsemble and Dropout leads to better uncertainty modelling, which will be added in the revision.\"}",
"{\"title\": \"Response to reviewer #1 (3)\", \"comment\": \"-> Q: Memory and training/test computational costs need to be reported for each experiment. However, the currently reported results are incomplete here and there.\\n\\nMemory and training/test computational costs (relative to single model) are consistent across experiments. We showed the cost in Figure 1. A comparison to other continual learning methods are given in Table 4 in the Appendix. We will state the costs in a more clear way in each experiment in the revision.\\n\\n-> Q: Comparing to the currently limited number of baselines on the incomplete evaluation metrics, the proposed method does not show significant improvements, for example, the results in Figure 4, Table 1 and Table 2.\\n\\nIt is not fair to compare BatchEnsemble to naive ensemble given the memory cost. We show a significant improvement for low memory cost tradeoff (i.e., significant improvement over single model and dropout-ensemble).\\n\\n-> Q: The proposed method requires the models for different tasks should have exactly the same architecture. This could be a strong limitation in many scenarios. For example, when different tasks have significantly different numbers of classes.\\n\\nBatchEnsemble can ensemble of network with different length. It is true that it has some limitations and is a potential for future work. Note that other memory efficient methods such as dropout-ensemble have the same limitation; and existing SOTA for lifelong learning like GEM, A-GEM also have this restriction.\"}",
"{\"title\": \"Response to reviewer #1 (2)\", \"comment\": \"-> Q: How were the baselines for each experiment selected? How to determine the specific setting in each experiment (any reason behind choosing the parameters in the settings)?\\n\\nHow baseline selected in lifelong learning experiment is explained in the second and fourth paragraph in section 4.1. The specific setting in the experiments followed exactly as [1,2], except the single-epoch setting in their papers. For other experiments, we compare to single model, naive ensemble and dropout-ensemble except the machine translation experiment. The experiments setting are commonly used in other papers.\\n\\n-> Q: In the life-long learning settings, the shared weights W is only trained on the first task and then keeps fixed: this can leads to both large variance and bias. Why does it simply work well without causing any serious problems? The rank-one extension of a shared model W enforces a very strong regularization to the model for each task. Will the method work promisingly when the tasks are more different from each other or harder to solve? For example, what if we increase the classes in each task? Is the rank-one extension still flexible and expressive enough to handle this situation?\\n\\nThe rank-1 perturbation per layer provides satisfying expressiveness because we chose deep neural network (at least 32 layers) in our experiments. The gap between BatchEnsemble and PNN increases to 3% (70.1 v.s 73.0) if we use AlexNet. If we increase the number of classes in each task, all methods would have decreasing accuracy. So in Split-CIFAR100, we chose the set-up the same as (GEM, AGEM paper). In Split-ImageNet, we chose T=100 to demonstrate that BatchEnsemble is capable of learning a large number of sequential tasks.\\nWe plan to combine the improvement we mentioned in section 3.3 with harder lifelong learning tasks (such as learning sequential skills in RL) in future work. Note that Split-CIFAR100 is already a challenging domain for which state-of-the-art methods evaluate on [1,2,3].\\n\\n[1]: Lopez-Paz, D., & Ranzato, M. (2017). Gradient Episodic Memory for Continual Learning. NIPS.\\n[2]: Chaudhry, A., Ranzato, M., Rohrbach, M., & Elhoseiny, M. (2018). Efficient Lifelong Learning with A-GEM. ArXiv, abs/1812.00420.\\n[3]: Xu, J., & Zhu, Z. (2018). Reinforced Continual Learning. NeurIPS.\\n\\n-> Q: Mathematically, comparing to single model Wx, the proposed ensemble method equals to applying a dimension-wise scaling to the input x and a dimension-wise scaling to the output Wx, and the scaling factors vary across different tasks. Hence, the proposed structure is exactly the same as fine-tuning two groups of batch normalization scaling factors before and after applying transformation W. It does not make much sense in the experiments that the performance of BN-Tuned in Figure 3a is much worse than the proposed method since they share exactly the same structure and math (note the memory and computational costs are also the same). The paper does not give an explanation about this. Moreover, the baseline BN-Tuned is only compared on only one of those datasets in the paper. It should be one of the most important baselines and needs to be compared in all experiments.\\n\\nThat\\u2019s a very interesting connection. Note that while the connection is true (i.e., apply BN at every input and pre-activation, but remove shift parameter and batch-statistic normalization), the intuition behind why that works is not really clear from the BN perspective. 
Our work interprets feature-wise scaling of inputs and preactivations as a rank-1 perturbation of the weights, which is meant to provide expressivity. It\\u2019s uncommon to think of BN\\u2019s feature scaling as still BN if the computation doesn\\u2019t use batch statistics. Additionally, our method can perturb each dimension of the 4-d weights in convolutional layers while the BN interpretation can\\u2019t.\\n\\n-> Q: On each benchmark dataset (except the last one), only 1-2 baselines are compared and most baselines are not state-of-the-art methods or not methods specifically designed for the problem (e.g., many are dropout and its variants). This makes the comparisons not convincing, especially considering that the experimental settings are determined by the authors and might be chosen for the best performance of the proposed method.\\n\\nComparing to dropout-ensemble because it is the state-of-the-art memory efficient ensemble as far as we know. The experimental setting is chosen to be the same as previous published papers. \\n\\n-> Q: At least two baselines should be included in all experiments: 1) single model with the equal number of model parameters, and 2) naive ensemble not sharing parameters across member models. However, each experiment only includes one or even none of these two baselines.\\n\\nThe only experiment that doesn\\u2019t have naive ensemble is machine translation. It is an upper bound so we don\\u2019t think missing it in one of many experiments should affect the quality of the paper. We plan to add it in the revision.\"}",
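As a minimal illustration of the weight-perturbation view discussed above, here is a hedged sketch of a rank-1-perturbed convolution. The channel-wise factorization shown is the simplest member of this family, and the variable names are illustrative assumptions; the point of the Hadamard view is that it also admits factors over the kernel dimensions, which pure input/output scaling cannot express.

```python
import torch
import torch.nn.functional as F

def rank1_conv(x, W, alpha, gamma):
    """One ensemble member's convolution as a perturbation of the weight.

    x:      (B, C_in, H, W) input batch
    W:      (C_out, C_in, k, k) shared slow convolution weight, k odd
    alpha:  (C_in,) and gamma: (C_out,) this member's fast weights; their
            broadcast outer product gives a rank-1 factor over the channels
    """
    F_i = gamma[:, None, None, None] * alpha[None, :, None, None]  # (C_out, C_in, 1, 1)
    # the factor broadcasts over the k x k kernel; replacing the trailing
    # singleton dims with learned kernel-axis vectors would perturb those too
    return F.conv2d(x, W * F_i, padding=W.shape[-1] // 2)
```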
"{\"title\": \"Response to reviewer #1 (1)\", \"comment\": \"Thank you for your careful and insightful feedback. We first answer some questions without extra experiments.\\n\\n-> Q: Why can the simple method achieve a more compelling trade-off between accuracy and efficiency/memory costs comparing to a large single model or a naive ensemble of small models? Any mathematical or information-theoretical explanation behind that?\\n\\nCombining rank-1 perturbation per layer with large number of layers in deep networks leads to diverse ensemble member as evidenced by Figure 8 in the Appendix. In ensemble theory, more diversity leads to better accuracy by averaging the output. Diversity is also the key to better uncertainty prediction. Thus, the uncertainty modelling experiments are another evidence that rank-1 perturbation provides satisfying diversity. This explains why rank-1 perturbation achieves an improved accuracy.\\n\\nRegarding a theoretical explanation, our hypothesis is that a low-dimensional subspace of the parameters (particularly restricted to the first rank of each weight) provides sufficient expressivity and trainability to return diverse solutions. [1] shows that we can construct the subspace by taking the first principal components of SGD trajectory, which leads to diverse sets of high performing models ($w=w_0 + Pz$ where $w_0$ is a solution point found by SGD, z is the vector from the constructed subspace, P is a linear transformation). The slow weight in our paper can be thought of w_0 and the fast weights are vectors in the subspace.\", \"there_are_other_related_works_adding_evidence_to_this_hypothesis\": \"for predictive accuracy, the intrinsic dimensionality of networks has been signalled to be very low [2]; and for predictive uncertainty, a concurrent ICLR submission (https://openreview.net/forum?id=BkxREeHKPS) shows that tying uncertainty parameters up to a certain low rank may be sufficient.\\n\\n[1]: Izmailov, P., Maddox, W.J., Kirichenko, P., Garipov, T., Vetrov, D.P., & Wilson, A.G. (2019). Subspace Inference for Bayesian Deep Learning. ArXiv, abs/1907.07504.\\n[2]: Li, C., Farkhoor, H., Liu, R., & Yosinski, J. (2018). Measuring the Intrinsic Dimension of Objective Landscapes. ArXiv, abs/1804.08838.\\n\\nOverall, there are two related questions here: \\n1. Does rank-1 perturbation provide diversity? \\n2. How much diversity does it provide? \\nIn this paper, we focus on answering the first question. We leave formally understanding how and why BatchEnsemble provides a compelling accuracy-efficiency tradeoff to future work. Understanding diversity of naive ensembles is already an open research direction (the work was published in 2016; one work focused solely on the analysis is a concurrent ICLR submission, https://openreview.net/forum?id=r1xZAkrFPr).\\u201d\\n\\nBatcheEnsemble v.s. A large single model: In the experiment section, we compared BatchEnsemble to single model with roughly the same number of parameters (except the machine translation part). More specifically, BatchEnsemble occurs 2% parameters overhead. With such parameters budget, we can only scale the number of filters in single model by 1.07, which results to no improvement. BatchEnsemble can also be used as uncertainty modelling and lifelong learning which the single model is not capable of.\\n\\nBatchEnsemble v.s. Naive ensemble of small models: If the ensemble size is 4 then the fair comparison is (naive ensemble of 4 ResNet-14 vs. BatchEnsemble of ResNet-32). 
We think this is a good suggestion and will add the comparison in the revision. But BatchEnsemble still has better testing time cost. And if ensemble size is large such as Split-ImageNet, BatchEnsemble is still a better choice than naive ensemble of small models.\\n\\n-> Q: It is easy to understand that the ensemble defined here can improve efficiency and reduce memory cost. But as an alternative to the naive ensemble, we also expect the performance to not suffer from severe drawbacks. How to control efficiency-performance trade-off in the proposed method?\\n\\nWe propose BatchEnsemble as an alternative efficient ensemble method. Rather than comparing to naive ensemble (the memory consumption is 2 v.s. 400 with ensemble size 4), a more fair comparison is comparing to dropout-ensemble which we consider in 4.3 and 4.4 and Appendix-D. \\n\\nIn general, naive ensembles, BatchEnsemble, and dropout ensembles are different points on the tradeoff curve of efficiency-to-performance. Ensembles are not concerned with efficiency (Figure 1), but achieve highest performance. If one prefers high efficiency and wants to find the method with highest performance, then we show over many tasks that BatchEnsemble is a better alternative to dropout (and single models). BatchEnsemble can even perform as well as naive ensembles on some tasks, but they are not meant to be an alternative if efficiency is not a concern.\"}",
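A quick back-of-envelope sketch of the parameter accounting behind the overhead and memory numbers quoted above; the layer size in the usage comment is an arbitrary illustrative choice, not taken from the paper.

```python
def param_counts(m, n, M):
    """Parameters for one (m x n) layer with an ensemble of size M.

    A naive ensemble stores M full weight matrices; BatchEnsemble stores one
    shared matrix plus M (r, s) fast-weight vector pairs.
    """
    naive = M * m * n
    batch_ensemble = m * n + M * (m + n)
    return naive, batch_ensemble

# e.g. m = n = 512, M = 4: naive = 1,048,576 vs. BatchEnsemble = 266,240
# (~3.9x smaller); the fast weights add only ~1.6% over the shared matrix,
# of the same order as the ~2% overhead quoted above
```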
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a new efficient ensembling method that has smaller memory footprint than naive ensembling and allows a simple parallelization on one device. The authors\\u2019 idea is based on sharing weights between individual ensembling models. The weights of each model can be represented as element-wise product of two matrices: shared one and matrix with rank 1 that can be efficiently stored.\\n\\nThe idea is quite interesting despite its simplicity. The experimental part is quite broad. I would like to highlight the lifelong learning as the strongest experimental result achieved by the authors. Despite the significant improvement on top of the baselines, this approach has one drawback described by the authors themselves. This method is difficult to generalize for the case of very diverse tasks despite its scalability. Nevertheless, I would not consider it as a large problem.\\n\\nI have a concern regarding ensembling. Do I understand correctly that in Figure 1 the method achieves almost constant test time cost only in the case of one device parallelization? If yes, then Figure 1 is slightly misleading and the description of this figure should be improved.\\nIn the classification section the authors compare their approach only with MC-dropout. I would recommend adding other ensembling methods that have small memory footprint: e.g. [1], and can be better than MC-dropout. The same is true for machine translation section. \\n\\nThe authors emphasize that their approach is faster than consequently training independent models. However, since these models are independent, it is possible to train them in parallel on multiple devices. The restriction to one device during training seems in general a bit artificial.\\n\\n\\n[1] Stefan Lee, Senthil Purushwalkam, Michael Cogswell, David Crandall, and Dhruv Batra. Whym heads are better than one: Training a diverse ensemble of deep networks.arXiv preprintarXiv:1511.06314, 2015b\\n\\n\\nOverall, it is an interesting paper, that has several drawbacks.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper presents an ensemble method for neural networks, named BatchEnsemble, that aims to provide the benefits of improved accuracy and predictive uncertainty of traditional ensembles but with a significantly lower computational cost and memory cost. The method works by maintaining a shared \\u201cslow\\u201d weight matrix per layer, along with an ensemble of rank-1 \\u201cfast\\u201d weight matrices that are combined individually with the slow matrix via a Hadamard product in order to generate the network ensemble. The fast matrices can be stored as a pair of vectors, incurring a much smaller memory cost than a full rank matrix, and the prediction of an ensemble member can be vectorized such that the forward pass through the whole ensemble can be parallelized within a single GPU, yielding a computational speedup over traditional ensembles. The method is evaluated across a host of experimental settings, including image classification, machine translation, lifelong learning and uncertainty modelling.\", \"Overall, I recommend this paper to be accepted because:\", \"(i) the method proposed is simple to understand and implement,\", \"(ii) it yields clear computation and memory benefits over a traditional ensemble,\", \"(iii) the method is motivated by a good literature review, putting the approach and experiments conducted in context,\", \"(iv) while in terms of performance the experimental results are mixed, many different settings are evaluated, they are conducted fairly and they are transparently described, with the limitations are clearly acknowledged for the most part.\", \"Specific comments / questions\", \"Issues with BatchEnsemble as a lifelong learning method. When applied to lifelong learning, the slow weights are only tuned for the first task - as acknowledged by the authors, this means that forward transfer is only possible from the 1st task to all subsequent tasks and, more concerningly, it could severely limit the expressiveness of the ensemble for subsequent tasks that can only make a rank-1 adjustment to each layer. On the split-cifar and split-imagenet tasks this interestingly does not seem to be an issue, but one could imagine that it could be for tasks that differ more.\", \"Was the task split and order randomised for each run in Figure 3a and 3b? Would be interesting to know if the choice of first task matter for performance. Also, did the authors try not training the slow weights at all for the lifelong learning experiments? This would show how much the transfer from the first task helps the subsequent ones.\", \"In Figure 3b, it\\u2019s strange that EWC has a similar/ slightly higher forgetting than a vanilla neural network - do the authors have an explanation for this? 
Was the regularisation coefficient tuned for EWC?\", \"The proposed solution for enabling transfer beyond the 1st task is to enable lateral weights from features from previous tasks, as in progressive neural networks, but this would undermine the parameter efficiency of the model.\", \"Machine translation experiments.\", \"BatchEnsemble on the attention layers of a transformer speeds up training in machine translation, but has little effect on final performance of the model versus a single transformer.\", \"Were any measures taken to equalise the number of parameters in the single transformer versus the ensemble method?\", \"Was a naive ensemble trained on the machine translation tasks for comparison?\", \"Image classification experiments.\", \"It is hard to fairly compare the BatchEnsemble performance to the single model performance here given the 50% extra training iterations, but its encouraging that BatchEnsemble outperforms MC-dropout and comes close to the performance of a naive ensemble.\", \"Predictive uncertainty / diversity. BatchEnsemble seems to perform well for uncertainty modelling in contextual bandits relative to a number of baselines.\", \"How can the method be used as a basis for future work? It would be good to see some discussion of whether and how BatchEnsemble could be combined with other neural network ensemble methods.\"], \"minor_comments_not_affecting_review\": [\"Section 4.4, paragraph 2, line 1 \\u201cuncertainty\\u201d misspelt.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper aims to improve the efficiency of ensembles of neural nets in traditional supervised learning and life-long learning (learning on a series of tasks). The main idea is to let all the neural nets in an ensemble share the same weights W for each layer, and the weights for each neural net is generated by the Hadamard product of W and a specific rank-one matrix of the same size as W that is different across members in the ensemble. In experiments, they evaluate the method with some baselines on life-long learning, traditional classification, NMT tasks, and uncertainty modeling.\", \"the_paper_relates_the_proposed_method_to_several_different_learning_problems_and_applications_and_lists_many_potential_advantages_in_these_applications\": \"it covers a lot of things. However, it lacks in-depth discussion to several key problems, rigorous analysis or complete experimental study to support the main claims, for example:\\n\\nWhy can the simple method achieve a more compelling trade-off between accuracy and efficiency/memory costs comparing to a large single model or a naive ensemble of small models? Any mathematical or information-theoretical explanation behind that?\\n\\nIt is easy to understand that the ensemble defined here can improve efficiency and reduce memory cost. But as an alternative to the naive ensemble, we also expect the performance to not suffer from severe drawbacks. How to control efficiency-performance trade-off in the proposed method?\\n\\nHow were the baselines for each experiment selected? How to determine the specific setting in each experiment (any reason behind choosing the parameters in the settings)?\\n\\nIn the life-long learning settings, the shared weights W is only trained on the first task and then keeps fixed: this can leads to both large variance and bias. Why does it simply work well without causing any serious problems?\\n\\nThe rank-one extension of a shared model W enforces a very strong regularization to the model for each task. Will the method work promisingly when the tasks are more different from each other or harder to solve? For example, what if we increase the classes in each task? Is the rank-one extension still flexible and expressive enough to handle this situation?\\n\\nThese are some of the most important questions needed to be answered in the first place before showing higher evaluation metrics and listing the potential advantages of the proposed method. But it is not clear to me at all how they can be answered according to the contents in the current paper. I notice that the authors mentioned the last two questions at the end of Section 3, but no explanations/discussions were given.\", \"other_major_concerns\": \"1) Mathematically, comparing to single model Wx, the proposed ensemble method equals to applying a dimension-wise scaling to the input x and a dimension-wise scaling to the output Wx, and the scaling factors vary across different tasks. Hence, the proposed structure is exactly the same as fine-tuning two groups of batch normalization scaling factors before and after applying transformation W. 
It does not make much sense in the experiments that the performance of BN-Tuned in Figure 3a is much worse than the proposed method since they share exactly the same structure and math (note the memory and computational costs are also the same). The paper does not give an explanation about this. Moreover, the baseline BN-Tuned is only compared on only one of those datasets in the paper. It should be one of the most important baselines and needs to be compared in all experiments.\\n\\n2) On each benchmark dataset (except the last one), only 1-2 baselines are compared and most baselines are not state-of-the-art methods or not methods specifically designed for the problem (e.g., many are dropout and its variants). This makes the comparisons not convincing, especially considering that the experimental settings are determined by the authors and might be chosen for the best performance of the proposed method.\\n\\n3) At least two baselines should be included in all experiments: 1) single model with the equal number of model parameters, and 2) naive ensemble not sharing parameters across member models. However, each experiment only includes one or even none of these two baselines.\\n\\n4) Memory and training/test computational costs need to be reported for each experiment. However, the currently reported results are incomplete here and there.\\n\\n5) Comparing to the currently limited number of baselines on the incomplete evaluation metrics, the proposed method does not show significant improvements, for example, the results in Figure 4, Table 1 and Table 2.\\n\\n6) The proposed method requires the models for different tasks should have exactly the same architecture. This could be a strong limitation in many scenarios. For example, when different tasks have significantly different numbers of classes.\"}"
]
} |
H1gWyJBFDr | Fully Convolutional Graph Neural Networks using Bipartite Graph Convolutions | [
"Marcel Nassar",
"Xin Wang",
"Evren Tumer"
Graph neural networks have been adopted in numerous applications ranging from learning relational representations to modeling data on irregular domains such as point clouds, social graphs, and molecular structures. Though diverse in nature, graph neural network architectures remain limited by the graph convolution operator, whose input and output graphs must have the same structure. With this restriction, representational hierarchy can only be built by graph convolution operations followed by non-parameterized pooling or expansion layers. This is very much like early convolutional network architectures, which were later replaced by more effective parameterized strided and transpose convolution operations in combination with skip connections. In order to bring a similar change to graph convolutional networks, here we introduce the bipartite graph convolution operation, a parameterized transformation between different input and output graphs. Our framework is general enough to subsume conventional graph convolution and pooling as its special cases and supports multi-graph aggregation, leading to a class of flexible and adaptable network architectures, termed BiGraphNet. By replacing the sequence of graph convolution and pooling in hierarchical architectures with a single parametric bipartite graph convolution, (i) we answer the question of whether graph pooling matters, and (ii) accelerate computations and lower memory requirements in hierarchical networks by eliminating pooling layers. Then, with concrete examples, we demonstrate that the general BiGraphNet formalism (iii) provides the modeling flexibility to build efficient architectures such as graph skip connections and autoencoders. | [
"Graph Neural Networks",
"Graph Convolutional Networks"
] | Reject | https://openreview.net/pdf?id=H1gWyJBFDr | https://openreview.net/forum?id=H1gWyJBFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"NE3C5d_yK",
"HklAVQc3iH",
"S1gZufcnjr",
"Byl2wZchjS",
"SJlhsq6d9S",
"SkeIGYybcH",
"H1exCVyg9H",
"HkxFvEe6uH",
"ryen8-Qx_B"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798723822,
1573851958338,
1573851753327,
1573851492284,
1572555428252,
1572038926203,
1571972295558,
1570731104949,
1569890644159
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1458/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1458/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1458/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1458/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1458/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1458/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1458/Authors"
],
[
"~Chaoyang_He1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All three reviewers are consistently negative on this paper. Thus a reject is recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you Reviewer #1\", \"comment\": \"We thank the reviewer for their suggestions and comments.\\n\\n1. The reviewer is correct that bigraphnet layer still requires a separate clustering (or expansion) block as indicated by fig 1. This block can be precomputed (non-learnable, non-parameterized) such as voxel grid for point cloud data or data-driven (learnable, and parametric) such diffpool and gpool. In fact, bigraphnet supports any arbitrary input/output graph structures, a more general case than clustering input graph where each output vertex is assigned one of the mutually disjoint clusters of input vertices, like in DiffPool. Fig 1 used the dashed-line to denote the clusters of nodes (while using dashed line to denote non-parametric) suggesting that the clustering is non-parametric; this was not intentional and will be corrected (there is no restriction on the learnability of the clustering). \\nThe main advantage of the bigraphnet part is the parametrization of the reduction part of the graph convolution operation as opposed to the node selection done in learnable pooling like diffpool and gpool. The bigraphnet architecture is complementary to the different pooling techniques mentioned above and can be made differentiable and dynamic using those techniques. Another way is that it can be used to speed up some of those techniques.\\n \\n2. Graph NNs have been used on image data (for example in ECC): in this formulation each pixel is a node with its rgb value as its feature. From this view, a strided convolution (a parametric operation) computes new representation on a downsampled image which is a subset of the original image graph. We used the concept only as a high-level motivation, and will clarify this in the updated manuscript. \\n \\n3. While we agree with the reviewer about other potential interesting experiments to run, there are several reasons we believe our current set of experiments are convincing in demonstrating the promise of our fully convolutional approach. Our intention in this paper is not to achieve state-of-the-art performance, but rather to (1) propose a new graph formalism that allows tremendous flexibility in expression, and (2) demonstrate that by replacing the pooling mechanisms in an existing GNN with our fully convolutional approach, while keeping parameter count and other operations constant, can improve performance while significantly reducing memory consumption by 2x and inference times by ~25%. This experiment best isolates the contribution of our proposal, rather than chasing SOTA. In addition, by comparing to ECC which have an extensive array of graph application, we feel that we demonstrated the wider application domains of this formalism.\"}",
"{\"title\": \"Thank you Reviewer #3\", \"comment\": \"We appreciate the reviewer's comments:\\n\\n1. We appreciate the criticism. We indeed agree the analogy to strided CNNs is simply motivational used to only draw a parallel to this highly effective type of convolution used in modern CNNs (We believe the bipartite graph convolution will take a similar role for large GNNs). However, none of the formulation depends on any type of analogy with strided convolutions and we demonstrated with a comprehensive set of experiments that our proposed BiGraphNet operation sufficed to eliminate the graph pooling operations altogether, just like explicit pooling layers are no longer used in recent CNN architectures. We will update the manuscript to emphasize on the concrete results over the high-level motivation. \\n\\n2. We have responded to similar comments from other reviewers, please see our other written responses. Our goal is not to set SOTA, but to use an existing GNN model with explicit graph pooling, and replace with BiGraphNet to show comparable performance at substantial memory (2x) savings and 25% faster compute. For this goal, the ECC model is an appropriate set of experiments across both graph and the variety datasets the focus on large graphs that need pooling to fit onto GPUs.\\n\\n3. We apologize for the oversight of this relevant paper, and will include it in Related Work of the revision.\"}",
"{\"title\": \"Thank you Reviewer #2\", \"comment\": \"Thanks you for your comments, and your appreciation of the novelty of our proposed graph formalism. We address your comments below:\\n1. We chose the ECC model because it reports results on extensive set of applications (3D vision, molecular graphs, images ...) instead of multiple datasets of the same domain such as the typically used citation networks and because it extensively uses architectures that use pooling which are necessary for 3D vision applications and some of the molecular datasets selected in our experiments (due to their large size). Our goal here is not to beat SOTA, but rather perform experiments that isolate our proposed layer; e.g. to show, for existing GNN architectures with explicit graph pooling, a drop-in replacement with BiGraphNet pooling is more efficient in computation (2x less memory usage and 25% faster) without performance degradation. \\n\\n2. Due to the abovementioned scope of the current study, we did not intend to do GNN architectural search to advance state-of-the-art of performance. This is why optimization of architectural hyperparameters such as network depth are not a relevant ablation study here. Rather, we focus on taking a published GNN model with explicit pooling and replace the graph pooling with BiGraphNet modules while holding all other architectural hyperparameters exactly the same, in order to have a fair comparison. We believe our contributions in the novel graph formalism, and demonstrated gains in an isolated comparison, our sufficient to warrant inclusion rather than achieving SOTA.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduced a novel parametrized graph operation called bipartite graph convolution (BGC). The proposed bipartite graph convolution layer functions as a regular graph convolution followed by a graph pooling layer, but it uses less memory. Also, the BGC layer can be used to aggregate multiple different graphs with various number of nodes. This paper further discussed the possibility of extending it to construct bipartite graph U-net structure with skip connections. Experimental evaluations have been focused on (1) comparing BGC against regular graph convolution layer followed by graph pooling layer in terms of classification accuracy and memory cost; and (2) comparing the regular graph-AE with the graph U-Net built on the proposed BGC layer with the unsupervised feature learning task.\\n\\nOverall, reviewer is very positive about the technical novelty of the paper. However, the experimental results seem not very strong. \\n\\n(1) The ECC model (Simonovsky and Komodakis, 2017) is no longer the state-of-the-art one on ModelNet. Please consider more recent papers such as the following one. Besides that, the performance delta seems very incremental.\\n\\n-- Dynamic Graph CNN for Learning on Point Clouds. Wang et al. In ACM Transactions on Graphics, 2019. \\n\\n(2) The current results are not very convincing as only one network structure is compared for each of the experiment. The ablation studies on graph structure (e.g., number of layers) are currently missing (Figure 4 and Table 1).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new graph neural network model named BiGraphNet, which introduces a parameterized bipartite graph convolution operation to perform transformation between input and output graphs. The proposed method is claimed to have advantages over existing deep hierarchical GCN architectures mainly in terms of being able to construct analogous building blocks employed by modern lattice CNN architectures and the reduced computational and memory cost. The main weaknesses of this paper are listed as follows:\\n\\n1) The motivation is relatively weak, which is to bring in the analogous building blocks in CNN architectures. Although GNN is closely related to CNN and RNN, the graph learning tasks may not have the same property as in computer vision or natural language processing. It would be better to convince the readers from the GNN itself and carefully argue the necessity of the proposed method.\\n\\n2) The experiments in this paper are rather weak and not convincing. First there is no performance comparison to state-of-the-art GNN models, such as DGCNN, DIFFPOOL and GIN, etc. At least on the D&D dataset, many existing models report graph classification accuracy over 78.0, but the baseline method used in this paper only achieves 72.5. Thus it is not fair to claim the proposed method can retain or improve the performance of existing GNN models.\\n\\n3) The related work comparison is not sufficient. For example, some existing works have already explored to apply skip connections to the graph neural networks, such as [1], which is not mentioned and compared in this paper.\\n\\nBased on the above arguments, I would like to recommend a reject for this paper.\\n\\n\\n[1] Xu, Keyulu, et al. \\\"Representation learning on graphs with jumping knowledge networks.\\\" arXiv preprint arXiv:1806.03536 (2018).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes BiGraphNet, which proposes to replace the graph convolution and pooling with a single bipartite graph convolution. Its motivation comes from using stride(>1) convolution to replace pooling in CNN. The authors claim that the computation and memory can be reduced with the proposed bipartite graph convolution, because the pooling layers are removed. The authors also conduct experiments about graph skip connection and graph encoder-decoder to show that their method's flexibility.\", \"cons\": \"1. If I understand it correctly, the bipartite graph convolution still needs a cluster algorithm to determine the output graph, which is identical to cluster-based pooling methods like DiffPool. In addition, previous pooling methods like DiffPool, gPool are NOT non-parametric as suggested by Figure 1. Therefore, the advantage of the proposed method is vague.\\n2. The idea of bipartite graph convolution seems different from that of stride convolution. The connection should be better explained.\\n3. The experiments of this paper are not very convincing. Comparison with more baselines and ablation study are needed to demonstrate the effectiveness of this method. On graph classification tasks, many other methods (GCN with pooling) are worth comparing with, like DiffPool, SAGPool, gPool, etc. More datasets should be included. In addition, it will be more convincing to do ablation study, e.g. single layer replacement.\"}",
"{\"comment\": \"thank you for pointing this out. we will update the citation in the manuscript.\", \"title\": \"updated citation\"}",
"{\"comment\": \"Hi, thanks for your citation of our paper (https://arxiv.org/abs/1906.11994). We just released our second version of this paper, please revise the reference description. Thanks.\", \"title\": \"Update our citation\"}"
]
} |
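Illustrative aside for the BiGraphNet record above: a minimal sketch of what a bipartite graph convolution could look like, assuming a precomputed, row-normalized bipartite adjacency between output and input nodes. All names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def bigraph_conv(x_in, bip_adj, weight):
    """Bipartite graph convolution sketch: each output node aggregates
    features from its input-graph neighbors and applies shared weights,
    fusing convolution and pooling into one parametric operation.

    x_in:    (n_in, d_in)  input-node features
    bip_adj: (n_out, n_in) row-normalized bipartite adjacency
    weight:  (d_in, d_out) shared transformation
    returns: (n_out, d_out) output-node features
    """
    return np.maximum(bip_adj @ x_in @ weight, 0.0)  # aggregate, transform, ReLU

# Toy usage: 6 input nodes pooled to 2 output nodes (each averaging 3 inputs).
x = np.random.randn(6, 4)
adj = np.kron(np.eye(2), np.full((1, 3), 1.0 / 3.0))
w = np.random.randn(4, 8)
print(bigraph_conv(x, adj, w).shape)  # (2, 8)
```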
rJeW1yHYwH | Inductive representation learning on temporal graphs | [
"da Xu",
"chuanwei ruan",
"evren korpeoglu",
"sushant kumar",
"kannan achan"
] | Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should represent both the static node features and the evolving topological structures. Moreover, node and topological features can be temporal as well, whose patterns the node embeddings should also capture. We propose the temporal graph attention (TGAT) layer to efficiently aggregate temporal-topological neighborhood features and to learn the time-feature interactions. For TGAT, we use the self-attention mechanism as a building block and develop a novel functional time encoding technique based on the classical Bochner's theorem from harmonic analysis. By stacking TGAT layers, the network recognizes the node embeddings as functions of time and is able to inductively infer embeddings for both new and observed nodes as the graph evolves. The proposed approach handles both node classification and link prediction tasks, and can be naturally extended to include temporal edge features. We evaluate our method with transductive and inductive tasks under temporal settings on two benchmark datasets and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as previous temporal graph embedding approaches. | [
"temporal graph",
"inductive representation learning",
"functional time encoding",
"self-attention"
] | Accept (Poster) | https://openreview.net/pdf?id=rJeW1yHYwH | https://openreview.net/forum?id=rJeW1yHYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-cTQT4VjPy",
"HklEYpDwoS",
"Hyg83ASHoS",
"SJgLWABBir",
"SkeUKTBrjB",
"HJgpzq645S",
"B1e4OGa6tB",
"S1gCZwh2Kr",
"S1ef0VW4FS",
"BJlz1NRkur",
"rJxsCdcJ_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1576798723793,
1573514620246,
1573375661737,
1573375486472,
1573375357987,
1572293140568,
1571832428426,
1571763974260,
1571194058189,
1569870809624,
1569855699481
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1457/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1457/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1457/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1457/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1457/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1457/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1457/AnonReviewer1"
],
[
"~Seyed_Mehran_Kazemi1"
],
[
"ICLR.cc/2020/Conference/Paper1457/Authors"
],
[
"~Seyed_Mehran_Kazemi1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The major contribution of this paper is the use of random Fourier features as temporal (positional) encoding for dynamic graphs. The reviewers all find the proposed method interesting, and believes that this is a paper with reasonable contributions. One comment pointed out that the connection between Time2Vec and harmonic analysis has been discussed in the previous work, and we suggest the authors to include this discussion/comparison in the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"The authors responded satisfactorily and updated the paper with better clarity. I appreciate the job and maintain the assessment. A minor point to clarify regarding the question of the number of embedding vectors, is that I actually meant the other methods (not this paper). But since other methods do not take a functional treatment but use discrete positions, the question was meaningless in retrospect.\"}",
"{\"title\": \"To Official Blind Review #1\", \"comment\": \"We thank the reviewer for the careful reading and valuable comments. According to the feedback, we added several additional experiments and explanations in the revised version of our paper. The added/modified contents are marked by RED fonts so that other reviewers are also aware of the changes we made.\\n\\nTo Question \\u201cIn the ablation study, what exactly is the original positional encoding? Are they learned embedding vectors?\\u201d\\nThe results we reported in the original submission are the learnt positional embedding vectors, where they are jointly optimized as free model parameters. In the revised paper, we also add the fixed position encoding suggested by Vaswani et al. (2017) in the ablation study (the green bar in Figure 3). We see that the fixed positional encoding is slightly outperformed by learnt positional encoding. \\n\\nTo Question \\u201cSince the authors consider continuous time rather than discrete time, how many embedding vectors are there?\\u201d\\nThe time encoding approach proposed in our paper is entirely functional, which means it is a vector function of time. So the functional time encoding $\\\\Phi(.)$ takes any time value (timespan) as input and outputs a $d_T$-dimensional vector as the representation of the time value (timespan). The functional form is presented in Eq5. The ability to handle continuous variable is a major advance of our approach compared with the vast majority of prior work on representation learning with discrete variables. \\n\\nDiscussions on the concerns in the limitation of the stationarity assumption induced by using Fourier features. \\nThe reviewer raises a very good point, which touches on the implicit assumption made by our approach that we model the relative temporal information (timespan) instead of the absolute time. We agree that the absolute time can contain useful non-stationary temporal signals, such as the seasonality. Our approach does not take such perspective into consideration. The solution provided by the reviewer points out one possible direction, or we could treat the absolute time information as covariates and directly include them into the model, which we shall leave to future work.\"}",
"{\"title\": \"To Official Blind Review #2\", \"comment\": \"We thank the reviewer for the careful reading and valuable feedback. First of all, we apologize the typos, grammar mistakes and unclear notations. We will correct them in the final version. According to the feedback, we added several additional experiments and explanations in the revised version of our paper. The added/modified contents are marked by RED fonts so that other reviewers are also aware of the changes we made.\\n\\nOur response to the cons and questions are listed as below.\", \"to_q1\": \"The attention mechanism employed by GAT is very different from our approach. And it is due to the different formulations that our approach works better than GAT as well as the enhanced version of GAT (GAT+T in Table 1,2,3) which operates by concatenating our time encoding to the node features. The detailed comparisons between the attention mechanism of our approach and the GAT are provided in Appendix A.2. Therefore, the major contribution of our work is the functional time encoding as well as the graph neural network architecture.\", \"to_q2\": \"This is a great series of questions. Model interpretation remains to be a key challenge for deep learning models. We decide to look into the \\u201cblack box\\u201d by ad-hoc model analysis on the attention weights. We refer the reviewer to the new Section 4.6 (Attention Analysis) in the revised paper for the detailed results and analysis.\", \"to_q3\": \"We thank the reviewer for pointing out the improper use of \\u201carchitect\\u201d. We have replaced \\u201carchitect\\u201d with \\u201carchitecture\\u201d in the revised version.\\n\\nTo Q4.1: \\nIt is true that the introduction on self-attention in Section 2 is not self-contained since we have assumed certain background knowledge from readers. In the revised version, we provide some additional introductions in Section 2 to build more connections between prior work and our approach.\\n\\nTo Q4.2: \\nWe have mentioned in the first sentence of Section 3.1 that $d_T$ is the dimension of the time encoding functional space. And since we are using time encoding to replace the positional encoding in Eq1, we have implicitly assumed that $d_T=d_{pos}$. In the revised paper, we provide additional explanations in the beginning of Section 3.1 for better clarifications.\\n\\nTo Q4.3: \\nWe agree that the statement original statement on \\u2018reparameterization trick\\u2019 is not rigorous, and we thank the reviewer for pointing this out. In the revised paper, we replace the statement with:\\n\\u201cHowever, the reparameterization trick is often limited to certain distributions such as the \\u2019local-scale\\u2019 family, which may not be rich enough for our purpose. For instance, when $p(\\\\omega)$ is multimodal it is difficult to construct the underlying distribution via direct reparameterizations.\\u201d\\nIndeed, the underlying distribution of $\\\\omega$ is unknown, so there is no way to justify if it is truly out of the range of direct reparameterization. Therefore, when selecting the appropriate distribution learning approach, we prefer models with higher complexity (larger parameter space in this case). \\n\\nTo Q4.4:\\nIn Eq6, the time $t$ is the target time at which we wish to obtain the embedding, and $t_i$ is the time when the target node interacts with its neighboring node $v_i$. Therefore, $t \\u2013 t_i$ is the timespan between the target time and the prior interaction time of $v_0$ (target node) and $v_i$. 
\\n\\nFinally, we once again thank the reviewer for the time and efforts in reviewing our paper. Your feedbacks are very important for us improving our work. We look forward to further comments and discussions.\"}",
"{\"title\": \"To Official Blind Review #4\", \"comment\": \"First of all, we want to thank for the reviewer for the careful reading and constructive comments. According to the feedback, we added several additional experiments and explanations in the revised version of our paper. The added/modified contents are marked by RED fonts so that other reviewers are also aware of the changes we made.\", \"the_additional_experiments_we_conducted_are\": \"1.\\tGraphSAGE-mean + time encoding (GraphSAGE+T) by concatenating time embedding with node features for all three tasks on all datasets (Table 1,2,3 in Page 8,9);\\n2.\\tGAT + time encoding (GAT+T) by concatenating time embedding with node features for all three tasks on all datasets (Table 1,2,3 in Page 8,9);\\n3.\\tSensitivity analysis on the number of heads and number of layers of the proposed TGAT (Figure 7c in Page 18).\", \"the_relevant_explanation_we_added_according_to_the_feedback_is\": \"1.\\tA detailed comparison between the attention mechanism of our approach and the GAT (Appendix A.2).\\n\\nOur analysis on the additional experiments are provided in Section 4.3 and 4.5 in the revised paper. In general, equipping GraphSAGE and GAT with our time encoding does lead to slightly improved performances uniformly across all tasks and datasets. However, the proposed TGAT still surpass the enhanced baselines with significant margins in most cases. On one hand, the results suggest that the time encoding have potential to help extend non-temporal graph representation learning methods to temporal settings. On the other, we see that the time encoding still works the best with our network architecture which is designed for temporal graphs.\\n\\nThe additional sensitivity analysis in Figure 7c suggests that using three attention heads with two layers gives the best performances. Using only a single head may suffer from under-fitting issues on the dataset we experimented on, since we observe increased metrics with using two and three heads. In all our experiments, we treat the number of heads as a tuning parameter, since its behavior may vary on different datasets\\n\\nFinally, we point out that there are significant differences between the attention mechanism employed by our approach and the GAT. The side-by-side comparisons between the two attention formulations as well as the justifications are provided in Appendix A.2. \\n\\nAgain, we express our gratitude to the reviewer for the time and effort in reviewing our paper. Your feedbacks are very important for us improving our work. We look forward to further comments and discussions.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposed the temporal graph attention layer which aggregates in-hop features with self-attention and incorporates temporal information with Fourier based relative positional encoding. This idea is novel in GCN field. Experimental results demonstrate that the TGAT which adds temporal encoding outperforms the other methods. Overall this paper addressed its core ideas clearly and made proper experiments and analysis to demonstrate the superiority against existing counterparts.\\n\\nThere are some things need to be further answered. The baselines compared in this paper seems to be too weak. For example, how does T-GraphSage (GraphSAGE+Temporal encoding) work? How does the single-head variant of TGAT work? How does the original GAT work plus temporal encoding (as I notice TGAT uses self-attention which is similar but may not be equivalent to original GAT attention formulation, are they equivalent or not?)\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper addresses the problem of representation learning for temporal graphs. That is, graphs where the topology can evolve over time. The contribution is a temporal graph attention (TGAT) layer aims to exploit learned temporal dynamics of graph evolution in tasks such as node classification and link prediction. This TGAT layer can work in an inductive manner unlike much prior work which is restricted to the transduction setting. Specifically, a temporal-kernel is introduced to generate time-related features, and incorporated into the self-attention mechanism. The results on some standard and new graph-structured benchmarks show improved performance vs a variety of baselines in both transduction and inductive settings.\", \"pros\": \"+ Dynamic graphs are an important but challenging data structure for many problems. Improved methods in this area are welcome. \\n+ Dealing with the inductive setting is an important advantage. \\n+ Clear performance improvements on prior state of the art is visible in both transductive+inductive settings and node+edge related tasks.\\n\\nCons+Questions:\\n1. Technical significance: Some theory is presented to underpin the approach, but in practice it seems to involve concatenating or adding temporal kernels element-wise to the features already used by GAT. In terms of implementation the concatenation in Eq 6 seems to be the only major change to GAT. I\\u2019m not sure if this is a major advance. \\n2. Insight. The presented method apparently improves on prior work by learning something about temporal evolution and exploiting it in graph-prediction tasks. But it's currently rather black-box. It would be better if some insight could be extracted about *what* this actually learns. What kind of temporal trends exist in the data that this method has learned? And how are they exploited in by the prediction tasks?\\n3. Writing. The English is rather flaky throughout. One particular recurring frustration is the use of the term \\u201carchitect\\u201d which seems wrong. Probably \\u201carchitecture\\u201d is the correct alternative. \\n4. Clarity of explanation. The paper is rather hard to follow and ambiguous. A few specific things that are not explained so well: \\n4.1. Eq 1+2 is not a sufficiently clear and self-contained recap of prior work. \\n4.2. Symbol d_T used at the start of Sec 3.1 seems to be used without prior definition making it hard to connect to previous Eq1+2. \\n4.3 The claim made about alternative approaches (Pg4) \\u201cReparameterization is only applicable to local-scale distribution family, which is not rich enough\\u201d. Seems both too vague and unjustified. \\n4.4 The relationship between $t_i$ and the neighbours of the target node in Eq. 6 is not very clear.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The major contribution of this paper is the use of random Fourier features as temporal (positional) encoding for dynamic graphs. These encodings are concatenated with standard node embeddings in transformer-like attention calculations for graph message passing. The reader finds that the proposed approach is interesting.\\n\\nExperimental results are also favorable.\", \"concern\": \"Whereas the use of random Fourier features (RFF) is well justified, a limitation is that it is based on a stationarity assumption. Thus, it may be less applicable to nonstationary structural changes. To cope with nonstationarity, a straightforward idea is to parameterize the temporal encoding by using neural networks rather than RFF. In the authors' approach, the RFF is in a sense parameterized, because the frequencies omega are learned. Nevertheless, the stationarity limitation persists.\", \"question\": \"In the ablation study, what exactly is \\\"the original positional encoding\\\"? Are they learned embedding vectors? Since the authors consider continuous time rather than discrete time, how many embedding vectors are there?\"}",
"{\"comment\": \"Thanks for the reply.\\n\\nThe connection between Time2Vec and harmonic analysis has been discussed in Section 5.3 of [1] (Section 5.4 shows empirically that learning the frequencies from data instead of fixing them results in better performance).\\n\\nHowever, I agree with the authors that there currently exists no results (or methodology) on using Time2Vec for dynamic graphs and the proposed methodology is novel and interesting.\", \"title\": \"Time2Vec\"}",
"{\"comment\": \"We thank the commentary for pointing out the related work of Time2Vec. We would like to point out several fundamental differences between our proposed functional time encoding and Time2Vec.\\n\\nFirstly, our time encoding is motivated by the harmonic analysis and comes with solid theoretical justifications and guarantees, where Time2Vec is more heuristic-driven. For learning the functional representation of time, we refer to the classical harmonic analysis to convert the challenge of learning functional time encoding to the kernel and distributional learning problems that have been established in machine learning literature. We then prove the stochastic uniform convergence property for our proposed approach. \\n\\nSecondly, by couping with self-attention, we propose a whole network architecture to effectively apply the functional time encoding to learn representations on temporal graphs. By the time of our submission, there is no evidence that Time2Vec can be adapted to learn representations for temporal graphs.\\n\\nWe want to thank the commentator for mentioning the recent survey on dynamic graphs. We will add references to several heuristic-driven time to vector approaches such as Time2Vec in our next version, and discuss on the above points upon reviewers' suggestions.\", \"title\": \"Reply to connection to Time2Vec\"}",
"{\"comment\": \"Thanks for the very interesting work!\\n\\nIt seems to me that the vector representation proposed for time in this work is almost identical to Time2Vec [1] (with similar motivations). The connection between Time2Vec and positional encoding has also been established in [1]. I believe the connection to Time2Vec should be highlighted in the paper. Nevertheless, combining GAT with Time2Vec for inductive representation learning on dynamic graphs is quite interesting. BTW, you may be interested in a recent survey we wrote on dynamic graphs [2].\\n\\n[1] https://arxiv.org/abs/1907.05321\\n[2] https://arxiv.org/abs/1905.11485\", \"title\": \"Connection to Time2Vec\"}"
]
} |
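Illustrative aside for the TGAT record above: a minimal sketch of a Bochner-style functional time encoding of the kind discussed in the reviews and responses (frequencies feeding cos/sin features, learnable in the paper, fixed samples here). This is an assumed standard random-Fourier-feature form, not the paper's exact Eq5; all names are illustrative.

```python
import numpy as np

class FunctionalTimeEncoding:
    """Maps a scalar timespan t to a d-dimensional vector
    sqrt(1/d) * [cos(w_1 t), sin(w_1 t), ..., cos(w_{d/2} t), sin(w_{d/2} t)].
    In TGAT the frequencies are trained jointly; here they are fixed samples."""

    def __init__(self, dim, seed=0):
        assert dim % 2 == 0, "use an even encoding dimension"
        self.dim = dim
        self.freqs = np.random.default_rng(seed).standard_normal(dim // 2)

    def __call__(self, timespans):
        t = np.asarray(timespans, dtype=float)[..., None]              # (..., 1)
        angles = t * self.freqs                                         # (..., d/2)
        feats = np.concatenate([np.cos(angles), np.sin(angles)], axis=-1)
        return np.sqrt(1.0 / self.dim) * feats                          # (..., d)

enc = FunctionalTimeEncoding(dim=8)
print(enc([0.0, 1.5, 10.0]).shape)  # (3, 8): one encoding per timespan
```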
Bkel1krKPS | Attention on Abstract Visual Reasoning | [
"Lukas Hahne",
"Timo Lüddecke",
"Florentin Wörgötter",
"David Kappel"
] | Attention mechanisms have been boosting the performance of deep learning models on a wide range of applications, ranging from speech understanding to program induction. However, despite experiments from psychology which suggest that attention plays an essential role in visual reasoning, the full potential of attention mechanisms has so far not been explored to solve abstract cognitive tasks on image data. In this work, we propose a hybrid network architecture, grounded in self-attention and relational reasoning. We call this new model Attention Relation Network (ARNe). ARNe combines features from the recently introduced Transformer and the Wild Relation Network (WReN). We test ARNe on the Procedurally Generated Matrices (PGMs) datasets for abstract visual reasoning. ARNe outperforms the WReN model on this task by 11.28 ppt. Relational concepts between objects are learned efficiently, demanding only 35% of the training samples to surpass the reported accuracy of the baseline model. Our proposed hybrid model represents an alternative for learning abstract relations using self-attention, and demonstrates that the Transformer network is also well suited for abstract visual reasoning. | [
"Transformer Networks",
"Self-Attention",
"Wild Relation Networks",
"Procedurally Generated Matrices"
] | Reject | https://openreview.net/pdf?id=Bkel1krKPS | https://openreview.net/forum?id=Bkel1krKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ZQkRCxJ3HQ",
"BklsYnBhoB",
"BygwFWrJcr",
"SyeRK47CFr",
"SJlIFDe6KH",
"HkgWMFWhKr",
"S1lysesBOH"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798723763,
1573833859021,
1571930494565,
1571857542184,
1571780477950,
1571719433393,
1570250903189
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1456/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1456/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1456/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1456/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1456/AnonReviewer2"
],
[
"~Hyunjae_Kim1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work proposes a new architecture for abstract visual reasoning called \\\"Attention Relation Network\\\" (ARNe), based on Transformer-style soft attention and relation networks, which the authors show to improve on the \\\"Wild Relation Network\\\" (WReN). The authors test their network on the PGM dataset, and demonstrate a non-trivial improvement over previously reported baselines.\\n\\nThe paper is well written and makes an interesting contribution, but the reviewers expressed some criticisms, including technical novelty, unfinished experiments (and lack of experimental details), and somewhat weak experimental results, which suggest that the proposed ARNe model does not work well when training with weaker supervision without meta-targets. Even though the authors addressed some concerns in their revised version (namely, they added new experiments in the extrapolation split of PGM and experiments on the new RAVEN dataset), I feel the paper is not yet ready for publication at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewers\", \"comment\": [\"We thank the reviewers for their comments and suggestions. Based on the reviews, we made the following changes to the paper:\", \"We added an evaluation of the model's performance on the extrapolation split.\", \"We conducted experiments on the new RAVEN dataset and report the results\"], \"response_to_the_key_criticism\": [\"Of course, the dependency of the ARNe model on labeled data is a limitation. However, this requirement only affects training. At test time, the model does not need auxiliary labels.\", \"The neutral split of the PGM dataset was used by [1] to compare with other models. Therefore the argument that the neutral split was not intended to be an interesting challenge seems misplaced. Nonetheless, the performance on other splits is interesting. Hence, we added results on the extrapolation split (more splits were not possible in this rebuttal period due to time constraints).\", \"We agree that additional datasets could strengthen the paper. A small experiment on RAVEN was added.\", \"[1] Santoro et al., 2018: Measuring abstract reasoning in neural networks\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This work proposes a new architecture for abstract visual reasoning, based on Transformer-style soft attention and relation networks. The authors test their network on the PGM dataset, and demonstrate a non-trivial improvement over previously reported baselines.\\n\\nIn general, abstract reasoning is an important field of current study in neural network-based machine learning, as it is an area that has notoriously eluded these types of models historically. The paper is reasonably well put together, and I have no reason to question the various technical aspects of the work.\\n\\nUnfortunately, I think there are significant shortcomings. Firstly, the PGM dataset was designed to stress out-of-distribution generalization, and performance on the Neutral split was not proposed as a particularly interesting challenge on its own. This is because, as the name implies, abstract reasoning requires the ability to identify abstract conceptual features of the data and compose them in novel ways at test time, which is *not* a feature of the neutral split. The authors are encouraged to run their model on these other generalization splits. \\n\\nSecond, there seems to be little value to the field overall for research involving minor architectural improvements for single datasets. If the authors believe in this method, they are encouraged to demonstrate its effectiveness on a wide variety of data types. On this note, I should add that the authors are incorrect to state that this is the first work to use self-attention for abstract reasoning (please see Zambaldi, 2018 for one example of many papers that have incorporated self-attention into convolutional architectures). \\n\\nSo to sum up, while this work broaches an interesting subject and is technically fine, it does not surpass the threshold for acceptance because it fails to demonstrate the usefulness of the method on the task at hand, as well as broad utility of the proposed method.\"}",
"{\"comment\": \"Thank you for your comment. In the following, we reply to each criticized point.\", \"generalization\": \"We agree that this would enable further insights. We have tested it on the provided extrapolation dataset. ARNe achieved a performance of 17.76% which is slightly better than WReN. Unfortunately, we do not have the resources to conduct experiments of other PGM configurations: They would require large store capacities as well as extensive computations.\", \"raven_benchmark\": \"Thank you for pointing out this new benchmark. We evaluated our model on RAVEN and found it achieve a performance of 92.23% for 50k samples for each figure configuration. However, using Raven-10000 the performance is 19.67%.\", \"ablation\": \"We replaced the encoder with a MLP of the same depth. In a second experiment we replaced the multi head attention mechanism with a linear transformation. The performances are 35.06% and 44.56% respectively.\\n\\nTo further strengthen the experimental validation of our model, we implemented a combination of WReN and MAC called WReN-MAC and found it achieve 79.6 % on PGM.\\n\\nThe paper will be updated to reflect these additional findings.\", \"title\": \"Further experiments on PGM and RAVEN\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper describes a somewhat novel approach to abstract visual reasoning using transformers in the so-called \\\"Attention Relation Network\\\" (ARNe), which the authors show to improve on the \\\"Wild Relation Network\\\" (WReN). The Transformer is motivated by the role that attention may play in Human information processing - which sounds plausible, but the paper does not expand on this theme.\\n\\nThe paper is well written and makes an interesting contribution, but I feel the results are not quite yet ready for publication. The authors are writing that they are still working on baseline results on the full dataset, which would provide interesting comparisons, and some details on the implementation (number of parameters, etc) are missing - or maybe I missed them.\\n\\nThe learning curve in Figure 3 (sample efficiency, test accuracy) suggests that the ARNe training is not fully stable - why would the model deteriorate when going from ~40% of the training data to ~60%? Is the model potentially overfitting, and how does the size of the proposed model compare to the size of the baseline model(s)? It seems that the field is also moving towards the RAVEN dataset, which presents a more complex structure; it would be more convincing to present results on both datasets, to show that attention can indeed also improve results on more complex setups.\\n\\nThe text in the \\\"Acknowledgments\\\" section should be removed for the camera ready version!\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work introduced an attention-based model to solve the RPM cognitive tasks. The model is based on the transformer network, which performs relational reasoning through its self-attention mechanisms.\", \"technical_novelty\": \"The method seems to be a straightforward application of the transformer network to the PGM task. The technical novelty of the proposed approach is unclear. I\\u2019d love to hear what the authors have to say about the technical contributions of the proposed ARNe model in comparison to prior work.\", \"supervision_with_meta_targets\": \"It also seems that the meta-targets are crucial for attaining a good performance with the ARNe model. According to Table 4, the model without meta-target training (beta=0) only achieved 12% accuracy in train/val/test sets. However, prior work [Santoro* et al. 2018] has demonstrated that even without training on meta-targets, WReN still achieves a performance of over 60% accuracy (Table 1). This result suggests that the proposed ARNe model does not work well when training with weaker supervision without meta-targets. The results could be a lot stronger if the authors show ARNe outperforms the prior work when beta is set to 0.\", \"ablation_studies\": \"This model is only tested in the neutral PGM dataset. The evaluation would be strengthed with the generalization results of this model in different generalization regimes (see Table 1, Santoro* et al. 2018) and comparing its performance with prior works.\"}",
"{\"comment\": \"The proposed model, which is a transformer-based RPM solver, achieved significantly high accuracy in the neutral setting, where the data distribution of training and test sets are the same.\\nI think the idea of adopting the attention mechanism in solving abstract reasoning tasks is good.\\n\\n\\nHowever, the model was not tested on generalization settings such as H.O. Triple Pairs, Interpolation and Extrapolation, where unseen objects and attributes appear during evaluation (Barret et al. 2018).\\nSince the PGM dataset was proposed to evaluate the generalization abilities of models, I believe evaluation should have been conducted not only on the neutral setting, but also the generalization settings.\\n\\nAlso, there is an another benchmark dataset, called RAVEN (Zhang et al. 2019).\\nIt would be better if the proposed model was evaluated on both PGM and RAVEN.\\n\\nLastly, I am concerned that the ablation study in the paper is insufficient.\\nIt is questionable whether the high accuracy in the neutral setting is due to the effectiveness of the self-attention mechanism or just the large model size.\", \"title\": \"Several concerns about your paper\"}"
]
} |
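Illustrative aside for the ARNe record above: a minimal sketch of the scaled dot-product self-attention that a Transformer-style reasoning model applies across RPM panel embeddings. A single head in plain numpy, with assumed shapes rather than the paper's configuration.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a set of
    panel embeddings x of shape (n, d_model)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (n, n) pairwise scores
    scores -= scores.max(axis=-1, keepdims=True)   # softmax numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                             # attended features

# Toy usage: 9 panels (8 context + 1 candidate), embedding size 64.
rng = np.random.default_rng(0)
x = rng.standard_normal((9, 64))
wq, wk, wv = (rng.standard_normal((64, 64)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)  # (9, 64)
```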
SJegkkrYPS | Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training | [
"Noah Gamboa",
"Kais Kudrolli",
"Anand Dhoot",
"Ardavan Pedram"
] | This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity.
We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training.
We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To make sure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable. | [
"Structured Sparsity",
"Sparsity",
"Training",
"Compression",
"Adversarial",
"Regularization",
"Acceleration"
] | Reject | https://openreview.net/pdf?id=SJegkkrYPS | https://openreview.net/forum?id=SJegkkrYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YBAFeS0OS8",
"SJx4ciYciH",
"rkl7uiY9iB",
"SJxtA9K5ir",
"rkgWKqKciH",
"r1xPmfUG9B",
"H1lup7X-5B",
"BJgK9BkUYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723736,
1573718923904,
1573718891166,
1573718736772,
1573718649382,
1572131359424,
1572053952354,
1571317137094
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1455/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1455/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1455/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper concerns a training procedure for neural networks which results in sparse connectivity in the final resulting network, consisting of an \\\"early era\\\" of training in which pruning takes place, followed by fixed connectivity training thereafter, and a study of tradeoffs inherent in various approaches to structured and unstructured pruning, and an investigation of adversarial robustness of pruned networks.\\n\\nWhile some reviewers found the general approach interesting, all reviewers were critical of the lack of novelty, clarity and empirical rigour. R2 in particular raised concerns about the motivation, evaluation of computational savings (that FLOPS should be measured directly), and felt that the discussion of adversarial robustness was out of place and \\\"an afterthought\\\".\\n\\nReviewers were unconvinced by rebuttals, and no attempts were made at improving the paper (additional experiments were promised, but not delivered). I therefore recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author Response\", \"comment\": \"(1) We agree that our method is similar to Narang et al.; however, achieving high levels of accuracy and high levels of structured sparsity for CNNs was missing; we contribute substantial experiments to find the limits of the final level of sparsity and how few epochs we can train for CNN. Also, Narang et al. prune after every iteration while we prune after every epoch (for as little as 20 epochs out of 90), so there are much fewer updates in our method. We grant that the adversarial is not exactly novel. However, our goal was to investigate the robustness in presence of structured sparsity.\\n \\n(2) Thank you for pointing out that this is unclear. The gradual pruning is done as shown in Fig 1, but we will add prose to explain the process. Our goal is to provide structured sparsity, rather than speed up training, that is we would like to reduce the memory footprint but also structure it in a highly regular way that could be exploited by a hardware accelerator. As you say, this is setting parameters to 0, which is the goal of many. Please see Table 4 for a number of them. On top of that, all the hardware accelerators put mechanisms to skip computations with zeros please see reference [Han 2015b] for an example.\\n \\n(3) The background in the introduction regarding speedup seems to have mischaracterized our main intentions, so we will clarify that it is mainly to provide structured sparsity while maintaining accuracy. However, we would like to note that we noted we would get speedups from sparsification because we would reduce the total number of computations [Zhu et al 2019] [Parashar et al 2017] by having a small number of epochs in which we do pruning and fixing the sparsity masks to their final values early in training rather than sparsifying until the end of training or after training. \\n \\nParashar, Angshuman, et al. \\\"SCNN: An accelerator for compressed-sparse convolutional neural networks.\\\" 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2017.\\n \\nZhu, Maohua, et al. \\\"Sparse Tensor Core: Algorithm and Hardware Co-Design for Vector-wise Sparse Neural Networks on Modern GPUs.\\\" Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture. ACM, 2019.\"}",
"{\"title\": \"Author Response Part 2\", \"comment\": \"(8) Our main goal is to make training sparse coarse and simple so an accelerator can exploit savings in sparse computation. So, yes, our goal is to produce the smallest possible trained networks with highest possible accuracy but with a fixed sparsity after a certain epoch and minimum irregular computations to easily deploy on hardware accelerators. We also want to both reduce the cost and also provide a final network that is sparse enough to be deployed without further pruning. We do not focus on reducing the cost of obtaining a pruned network for inference-time. We will make these goals clearer in the introduction.\\n \\n(9) \\nRESPONSES TO UNJUSTIFIED CLAIMS (in order)\\n\\nThe following reference shows the regularization term in the loss of the neural network. This computation requires irregular computation such as norms or square roots that are expensive in hardware. [https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/]\\n[http://faculty.washington.edu/nrsimon/standGL.pdf]\\n \\n-This is because the overheads incurred for compressing/indexing would make it slower than when run in dense format, so these overheads need to be offset higher sparsity levels.[https://kkourt.io/phd/phd-en.pdf], [https://arxiv.org/pdf/1901.07827.pdf]\\n \\n-This only requires Index changes. Weights can still be compressed.\\n \\n-After a certain point in training there won\\u2019t be any recompression and the indirect access to indexes due to compression will be fixed and can be cached. This way the compressed weight matrix can be stored without modifying the compression infrastructure. Only the value of data is changed instead of its location to avoid complicated compression/recompression as in other solution.\\n \\n-We showed that for window pruning we can always use only 4 MAC units hence using half computation.\\n \\n-As discussed above, the regularization often requires the vector norm which requires division (to avoid overflowing) and square root that are very expensive in hardware. \\n \\n-This should be epoch, again our mistake.\\n\\n-This should be epoch, again our mistake.\\n \\n-This answers your question for Section 5 for performance on a fixed number of FPUs. We will make this more clear in the paper.\\n \\n-Our goal is characterization while maintaining accuracy, reducing sparsity masks, and eliminate irregular computations.\\n\\nWe performed experiments with 0-30, 20-40, 30-50, 40-60 in Table 1 and Table 6 (first column).\\n \\n-Both, we will clarify.\\n \\n-You are right but these 34+ experiment datapoints took over 4 months and so were intractable to replicate 3-5 times. We will perform that on select datapoints.\\n \\n-Random pruning is not actually studied in any of the previous work. We cannot replicate every method we compared as we trust the previous work and the validity of their results.\\n \\n-We believe appendix data is providing additional evidence for our findings. We put the main findings in the limited 8 pages of the paper.\\n \\n-While the loss in accuracy is higher, we provide structured sparsity and fix the structure early on in training, meaning unlike in magnitude-based pruning, we do move the weights around after the structure is fixed. This can simply not be accelerated by hardware. We will add this to our citations and our comparisons in Table 4..\\n\\n-Prunetrain savings is not justified given their loss in accuracy and their low degree of sparsity. 
One can simply train a dense network for less number of epochs and stop at their achieved accuracy and save more computations without the hassle of training and restructuring the network.\\nDynamic sparse is infeasible for any target architecture to gain performance.\\nPlease note that these are the closest sparse training papers focused on CNNs.\\nDynamically reconfiguring sparse masks is not feasible for accelerators as it needs decompression/recompression at each step.\\nMostafa and Wang studied the upper limits of sparsity while training and maintaining accuracy.\\nWe want high levels of accuracy but we want to eliminate decompression/recompression at each step. Unlike prunetrain our accuracy levels are much higher and the network mask is also fixed.\\n \\n \\n \\n(10)\\n* You do not describe the hyperparameters for Intra-epoch pruning (the balance between window and CK - last paragraph of 2.1.1)\\n-Thanks we will add that\\n \\n(11)\\nADVERSARIAL ROBUSTNESS\\n \\n-Thank you for the suggestion, we will look into adding PGD attacks or consider completely removing this aspect.\\n \\n(12) Thank you for your other comments regarding the name, appendix, and figure clarity. We will fix these.\"}",
"{\"title\": \"Author Response Part 1\", \"comment\": \"(1) This paper studies the sparse training for maximum convergence accuracy of Resnet50 with Imagenet and delivers the best result for sparse training with static mask for over 70-80 epochs of training schedule in terms accuracy and acceleration potential.\\nThe experiments performed all the sensitivity studies and report over 34 experiments covering various levels of sparsity, length pruning era, start epoch of pruning era, both Resnet50 v1 and v1.5, and various sparsity granularities just in Table 1 alone.\\nTo give a context some of the main references in Table 2 report less than 10 data points.\\n \\n(2) Thanks for you for comment regarding the problems we highlighted with current literature. We will cite literature to address all the above.\", \"just_to_give_context\": \"1) Regularization contains operations such as vector norm (normalization, divide, square root) that are not typically multiply accumulate nature and hence are expensive for accelerators and GPUs as they are atypical with high latency.\\n \\n2) Sorry for the miscommunication, our meaning with this sentence was that the levels of sparsity in previous coarse grain sparsity methods were not high-enough for a given convergence accuracy. Therefore, the overheads incurred to handle sparsity (indexing/compression/decompression) were not justified. Our main point is that edge devices are one example area where a high level of coarse grained sparsity while maintaining accuracy is desirable, and this paper achieved this goal. This paper also achieves similar goals for sparse training that we will address below.\\n \\n3-4) This is supported by Table 2: given a target convergence accuracy Mostafa and Wang had to train for an extra 10 epochs (100 epochs) to be able to reach within 1% of a 90 epoch baseline. We showed our results at all relevant epochs and the difference to baseline both at 90 and 100 epochs.\\n \\n5) This reconfiguration happens throughout the whole network. Restructuring the sparsity masks is extremely expensive in any system. It includes decompression and recompression which in turn triples memory accesses.\\nReplacing the non-zeros and refreshing the indexes is a memory intensive operation. It has high latencies and high energy consumption. However ours only happens after every epoch. This was represented as after every step in Algorithm 1, and we will correctly update the figure to accurately represent our mask update schedule.\\n \\n \\n(3) We can show the nature of the computation incurred for reparametrization. While the point is taken that we have no empirical study to support this, we make the argument that our overheads are minimal given that they are only incurred during 20% of total training schedule and that we eliminated regularization overheads. Also, we update the sparsity only for a total of 20 times once/epoch during pruning era (again this was misrepresented in Algorithm 1 and will be corrected).\\n \\n(4) The reason we mentioned edge devices was to highlight that the accuracy drop does not justify using low levels of sparsity because the overheads of indexing/compression/decompression are higher than just keeping data dense. [https://en.wikipedia.org/wiki/Sparse_matrix] [https://kkourt.io/phd/phd-en.pdf], [https://arxiv.org/pdf/1901.07827.pdf]. 
\\nFor example, CUSparse library performs only better than CUBLAS when degrees of sparsity are over 90% (https://developer.nvidia.com/cusparse)\\n \\n(5) We show both convergence at epoch 90 and 100 for comparison to baseline and all other methods. The argument here is that those extra 10 epochs with high degrees of sparse training are much cheaper. The epoch 90 results are in Table 6 in the appendix.\\nEven compared to 90 epoch baseline we drop less than 1% in accuracy.\\n \\n(6) Dynamic sparse reparametrization is impossible to implement on any target architecture because it changes sparsity mask every step. This means changing indexes of non-zeros and decompression/recompression at every step. This is basically infeasible on any accelerator. We contacted the authors and they told us they trained the network like a dense network. Our goals are clear: we want fixed sparsity mask after certain epoch and we wanted to eliminate any irregular computation that is expensive on accelerators.\\n \\n(7) This a mistake on our behalf in the algorithm text and it is actually after each epoch in our code. Thank you for pointing this out. We only prune total of 20 times (length of pruning era).\"}",
"{\"title\": \"Author Response\", \"comment\": \"(1) Our argument for only using one architecture is that Resnet networks are the benchmark for MLPerf and the hardest/longest networks to train. All other recent work on CNNs use Resnet, on Imagenet and some smaller datasets, as well. However, to address this concern we will add experiments on VGG.\\n \\n(2) The main goal of this approach is to restrict the pruning era to reduce complexity especially on accelerators. We also aimed to reach a fixed sparsity mask as early as possible. Keeping the above goal in mind, we did perform sensitivity analysis (shown in Table 1) keeping the final convergence accuracy as high as possible. We demonstrated that shrinking the pruning era damages the accuracy. On top of that we moved the pruning era 10 epochs earlier or later in training and studied wider pruning schedules.\\n \\n(3) This is a good suggestion; however, what prevented us from doing this is that these experiments we demonstrated each take several days of training per data point. To just perform the sensitivity analysis on the width of the pruning era it took us several months. We can definitely perform these analysis for smaller networks and datasets.\\n \\n(4) Thanks for the suggestion regarding adversarial attacks. We will investigate PGD. However, we believe that finding that structured sparse training is robust for FGSM is still valuable as other work show.\\n \\n(5) Thanks we will fix the name of intra-epoch pruning to combined method to improve clarity.\\n \\n(6) We used RTX2080 instances. We will add that to the paper.\\n \\n(7) Very valid point regarding the productivity of the Tiny-Imagenet results. We will add this.\\n \\n(8) Regarding Table2, compression focused methods take around 180 epochs of training if aiming for levels of accuracy that reported. If not, they have much worse accuracy numbers without providing structured sparsity and without the potential of computation savings during training. So, we chose to only compare with sparse training methods.\\n \\n(9) Thanks, we will clarify that Mao et al. (2017) worked with Resnet/Imagenet.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": [\"The paper investigates methods to train neural networks so the final network has sparse weights, both in convolutional layers and in fully connected layers. In particular, the paper focuses on modifying the training so that the network is first trained without sparsification for a certain number of epochs, then trained to be increasingly sparse, and then fine-tuned with a fixed sparsity pattern at the end.\", \"While I find the overall approach of the paper interesting, currently the experiments are not systematic enough to derive clear insights from the paper. Hence I unfortunately recommend rejecting the paper at this point. I hope the authors find time to conduct more systematic experiments for a future version of the paper.\", \"Concretely, the following would be interesting experiments / questions:\", \"How effective is the proposed training method on architectures other than ResNets?\", \"What happens if the \\\"pruning era\\\" is made longer, started substantially earlier, or started substantially later? Currently it is not clear if the epoch 30 - 50 pruning era is (approximately) optimal and how much performance varies with begin and end of the pruning era.\", \"Due to the small variation between some of the methods, it would be good to investigate how robust the ordering is when the experiment is re-ran with different random seeds etc.\", \"In addition, I have the following suggestions:\", \"The authors may want to remove or enhance the adversarial robustness evaluation. Currently the authors only evaluate robustness against FGSM, but it is well known that iterative attacks such as PGD are more effective.\", \"Instead of \\\"intra-epoch pruning\\\" or \\\"intra\\\", the name \\\"combined\\\" may be more clear for the combined method.\", \"In the description of the experimental setup, it could be good to specify what GPUs were used (since this lead to the smaller batch size).\", \"It could be helpful for the reader to discuss how predictive results on Tiny-ImageNet are for results on ImageNet.\", \"In Table 2, it would be good to add context by comparing to prior work with sparsity level 60% and some of the compression-focused methods from Table 4.\", \"In the comparison to Mao et al. (2017), it could be good to clarify that they also work with ResNet models on ImageNet.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"SUMMARY\\n-------\\n\\nThis paper explores a series of incremental variations of existing pruning techniques for compressing Resnet-50 for ImageNet. Specifically, it proposes concentrating all pruning during an early \\\"era\\\" of training (the first 20-50 epochs out of 100 total). It also explores hybrids between sparse pruning and structured pruning. Finally, it considers the adversarial robustness of the resulting networks to the FGSM attack. \\n\\nThis paper makes no novel proposals and experiments are minimal. There are no clear takeaways from the results of these experiments. The goals of the paper are unclear, and it is difficult to compare this paper to existing work. \\n\\nThis paper has no clear motivation and makes no tangible contributions to the literature and, therefore, I recommend a rejection. \\n\\nCONTRIBUTIONS\\n-------------\\n\\n1) A study of the appropriate window (\\\"pruning era\\\") for pruning Resnet-50 on ImageNet and TinyImageNet\\n2) A study of the tradeoffs between various forms of structured and unstructured pruning.\\n3) An analysis of the adversarial robustness of the pruned networks.\\n\\n\\nDETAILED COMMENTS\\n------\\n\\nPROBLEMS ADDRESSED\\n\\nIt was challenging to discern the specific problems that this paper sought to address and, relatedly, the goals that the paper sought to achieve. The introduction of the paper lists a wide variety of problems in the existing literature:\\n\\n1) Paragraph 3: Structured sparsity introduces \\\"regularization and computational overhead.\\\"\\n2) Paragraph 3: \\\"Coarse-grained sparsity\\\" cannot eliminate enough parameters to perform well on \\\"edge devices.\\\"\\n3) Paragraph 4: Dynamic sparsity techniques require more training epochs.\\n4) Paragraph 4: Dynamic sparsity techniques do not preserve network accuracy (1-2 percentage point drop at 80% sparsity).\\n5) Paragraph 4: Dynamic sparsity requires reconfiguring the sparsity pattern frequently, which is computationally expensive.\\n\\nThe paper does not justify the fact that any of these are actually problems, nor does it make any attempt to quantify the extent of these problems. Moreover, the proposed techniques do not resolve any of these problems. Corresponding to the numbers above:\\n\\n1) The paper never measures this overhead nor justifies that it is a problem in practice. Meanwhile, the techniques proposed in the paper introduce substantial overhad of their own, including training for an extra ten epochs. It is possible that the techniques proposed in this paper have worse overhead than the techniques that are criticized in the introduction. Since the paper provides on numbers either way, it is impossible to tell. In short, computational cost is a key part of the author's argument despite the fact that there is no empirical support for any of these claims.\\n\\n2) I believe the paper means that, in order to get to sufficient levels of sparsity to work on \\\"edge devices,\\\" accuracy drops unacceptably far. What does the paper mean by \\\"edge devices,\\\" what are sufficient levels of sparsity, and what does it mean for accuracy to drop unacceptably far? 
The paper has numbers for the proposed methods, so it should be possible to make this comparison if such baselines are explicit.\\n\\n3) The proposed techniques also require the same number of additional training epochs, so this complaint is unaddressed.\\n\\n4) The proposed techniques show a 2-3 percentage point drop at 80% sparsity (Table 1), which is actually worse than the technique that the authors criticize.\\n\\n5) The proposed techniques require pruning after every single training step during the \\\"pruning era.\\\" This is likely to be more computationally expensive than any of the other gradual pruning and dynamic sparsity techniques listed, which prune at intervals of hundreds or thousands of iterations. In addition, the authors never justify why changing the sparsity pattern frequently throughout training will affect performance. On GPUs with modern frameworks, I see no reason why this should matter so long as the sparsity pattern does not change too frequently (although that is exactly what this paper proposes to do during the \\\"pruning era\\\").\\n\\n\\nGOALS\\n\\nIt was also challenging to discern the goals of the paper. Was it:\\n\\n1) To produce the smallest possible trained networks with the highest possible accuracy?\\n\\n2) To reduce the cost of obtaining a pruned network for inference-time? (Or to reduce the cost of obtaining a sufficiently efficient pruned network for inference-time?)\\n\\n3) To reduce the cost of training neural networks in general by pruning them during training?\\n\\nIn the introduction and the related work section, these goals go unstated, making it difficult to determine how this paper compares to existing work. The comparisons provided in the paper focus on specific aspects of each related work rather than the entire picture. For example, in comparison to Mao et al., the authors claim better accuracy at one sparsity level, implying goal 1. However, for to Lym et al., the paper focuses on the computational costs of training the network, implying goal 3.\\n\\n\\nUNJUSTIFIED CLAIMS ABOUT NEURAL NETWORK COMPUTATION\\n\\nThroughout the paper, there are a number of unjustified claims about which neural network configurations will perform better on contemporary hardware. Considering computational efficiency appears to be a key element of the paper's argument, these claims require citations or - particularly when various configurations are compared to one another - empirical support. Some examples:\\n\\n* Section 1, Paragraph 3: \\\"The regularization term [of structured sparsity] modifies the original training and can be expensive in hardware.\\\"\\n* Section 1, Paragraph 3: \\\"The final network [from Lym et al. 
2019] contains an insufficient degree of sparsity for deployment on edge devices.\\\"\\n* Section 1, Paragraph 4: \\\"Continuous reconfiguration of the sparsity pattern is expensive as it does not allow for compression of weights during training\\\"\\n* Section 1, Paragraph 5: \\\"having a fixed sparse multiply-accumulate pattern allows weight compression during training and can save compute and energy in hardware\\\"\\n* Section 5, Paragraph 2: \\\"A strict parameter allows the hardware mapping to plan for a fixed number of multiply-accumulate operations.\\\"\\n* Section 5, Paragraph 2: \\\"Regularization, although useful in forcing the network to learn prunable weights, adds more irregularity to computation flow.\\\"\\n\\n\\nPRUNING TECHNIQUES\\n\\n* Recomputing the pruning mask at every training step seems gratuitously inefficient.\\n* Sorting the weights in the entire network shouldn't be particularly inefficient if it isn't done on every single iteration. (Section 2.1 paragraph 1)\\n* Why do you maintain the same number of weights in each convolutional filter with window pruning? (Presumably for performance reasons, but you never say that.)\\n* None of the pruning methods are novel. They're simply various permutations of structured and unstructured magnitude pruning as proposed by many others in the literature.\\n\\n\\nEXPERIMENTS\\n\\n* Section 3.1 Paragraph 2: It appears that you are exploring the best \\\"pruning era.\\\" If you are to do so, you will have to sweep over (1) the length of the pruning era (2) the starting epoch of the pruning era, and (3) the shape of the function used to determine sparsity. Instead, it sounds like you tried two arbitrary pruning eras (0-30 and 30-50). Likewise, in Paragraph 3, you test only a small number of possible scenarios.\\n* Section 3.1 is generally hard to parse. It is unclear what you are studying. The ideal pruning era? The relative performance of the pruning methods introduced in section 2? \\n* How many times did you replicate each experiment? You should ideally include at least 3 (and preferrably 5) replicates with mean and stddev reported.\\n* What baselines are you including? You should include a random pruning baseline and you should ideally replicate any methods that you compare to.\\n\\n\\nRESULTS\\n\\n* Section 4.1: The data you refer to is in an appendix even though it is crucial to the main body of the paper. The appendices should contain material that is nonessential for making sense of the paper.\\n* Section 4.2 Paragraph 1: Are these numbers good? A standard sparse pruning technique (Gale et al. 2019, https://arxiv.org/pdf/1902.09574.pdf) achieve 70% sparsity without any change in accuracy. Please include baselines comparing to other methods in the literature.\\n* Table 2: It is difficult to compare the results in these papers. PruneTrain aims to reduce the cost of training and measures cost reductions in FLOPS. If you intend to compare against this paper, you should quantify the cost of training using your method against that of PruneTrain. Merely presenting sparsity and accuracy numbers is insufficient. Likewise for the dynamic sparsity experiments. 
What is your goal in showing this comparison, and did Mostafa and Wang share that goal when they justified their technique?\\n* You do not describe the hyperparameters for Intraepoch pruning (the balance between window and CK - last paragraph of 2.1.1)\\n\\n\\nADVERSARIAL ROBUSTNESS\\n\\nConsidering the fact that this paper focuses on proposing new variations of existing pruning techniques, any discussion of adversarial robustness seems to be (1) out of place and (2) an afterthought. If the authors delete a half-page of content (one phrase from the abstract, a paragraph and bullet from the introduction, and a paragraph each from sections 3 and 4), this content could be removed with minimal impact to the paper's main contributions. The content on adversarial robustness is cursory, uses a weak and out-of-date attack (FGSM), and does not compare to any other pruning methods. In fact, the one comparison is to the results in a paper (Wang et al, 2018) that looks at both FGSM and PGD (a stronger attack) on completely different networks and tasks (MNIST and CIFAR10). The paper would be stronger if content on adversarial robustness was removed entirely.\\n\\n\\nOTHER MINOR COMMENTS\\n\\n* The title includes the word \\\"starfire,\\\" but it never appears again in the paper. The paper proposes no specific technique, so there isn't anything to name.\\n* Use the \\\\ begin{appendix} command before you create the appendices and the \\\\ end{appendix} command when you are done. You can then use \\\\section normally and each section so-created will appear with a letter rather than a number.\\n* Figure 4 is very hard to read.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a strategy to prune a convolutional neural network during training. To speed up training, the proposed method prunes the weights with the smallest magnitude during only a small number of epochs at the beginning of training, later on continuing training with a fixed sparsity pattern. Several granularity levels for convolutional and fully-connected layers are studied. Furthermore, the robustness of the resulting pruned networks to adversarial attacks is investigated.\", \"originality\": [\"As acknowledged at the beginning of section two, the general pruning strategy used here is very similar to that introduced by Narang et al., 2017. While the authors argued that the threshold is computed in a different manner, it also increases gradually during training, as in Narang et al., 2017.\", \"I acknowledge that Narang et al., 2017 focuses on RNNs, while here the focus is on CNNs. However, the originality of the different pruning strategies used here for convolutional and fully-connected layers is very limited. In essence, these strategies directly follow those studied by Mao et al., 2017.\", \"The study of robustness to adversarial attacks, while interesting, is also not novel per se, as the idea of performing such a study was proposed in Wang et al., 2018. I acknowledge that the conclusions drawn here differ from those in Wang et al., 2018. However, there are no explanations for this different behavior.\"], \"methodology\": [\"While the beginning of Section 2 states that the pruning threshold gradually increases during training, the specific way this is achieved is not clearly explained.\", \"The pruning strategies depicted by Fig. 2, whether for convolutional layers or for fully-connected ones, never aim to remove entire output channels. However, the only way to truly end up with a smaller network is to remove entire channels and/or layers, as argued in Wen et al., 2016 and in Alvarez & Salzmann, NIPS 2016, as well as studied in Mao et al., 2017 via the filter-level granularity. It is unclear to me how speed would be affected by having a network with the same number of channels and layers, but many parameters set to zero.\"], \"experiments\": [\"The experiments show the good behavior of the proposed algorithm in terms of sparsity vs accuracy tradeoff. However, while the introduction seems to focus on the benefits of the proposed method in terms of training speed, these benefits are not demonstrated in the experiments, where no timings (neither for training not for inference) are reported.\", \"As mentioned above, it is not clear to me that the speedup will be significant if the sparsity pattern does not remove entire channels, but I am willing to be proven wrong.\"], \"summary\": \"My main concern about this paper is its novelty, as the method essentially uses the method of Narang et al., 2017, albeit with a different threshold, with the sparsity patterns of Mao et al., 2017. The experiments demonstrate that the method is effective at pruning, but do not provide any timings to evaluate the resulting speedups.\"}"
]
} |
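The reviews and rebuttal above repeatedly discuss gradual magnitude pruning confined to a "pruning era" (e.g., epochs 30-50) after which the sparsity mask is frozen. As a minimal illustrative sketch only — the linear ramp, the default epoch boundaries, and the function names below are assumptions for exposition, not the paper's actual implementation:

```python
import torch

def target_sparsity(epoch, start=30, end=50, final_sparsity=0.8):
    """Hypothetical schedule: sparsity ramps linearly inside the pruning era
    and stays fixed afterwards (the paper may use a different shape)."""
    if epoch < start:
        return 0.0
    if epoch >= end:
        return final_sparsity
    return final_sparsity * (epoch - start) / (end - start)

def magnitude_mask(weight, sparsity):
    """Binary mask that zeroes out the smallest-magnitude fraction of `weight`."""
    if sparsity <= 0.0:
        return torch.ones_like(weight)
    k = max(1, int(sparsity * weight.numel()))
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

# Usage inside a (hypothetical) training loop:
# mask = magnitude_mask(layer.weight.data, target_sparsity(epoch))
# layer.weight.data.mul_(mask)
```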
Hkee1JBKwB | Convolutional Tensor-Train LSTM for Long-Term Video Prediction | [
"Jiahao Su",
"Wonmin Byeon",
"Furong Huang",
"Jan Kautz",
"Animashree Anandkumar"
] | Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations. A potential solution is to extend to higher-order spatio-temporal recurrent models. However, such a model requires a large number of parameters and operations, making it intractable to learn in practice and prone to overfitting. In this work, we propose convolutional tensor-train LSTM (Conv-TT-LSTM), which learns higher-order Convolutional LSTM (ConvLSTM) efficiently using convolutional tensor-train decomposition (CTTD). Our proposed model naturally incorporates higher-order spatio-temporal information at a small cost of memory and computation by using efficient low-rank tensor representations. We evaluate our model on Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM and better/comparable results to other ConvLSTM-based approaches, but with far fewer parameters. | [
"Tensor decomposition",
"Video prediction"
] | Reject | https://openreview.net/pdf?id=Hkee1JBKwB | https://openreview.net/forum?id=Hkee1JBKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ONZA6dxbvd",
"S1l4Ac4hor",
"BJeqsFEhsS",
"Hkgf1YEhjB",
"SJxYGCFcjr",
"HyegM4fo9H",
"rkgc4-Cy5S",
"BkxAlRIRYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723705,
1573829323916,
1573829026286,
1573828825586,
1573719568691,
1572705287567,
1571967281541,
1571872246304
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1454/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1454/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1454/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes Conv-TT-LSTM for long-term video prediction. The proposed method saves memory and computation by low-rank tensor representations via tensor decomposition and is evaluated in Moving MNIST and KTH datasets.\\n\\nAll reviews argue that the novelty of the paper does not meet the standard of ICLR. In the rebuttal, the authors polish the experiment design, which fails to change any reviewer\\u2019s decision.\\n\\nOverall, the paper is not good enough for ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Official Blind Review #4\", \"comment\": \"Thank you very much for your thoughtful comments.\\n\\n(1) Novelty of our work.\\n(a) Yu et al. 2017 shows that higher-order models (compressed by standard tensor-train decomposition) perform better than first-order models in synthetic regression problems. However, their approach can not be easily extended to video prediction, since standard tensor-train cannot cope with the essential convolutional operations. We perform an ablation study regarding the necessity of convolutions in Tensor-Train (requested by Reviewer 2). The training is not finished yet, but the validation curve ( https://postimg.cc/vx6rGnbH ) shows that our higher-order model with Convolutional Tensor-Train is an important part of our proposed method.\\n\\n(2) Comparison against PredRNN++ (especially in terms of MSE).\\nWe added updated results in the revised version (Table 2 and 4). We found that our method produces sharper predictions, but MSE scores are lower (see Fig. 3, 4, 6-11). Therefore, we decided to add another metric, LPIPS, which better represents human perception. We discuss this issue in the experiment section. In the end, our methods outperform PredRNN++ on both SSIM and LPIPS metrics. \\n\\n(3) Necessity of model compression for higher-order models. \\nFor higher-order spatio-temporal models (such as higher-order Conv-LSTM considered in our paper), the parameters grow exponentially with the order, therefore the higher-order models cannot be built without model compression. In this paper, we show that even heavily compressed higher-order models can have better performance than (uncompressed) first-order models.\\n\\n[a] Zhang, Richard, et al. \\\"The unreasonable effectiveness of deep features as a perceptual metric.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\"}",
"{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for your efforts in reviewing our submission and your valuable suggestions of ablation studies.\\n\\n(A) Comparison to [Yang et al 2017] and necessity of Conv-LSTM (ablation studies (1) and (2)).\\nThe problem of video prediction is considerably more difficult than the one of video classification tackled in Yang et al, 2017: while only a single label is returned in video classification, video prediction requires producing all pixels for future frames. The work Yang et al, 2017 is based on fully-connected LSTM, which is generally not sufficient for video prediction. The original ConvLSTM paper [a] explains the necessity of convolutions for the video prediction problem. We believe this already answers your concern about using ConvLSTM instead of LSTM for video prediction.\\n\\n(B) Necessity of higher-order models (ablation study (4)) \\nWe found that higher-order models generally perform better than first-order models, which justifies our preference for higher-order models. For example, if we reduce the reported higher-order model to first-order fixing other hyper-parameters unchanged, the PSNR will decrease by 1.1, and SSIM by 0.015.\\n\\n(C) Necessity of CTT (ablation studies (3) and (5)).\", \"we_perform_two_ablation_studies\": \"the convolution filter size is fixed to 1 for CTT (which effectively reduce to TT) in Conv-TT-LSTM with higher-order and a single order. They corresponds to the ablation studies (3) and (5) as suggested. Unfortunately, both models are still under training but the validation curve ( https://postimg.cc/vx6rGnbH ) show that Conv-TT-LSTM without CTT is much worse than our proposed model. We expect them not much better than the ConvLSTM baseline. It suggests that convolutions in tensor-train is a very important component for capturing information in video prediction.\\n\\n(D) Question of backpropagation in CTT.\\nIn Equation (4), we derive an efficient sequential algorithm for using CTT in higher-order models. Therefore, in our current implementation, we use the built-in auto-differentiation for backpropagation, which effectively reverses the order of the forward iterations. \\n\\n(E) Question of error propagation issue. \\nCompared to first-order models, higher-order models explicitly capture higher-order correlation, and therefore reduce the predictive error at each single step. As a result, the accumulation of the errors are slower over time, which benefits long-term prediction. \\n\\n[a] Xingjian, S. H. I., et al. \\\"Convolutional LSTM network: A machine learning approach for precipitation nowcasting.\\\" Advances in neural information processing systems. 2015.\"}",
"{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"We thank the reviewer for the valuable comments.\\n\\n(1) Only comparing to Conv-LSTM, and the video quality not being improved. \\nThe improved results are provided in the revised version for both Moving-MNIST (in Table 2 and Figures 2, 3) and KTH (Table 4 and Figure 4). Our new results outperforms the state-of-the-art model, PredRNN++, on both SSIM and LPIPS metrics. In Figures 3, 5 and6-11, the visual examples show that our models produce much sharper predictions compared to PredRNN++. \\n\\n(2) Missing baselines from Villegas et al., 2017 and Denton et al., 2017.\\nPSNR and SSIM scores of Villegas et al., 2017 has been included in Table 4, and are no better than our Conv-TT-LSTM models.\\n\\n(3) No video is provided.\\nWe included per-frame visualization (Fig. 3 and 5). We believe it is easier to judge the perceptual quality by looking at predicted results for each individual frame. We also added more samples in Fig. 6-11.\"}",
"{\"title\": \"Updated paper with new results\", \"comment\": \"In this revised revision, we update the following parts of the paper:\\n\\n1. We add a perceptual metric, LPIPS [a] for all comparisons (Table 2-4). This metric is known to be close to human perception, compared to traditional metrics such as MSE and SSIM. We added a paragraph about a discussion of these metrics at the beginning of the experimental section.\\n\\n2. We reproduce the state-of-the-art ConvLSTM-based method, PredRNN++ [c], using their source code [b] on both Moving-MNIST and KTH datasets. By performing an additional hyper-parameter search, we obtained better performance than the numbers reported in the original paper. Compared to these results, our Conv-TT-LSTM outperforms PredRNN++ in both SSIM and LPIPS (reported in Table 2, 4, per-frame comparison in Fig 2). The visual samples show that our proposed methods are much sharper than PredRNN++ for long-term prediction on both datasets (Fig. 3, 4, 6-11). \\n\\n3. Additional baseline, [Villegas et al., 2017] has been included in Fig. 4.\\n\\nI believe this update answers the concerns about the quality of our method (reviewer 1 and 4). We will answer the rest of comments soon. \\n\\n[a] Zhang, Richard, et al. \\\"The unreasonable effectiveness of deep features as a perceptual metric.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[b] https://github.com/Yunbo426/predrnn-pp\\n[c] Wang, Yunbo, et al. \\\"Predrnn++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning.\\\" arXiv preprint arXiv:1804.06300 (2018).\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper build a higher-order spatio-temporal model by means of combining Convolutional Tensor-Train Decomposition(CTTD) and ConvLSTM, and utilize the combination method to solve long-term video prediction problems. The CTTD factorizes a large convolutional kernel into a chain of smaller tensors, so as to relieve the difficult convergence and overfitting problems caused by too much model params.\\nExperiments on Moving-MNIST and KTH datasets show that the proposed method achieved better results than standard ConvLSTM, and in some way comparable with SOTA model. Ablation Studies are also provided.\\n\\nAlthough it seems novel combing CTTD with ConvLSTM, the idea of CTTD and the combination mainly comes from [Yu et al.,2017] and [Yang et al.,2017]\\uff0cthis paper use the method in a new problem of video prediction, I think the theoretical innovation is not enough for ICLR.\\nAlthough the experimental results were better than ConvLSTM(2015), but not as good as PredRNN++(2018), especially in terms of the MSE metrics. Since the prediction accuracy has not yet achieved, I don't think the reduction of model params is a matter of primary importance. What\\u2019s more, Moving-MNIST and KTH are relatively simple datasets, video prediction on a more complicated datasets such as UCF101 will be more convincing.\", \"conclusion\": \"This paper is in some way novel, but not enough for ICLR, and the experiment results seems not enough convincing, so I will give a weak reject.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a convolutional tensor-train (CTT) format based high-order and convolutional LSTM approach for long-term video prediction. This paper is well-motivated. Video data usually have high dimensional input, and the proposed method aims to explicitly take into account more than one hidden representation of previous frames - both lead to a huge number of parameters. Therefore, some sort of parameter reduction is needed. This paper considers two different types of operations - convolution and tensor-train (TT) decomposition - in an interleaved way. The basic model considered in this paper is a high-order variant of convolutional LSTM (convLSTM).\\n\\u00a0\\nThere exist several works using tensor decomposition methods including TT to compress a fully connected layer or a convolutional layer in neural nets, to break the memory bottleneck and accelerate computation. This paper takes a different direction - it further embeds convolution into the TT decomposition and thus defines a new type of tensor decomposition, termed convolutional tensor train (CTT) decomposition. CTT is used to represent the huge weight matrices arisen in the high-order convLSTM. To my best knowledge, this combination of convolution and TT decomposition is new.\\n\\u00a0\\nThe paper is well-written as the literature review is well done. Experimental results demonstrate improved performance over the convolutional LSTM baseline, a fewer number of parameters, and the qualitative results show sharp and clean digits. This improvement could be attributed to multiple causes: the high-order, the tensor decomposition-based compression, or the CTT. The authors also provide an ablation study, but it mainly concerns comparisons with ConvLSTM.\\u00a0 \\u00a0\\n\\u00a0\\nDespite the promising results, this paper is not ready for ICLR yet. Below is a list of suggested points needed to address:\\n(1) Yang et al 2017 claim that TT-RNN without convolution can also capture spatial and temporal dependence patterns in video modeling. This is an important baseline but missing in the current version of the paper.\\u00a0\\n(2) The justification of high-order modeling in long-term prediction. The first-order model also implicitly aggregates multiple past steps. It would be good to add more experimental evidence to support the necessity of the high-order.\\n(3) There exists some unjustified complexity for the CTT approach. How does it compare to TT for high-order ConvLSTM?\\n\\u00a0\\nPerhaps, a more complete ablation study should include:\\n(1) LSTM with TT but without high-order and convolution\\n(2) LSTM with high-order and TT but without convolution\\n(3) ConvLSTM with TT\\n(4) ConvLSTM with CTT\\n(5) ConvLSTM with high-order and TT\\n(6) ConvLSTM with high-order and CTT\", \"question\": \"\\u2022 How is the backpropagation done for the CTT core tensors?\\u00a0\\n\\u2022 What is the error propagation issue of first-order methods and how does the high-order one not prone to it?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThis paper proposes a method that saves memory and computation in the task of video prediction by low-rank tensor representations via tensor decomposition. The method is able to outperform standard convolutional lstm and other methods by using less parameters when testing it in the Moving MNIST and KTH datasets. The authors also present a proof to validate their method.\", \"pros\": [\"Interesting method for decomposing tensors operations in convolutional architectures\", \"Outperforms immediate baseline (Convolutional LSTM)\", \"Weaknesses / comments:\", \"Weak experimental section\", \"The authors mainly compare against Convolutional LSTM. The performance increase is there but the difference in parameters is not that significant in comparison to the performance. Needing fewer parameters is one of the claims in this paper and I am not fully convinced of the trade-off between the complexity of the model and the gain in parameter reduction / performance. In addition, the show videos do not look that much improved. The paper is also missing baselines from Villegas et al., 2017 and Denton et al., 2017 which both have available models for the KTH dataset.\", \"No videos provided\", \"The paper does not provide any videos which is a must for video prediction papers. Judging the video quality from images in the paper is not easy, and also the used metrics have been shown to not be very objective in terms of video prediction quality or image generation in general.\"], \"conclusion\": \"The proposed decomposition method is interesting, but the experimental section fails to convince me as to whether the methods performance validates the complicated formulations. My current score is between weak reject and reject so I will give a weak reject.\"}"
]
} |
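For readers of the thread above, the following toy sketch illustrates the general idea behind a convolutional tensor-train style chain: replacing one large higher-order convolution with a sequence of small "core" convolutions evaluated sequentially, loosely in the spirit of the sequential algorithm the authors mention (their Eq. 4). The class name, ranks, and the exact contraction below are illustrative assumptions, not the paper's actual model:

```python
import torch
import torch.nn as nn

class ConvChainSketch(nn.Module):
    """Toy stand-in for a convolutional tensor-train chain: each small core
    convolution folds in one more past hidden state, so parameters grow
    linearly (rather than exponentially) with the model order."""
    def __init__(self, order=3, channels=32, rank=8, kernel=3):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(channels, rank, 1) for _ in range(order)])
        self.cores = nn.ModuleList(
            [nn.Conv2d(rank, rank, kernel, padding=kernel // 2) for _ in range(order)]
        )
        self.out = nn.Conv2d(rank, channels, 1)

    def forward(self, past_states):
        # past_states: list of `order` tensors of shape (B, C, H, W).
        h = torch.zeros_like(self.proj[0](past_states[0]))
        for proj, core, state in zip(self.proj, self.cores, past_states):
            h = core(h + proj(state))  # fold in one more past state per core
        return self.out(h)

states = [torch.randn(2, 32, 16, 16) for _ in range(3)]
print(ConvChainSketch()(states).shape)  # torch.Size([2, 32, 16, 16])
```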
H1gyy1BtDS | An Information Theoretic Approach to Distributed Representation Learning | [
"Abdellatif Zaidi",
"Inaki Estella Aguerri"
] | The problem of distributed representation learning is one in which multiple sources of information X1,...,XK are processed separately so as to extract useful information about some statistically correlated ground truth Y. We investigate this problem from information-theoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between accuracy and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows computing this bound, in which the mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper. | [
"Information Bottleneck",
"Distributed Learning"
] | Reject | https://openreview.net/pdf?id=H1gyy1BtDS | https://openreview.net/forum?id=H1gyy1BtDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6VUB512HR",
"rygizoc0qr",
"S1lFhyDE5B",
"HJlpd_F6tr",
"rklfW0_6Fr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723675,
1572936466552,
1572265905123,
1571817589360,
1571814906196
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1453/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1453/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1453/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1453/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors study generalization in distributed representation learning by describing limits in accuracy and complexity which stem from information theory.\\n\\nThe paper has been controversial, but ultimately the reviewers who provided higher scores presented weaker and fewer arguments. By recruiting an additional reviewer it became clearer that, overall the paper needs a little more work to reach ICLR standards. The main suggestions for improvements have to do with improving clarity in a way that makes the motivation convincing and the practicality more obvious. Boosting the experimental results is a complemental way of increasing convincingness, as argued by reviewers.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies a distributed representation problem where multiple features X_1,...,X_K are processed (or encoded) separately to estimate (or decode) some quantity of interest Y.\\nThe log loss is considered throughout, which amounts to measuring the mutual information between Y and \\\\hat Y, defined as the \\\"accuracy\\\" of the estimation method. The average rate (measured in number of bits per sample) of the encoded feature is defined as the \\\"complexity\\\" of the representation method.\\nThe author derived the fundamental trade-off between the accuracy and the complexity for any representation-estimation (or encoding-decoding) method.\\nThe author also derived a variational representation of the optimal accuracy-complexity region, which also expresses the optimal encoder and decoder map as the solution of the optimization problem.\\nFinally, the author considered the case where the joint distribution of P_{X_1,,,,X_K,Y} is unknown, and encoder and decoder are parameterized by neural networks, parameters of which are tuned using data.\\n\\nI incline to reject the paper, for the following reasons.\\n1. The accuracy-complexity trade-off studied in the paper is more of a rate-distortion type of information-theoretic problem, where the joint distribution P_{X_1,,,,X_K,Y} is assumed to be known. Its connection to the learning problem, where the joint distribution P_{X_1,,,,X_K,Y} is unknown, is unclear. Even if the precise accuracy-complexity region is obtained, it says little about the sample complexity needed by a learning algorithm to achieve this region.\\n2. Deriving the optimal encoder-decoder mapping from the variational representation of the accuracy-complexity region also requires the joint distribution, which violates the basic assumption of the learning problem.\\n3. The author did consider the case where the joint distribution is unknown, and the encoder-decoder pair is learned from data. However, this learning problem is somewhat artificial: each encoder only encodes one of the features, but in order to encode optimally, it has to know the entire joint distribution, hence need to access all the features during training. This discrepancy of seeing different components of the data set during training and inference is not well-motivated.\\nThe author mentioned \\\"multi-view learning\\\" at the beginning of the paper. 
It would be good if the author can elaborate more on this problem in Sec 4 of Experiment Results, and discuss with more detail on how the proposed method solves this problem and how it is different from the existing results, both in terms of the algorithm and the performance.\\n\\n\\n\\n=================================================\\nFeedback to authors' reply\\n\\nI got a better understanding on how the proposed learning algorithms works after reading the authors' reply.\\nI guess the idea for the case where the joint distribution is unknown is that, for encoding, different nodes uses its own training data (without accessing other nodes' data) to optimize the encoder separately; while for decoding, the master node trains the decoder uses data available to all nodes to estimate the joint distribution.\\nIn this way, the encoders and the decoder jointly optimizes a variational lower bound of the optimal rate region.\\n\\nIf this is the case, I think the proposed method may have some value in practice. \\nBut now the question is how good the variational lower bound is compared to the optimal region, and how well can this variational lower bound be approximated by neural networks and how efficient can the training be done. Without theoretical analysis on these questions, one may only use experiments to assess the performance. From Table 2, it looks like the improvement of the proposed method on the existing method is quite marginal.\\n\\nIn summary, I would like to thank the authors' valuable reply. I encourage the authors to study the gap between the variational lower bound and the optimal region, and maybe do more experiments to find a good use case of the proposed method.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"I am not an expert in this area and the paper involves a lot of derivations and proofs, but I did not check the correctness of those derivations. In summary, this paper proposed a framework for integrating multiple data sources for representing data. In the framework, each data source was mapped to a latent data variable by using a nonlinear function which is called an encoder; then the mapped latent variables were jointly mapped to the target data by using another nonlinear function which is called a decoder. To make this idea to work, the paper used mutual information as the objective function to control the accuracy if the model, and at the same time to avoid overfitting the paper proposed to use MDL as a measure to control the complexity of the model. If I was right, this was the whole picture of the proposed model. My questions are the following:\\n1) I am not very clear how the model complexity was automatically incorporated with the objective function. It seems to me that the objective function was finally the equation (29) and then the neural networks for encoder and decoder were optimized. If this was the case, how the model complexity was incorporated, that is, how the R_k was used in the model? Was the values R_k constant in the model - I mean they are fixed constant values? How these values,i.e.,R_k, were chosen?\\n2) I am a mathematician, but to be honest, I feel that the Maths in the paper is huge and heavy and I thought it could not be that complex for the model. The consequence is that it make the paper to be hard to read. This is a personal feeling, you could just ignore this point.\\n3) Experiments: there are a lot of papers describing to integrate data sources for at least the MNIST example. It would be interesting to compare the proposed method to the literature. The experiment in 4.1 obviously is a toy data problem - I mean although the data is real, but the data generated was using noisy and rotations. It would be more interesting to apply the method to a real-world problem.\\n4) I think it would be more friendly to explicitly define the concepts of Discrete Memoryless and Memoryless Vector Gaussian Models. \\n5) The Markov chain represented in equation (3) is not well defined. I do not understand these notations. \\n6) Before the equation (4), is it equivalent X_k^n and X_{k,n}? I am confused by these notations\\n7) In equation (6), it is more readable to explicitly define the Shannon Mutual Information.\\n8) The second paragraph on Page 5: you use Gaussian pmfs here, but pmf denotes discrete variable. But Gaussian I assume is continuous.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors studied the distributed representation learning problem, where multiple sources of data are processed to provide information about Y. They studied this problem from information-theoretic point of view. Their main contribution can be summarized as follows.\\n 1. The optimal trade-off between the accuracy and complexity were studied for discrete memoryless data model as well as memoryless vector Gaussian model.\\n 2. A variational bound were constructed in order to connect the optimal encoder and decoder mappings with the solution of an optimization algorithm.\\n 3. If only samples from an unknown distribution are available, an algorithm were proposed to find the optimal encode and decoder. Moreover, some experiment were conducted to support the approach.\\n\\nIn general, I think the paper is well-organized. The definition of the problem and the motivation of the approach are clear. The theorems, algorithms and experiments are solid enough to support the whole story of this paper. Generally I wish to see this paper being accepted.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper extended the Gaussian Information Bottleneck method to the case of multi-view learning and provided a variation bound for the accuracy optimization with constrain on the sum of complexity. It also proposed an algorithm to learn the distributed representation without any prior knowledge of the data distribution.\\n\\nThe multi-view learning problem has been quite well studied in the literature. The paper reformulated the multi-view learning problem as a Bayesian inference problem and provided solid analysis for it.\\n\\nThe writing of the paper was pretty hard to follow for me, with a lot of notations that are not defined clearly. \\n* For example, I can roughly guess that U in theorem 1 represent the learned descriptors, but what\\u2019s the variable T in theorem 1?\\n* What is \\\\Omega in Theorem 2?\\n\\nThe experimental result doesn\\u2019t look very comprehensive at all as it was mostly compared with variations of the proposed algorithm and it doesn\\u2019t include any other multi-view learning algorithms.\\n\\nThe algorithms in the experimental result are not very clearly defined. I don\\u2019t see much explanation of what is exactly D-VIB and C-VIB. There\\u2019s some formulation of the algorithm in Section 3.4, but it only gives a loss function briefly. I\\u2019m not sure if many practitioners will be able to implement this algorithm from the description here.\"}"
]
} |
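Reviewer #2 above asks how the complexity values R_k enter the training objective. A hedged sketch of a distributed VIB-style surrogate loss — a cross-entropy accuracy term plus one weighted Gaussian KL complexity term per encoder/view — is given below; the function name, the per-view weights `betas`, and the Gaussian posterior parameterization are assumptions for illustration, not the paper's exact Equation (29):

```python
import torch
import torch.nn.functional as F

def distributed_vib_loss(logits, y, mus, logvars, betas):
    """Accuracy term (cross-entropy) plus per-view complexity terms:
    KL( N(mu_k, diag(exp(logvar_k))) || N(0, I) ), each weighted by beta_k."""
    accuracy_term = F.cross_entropy(logits, y)
    complexity_term = sum(
        beta * (-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(dim=1).mean())
        for mu, lv, beta in zip(mus, logvars, betas)
    )
    return accuracy_term + complexity_term

# Usage with two views (hypothetical shapes):
logits = torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
mus = [torch.randn(4, 16), torch.randn(4, 16)]
logvars = [torch.zeros(4, 16), torch.zeros(4, 16)]
print(distributed_vib_loss(logits, y, mus, logvars, betas=[1e-3, 1e-3]))
```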
HJlA0C4tPS | A Probabilistic Formulation of Unsupervised Text Style Transfer | [
"Junxian He",
"Xinyi Wang",
"Graham Neubig",
"Taylor Berg-Kirkpatrick"
] | We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art. | [
"unsupervised text style transfer",
"deep latent sequence model"
] | Accept (Spotlight) | https://openreview.net/pdf?id=HJlA0C4tPS | https://openreview.net/forum?id=HJlA0C4tPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"riZnDqu_jI",
"BkxAC3mhoH",
"H1xpboQniH",
"HkgM05m3sS",
"rkeR9572sr",
"B1gcj6Fe5r",
"SkxpKw0nYB",
"SJl7ALxqYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723643,
1573825749990,
1573825284906,
1573825225894,
1573825174099,
1572015521778,
1571772293488,
1571583690782
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1451/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1451/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1451/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes an unsupervised text style transfer model which combines a language model prior with an encoder-decoder transducer. They use a deep generative model which hypothesises a latent sequence which generates the observed sequences. It is trained on non-parallel data and they report good results on unsupervised sentiment transfer, formality transfer, word decipherment, author imitation, and machine translation. The authors responded in depth to reviewer comments, and the reviewers took this into consideration. This is a well written paper, with an elegant model and I would like to see it accepted at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Revision Submitted\", \"comment\": \"We have submitted a revised manuscript and made the following modifications to address the reviewers' major concerns:\\n\\n-- Compared sampling decoding and greedy decoding as different approximation methods, in terms of both ELBO and task performance (the last paragraph of Section 5.3) \\n-- Compared different gradient propagation methods, in terms of both ELBO and task performance (Section 5.4)\\n\\n\\nWhile limited by time in the response period, we do still plan to address *all* the reviewer\\u2019s comments in future revisions. We also welcome any further feedbacks to improve this paper !\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for the time and comments!\\n\\n## Q1: The disconnect between the proposed probabilistic model and what\\u2019s actually happening ?\\nThank you for bringing this up! This is an extremely important point and is something we believe can be cleared up -- specifically, we believe we have supported the case in additional experiments (see below) that this is an optimization issue instead of modeling issue. \\n\\nStop-gradient can be viewed as an approximation technique for optimizing the training objective of the proposed probabilistic model. As you mention, many techniques fall into this category: Gumbel softmax approximates the true gradient with a biased estimator, hoping to reduce variance. Other techniques yield unbiased estimators, but at the cost of higher variance. Finding an effective optimization technique for a given model class often involves some degree of empirical exploration of the bias-variance tradeoff. Stop-gradient, when viewed in this light, is certainly a biased estimator of the true gradient, but may substantially reduce variance. We completely agree that this point could be supported more effectively with further experiments and comparisons. Specifically, is stop-gradient actually a better optimizer for our model class than Gumbel softmax or REINFORCE? Or does stop-gradient just lead to better task performance without actually better optimizing the modeling objective we claim to care about? If the latter were true, your concern about a disconnect between model and training procedure would be well-founded.\\n\\nIn order to resolve this, we have run additional experiments on the sentiment transfer task. We compare stop-gradient, Gumbel softmax, and REINFORCE as optimization techniques, reporting the best train and test ELBO under our model class achieved with each approach -- as well as task performance corresponding to best test ELBO. These results are presented in Table 5 and discussed in Section 5.4 of the updated paper draft. \\n\\nThe key finding is that stop-gradient leads to better training and test ELBO in our model class than propagating gradients with either REINFORCE or Gumbel softmax, validating stop-gradient as a better choice if our goal is truly to optimize our models training objective. (In other words, whichever optimization method achieves the better ELBO can be seen as a superior method for optimizing the proposed fully probabilistic model.) Further, across optimization methods, ELBO is correlated with task performance -- i.e. for these three optimization methods, the better the final ELBO achieved, the better the task performance. \\n\\nTogether, we hope these results help support the case that, while somewhat unsatisfying a priori as a gradient estimator, the use of stop-gradient is in fact about optimization in our actual model class. If you think it would help support the case further, we can add similar experiments on the other tasks in future revisions. We also believe that one nice thing about our probabilistic formulation is that it allows us to separate out problems of learning the model and optimization, as we did here. This could help guide future work in better ways to optimize such probabilistic objectives. \\n\\n\\n\\n## Q2: Similarities to unsupervised neural machine translation\\nYes, we agree that the underlying ELBO objective for our model class is similar to non-ELBO training objectives used in related unsupervised MT systems. 
However, we believe this is partially a strength: One goal of this paper is a probabilistic formulation that relates and interprets prior work. That said, there are important distinctions between our ELBO objective and objectives used in related work. For example, the main difference between the proposed model and UNMT is the added language model and the KL loss term. As mentioned in the paper, the language model is more useful when the two domains are less divergent, where the language models behave like a discriminator to avoid copying, and copying is very likely to happen in this case without supervised data. Therefore, the proposed model performs similarly to UNMT on decipherment where the vocabulary from two domains are completely different. For close language translation a portion of the vocabulary is shared between domains, and the language model plays a bigger role yielding improved performance.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the time and comments. Due to time limitations we could only address major points, but we\\u2019ll make sure to reflect all advice in future revisions.\\n\\n## Q1: Comparison of greedy and sampling decoding in terms of ELBO\\nWe agree that a comparison between gradient approximation techniques in this context would be informative. In preliminary experiments not included in the paper draft we tried various approaches -- in retrospect (and based on your comment) we realize it makes complete sense to include this analysis. Thanks for the suggestion! \\n\\nWe have updated the paper with a comparison between greedy and sample-based gradient approximations for ELBO on the sentiment transfer task. Please see the last paragraph in Section 5.3 and Table 4 for details. Here we report training ELBO, test ELBO, and task performance. We find that the greedy approximation leads to better optimization of both training and test ELBO and better task performance. It\\u2019s worth noting that once the model is trained, the sample-based approximation of ELBO is low-varaince. Thus, in Table 4, we are showing the sample-based training ELBO regardless of the gradient approximation technique. The fact that the greedy gradient approximation leads better ELBO optimization even though the greedy estimator is biased indicates that the sampled-based approximation (which is unbiased) has much higher variance during the early stages of learning -- we are trading variance for bias with positive effect. Using more samples might mitigate this issue during training, but would require substantially more computation. We are currently running additional experiments to explore methods for reducing training variance with sample-based approximations, and hope to include these results in future revisions.\\n\\n## Q2: Would it be beneficial to control each term in KL separately?\\nThis is a good point, and we believe that it would be beneficial with more control for each term. However, this also introduces an additional tunable hyperparameter. Due to time limitation we are not able to fully verify this hypothesis on these tasks, but we will try to include experimental analysis regarding this separate control in the next revision.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the useful feedback and clarification questions!\\n\\n## Q1: Isn't the idea that there is effectively only one encoder and one decoder learned that just put together in different ways during training? \\nYes, you are correct! Thanks for pointing out the possible confusion. We will clarify this point in the paper. Due to parameter sharing, during training we learn one shared encoder and one shared decoder.\\n\\n## Q2: Why is the BT + NLL baseline so strong ?\\nApart from the decipherment task, BT + NLL actually underperforms UNMT, which does not have the language model. This is most pronounced in the sentiment and formality transfer tasks where BT + NLL fails with very low perplexity. Therefore, in all cases but decipherment, it seems that adding a language model without the complete KL term (equation 5) does not result in superior performance.\\n\\nThanks for the suggestion to include examples of repetitive outputs! There are several examples from BT + NLL on sentiment transfer in A.3. We have we added additional repetitive examples on the formality transfer task in Appendix A.4 (Table 7).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This work is very well-written and easy to follow. The\\ncontribution is clearly articulated as while there are\\nprobabilistic generative models for transfer in the\\nliterature (Shen et al does include one) they don't perform as\\nwell. Ablation studies further confirm the need for the\\nparticular kind of parameter sharing used in the model in the\\npaper. Great results are shown on 5 text transfer problems.\", \"clarifications_and_improvements\": \"Just for clarity, in the last paragraph on page 4. It says two encoder-decoder\\nmodels are learnt, but isn't the idea that there is effectively only one\\nencoder and one decoder learned that just put together in different ways\\nduring training? I'm also curious why the baseline of BT+NLL was so strong? Is\\nhaving the loss of a language model work that much better than the regular\\nentropy term?\\n\\nI would also like if possible if you could share some of the repetitive examples\\ncreated by BT-NLL which explain its low PPL.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose a probabilistic framework for unsupervised text style transfer. Given two non-parallel corpora X,Y in different domains, the authors introduce unobserved corpora \\\\bar{X}, \\\\bar{Y}. These are used as latent variables that control the generation of the observed data. To train models, the paper proposes to optimize the evidence lower bound of the log marginal likelihood. To facilitate training, multiple techniques are suggested, such as parameter sharing, some gradient approximations and initialization with a reconstruction objective. The approach is evaluated on five style transfer tasks, as well as unsupervised machine translation. Models are evaluated with multiple metrics, and generally obtain reasonably strong performance.\\n\\nI lean towards the acceptance of the paper because the approach is fairly simple and elegant, while obtaining promising results. The connections to back-translation and language models are also potentially interesting. However, while the paper aims to suggest a principled approach to style transfer, using greedy samples biases the reconstruction objective, and as such the method does not really optimize the ELBO.\\n\\nCasting style transfer as data completion is a straight-forward idea that doesn't introduce unnecessary or too simplistic assumptions. Optimizing the ELBO follows naturally, and can lead to more diverse outputs than the BT+NLL approach, which misses the negative entropy term. Reference BLEU scores on all tasks are competitive, and sometimes clearly better, with strong baselines.\\n\\nGreedily sampling latent sequences during training should ideally be justified more carefully as it biases the objective function. In particular, an experimental comparison to stochastic sampling, which should more closely approximate the expectation, would be appreciated. Additionally, detailing the similarities and differences between the proposed approach and current UNMT techniques could be helpful to some readers.\", \"questions\": \"Could you present the validation and test evidence lower bounds? If so, how is sampling performed?\\n\\nIn footnote 2, you mention tuning the strength of the KL regularizer. As the KL can be decomposed into 2 terms (Eq. 5), would it be beneficial to control each term separately?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The main contribution of this paper is a principled probabilistic framework of unsupervised sequence to sequence transfer (text to text in particular).\\n\\nHowever, I believe there is a large disconnect between the probabilistic formulation written it section 3 and whats actually happening experimentally in section 5. It is not clear whether the model is *actually* optimizing an ELBO because the gradients from sequence reconstruction loss are not backpropogated to the inference network as explained in paragraph on Approximating Gradients of ELBO. Moreover this restriction makes the authors method almost the same as the one used for unsupervised neural machine translation by Lample et al 2017 and Artetxe et al 2017. I would like to see a more detailed analysis from authors on how far the performance of Gumbel-softmax estimator and REINFORCE estimator is from simple stop-gradient estimator used in experiments.\\n\\nIn terms of experimental setup I like that the authors considered a large suite of experiments across various tasks. Although the evaluation metrics on text style transfer tasks like sentiment transfer, formality transfer, author imitation are in line with previous work ideally the human evaluation needs to be done to truly see how well each method performs. On unsupervised machine translation, authors show a large improvement on Serbian-Bostian translation. I am a bit skeptical since as I wrote above the proposed method is very similar to previously proposed unsupervised neural machine translation approaches and it is not clear why we are seeking such a large gain of 5 BLEU points.\\n\\nOverall I think it is a well written paper with a large experimental suite, although I am skeptical of actual connection between probabilistic formulation and whats actually happening in practice.\\n\\n================================================\", \"update\": \"I have raised the score from 3 to 6.\"}"
]
} |
HyxCRCEKwB | ROBUST GENERATIVE ADVERSARIAL NETWORK | [
"Shufei Zhang",
"Zhuang Qian",
"Kaizhu Huang",
"Rui Zhang",
"Jimin Xiao"
] | Generative adversarial networks (GANs) are powerful generative models, but usually suffer from instability, which may lead to poor generations. Most existing works try to alleviate this problem by focusing on stabilizing the training of the discriminator, which unfortunately ignores the robustness of the generator and discriminator. In this work, we consider the robustness of GANs and propose a novel robust method called robust generative adversarial network (RGAN). Particularly, we design a robust optimization framework where the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. The generator tries to map the worst input distribution (rather than a specific input distribution, typically a Gaussian distribution used in most GANs) to the real data distribution, while the discriminator attempts to distinguish the real and fake distribution with the worst perturbation. We provide theory showing that the generalization of the new robust framework can be guaranteed. A series of experiments on CIFAR-10, STL-10 and CelebA datasets indicates that our proposed robust framework improves consistently over four baseline GAN models. We also provide ablation analysis and visualization showing the efficacy of our method on both the generator and discriminator, quantitatively and qualitatively. | [
"Generative Adversarial Network",
"Robustness",
"Deep Learning"
] | Reject | https://openreview.net/pdf?id=HyxCRCEKwB | https://openreview.net/forum?id=HyxCRCEKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"iDsaMKJy5",
"rkxvKVc0YS",
"rkeJlDz6YH",
"Syx7zrDiYH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723614,
1571886207151,
1571788518598,
1571677450863
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1450/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1450/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1450/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work proposes a robust variant of GAN, in which the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. Unfortunately, the reviewers have raised some critical concerns in terms of theoretical analysis and empirical support. The authors did not submit rebuttals in time. We encourage the authors to improve the work based on reviewer's comments.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\nThe present work proposes to combine GANs with adversarial training replacing the original GAN lass with a mixture of the original GAN loss and an adversarial loss that applies an adversarial perturbation to both the input image of the discriminator, and to the input noise of the generator. The resulting algorithm is called robust GAN (RGAN). Existing results of [Goodfellow et al 2014] (characterizing optimal generators and discriminators in terms of the density of the true data) are adapted to the new loss functions and generalization bounds akin to [Arora et al 2017] are proved. Extensive experiments show a small but consistent improvement over a baseline method.\\n\\nDecision\\nThe authors do a thorough job at characterizing the proposed method using both theoretical analysis and wide ranging experimental studies. My main criticism of the paper in its present form is the lack of motivation for the proposed method. Why, out of the many possible ways to impose additional regularization should one use adversarial training to regularize GANs? While it is remarkable that the experimental results seem to be improving consistently, the improvement is quite small. Similarly, while theoretical results are provided, a discussion of what they mean for the performance of RGAN is sorely lacking, leaving me unconvinced that adversarial training leads to an improvement over GANs when compared with simpler methods of regularization. Therefore I vote to reject the paper in its present form.\\n\\nSuggestions for improvement on the experiments\\nMy main concern with the experiments is that a similar small improvement over the baseline could be achieved by tuning the hyperparameters in an alternative simpler regularization method. For instance, instead of using an adversarial perturbation, one could simply use a random perturbation applied to both the random noise and the discriminator input at testing time. The former would amount to a variance of the truncation trick [Brock et al 2019], while the latter would amount to using instance noise. These are established methods to improve GAN performance and to make a case for adversarial training of GANs one would need to show improvements compared to these simpler strategies, in my opinion.\\n\\nMy main suggestion for the theoretical part is to make a stronger case of what (if anything) these theoretical results say about the performance of RGAN compared to the usual GAN. In particular, the generalization bound does not seem to depend on lambda, (which interpolates between the original GAN and RGAN). What is to be inferred from these results regarding the performance of RGAN?\\n\\nQuestions to the authors\\n(1) I assume you perform adversarial training in practice by backpropagating in image/noise space? How does this affect performance? 
What would the convergence plots look like if wall-clock time or the number of model evaluations were used on the x-axis?\n\n(2) Did you try investing a similar computational budget to tune hyperparameters for simpler regularization methods as mentioned above and compare the resulting improvement?\n\n(3) Is the value (I presume, the standard deviation) given after each inception score computed over multiple iterations of the same run or over multiple runs with different initializations and random seeds?\"}",
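[Editor's illustration] The "worst-case setting within a small ball" discussed in this review is, in practice, an inner maximization in input/noise space, which is what Question (1) asks about. Below is a hedged sketch of such an inner loop: `loss_fn` is an assumed closure over the current adversary, and the L-infinity clamp is a crude stand-in for the paper's Wasserstein-ball constraint, not its exact scheme.

```python
import torch

def worst_case_perturbation(loss_fn, x, eps=0.05, steps=5, lr=0.01):
    # Gradient-ascent inner loop: find a small perturbation delta that
    # maximizes the adversary's loss at x + delta. Each step backpropagates
    # into input/noise space, which is the per-step cost Question (1) probes.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(x + delta)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad
            delta.clamp_(-eps, eps)  # simplified projection onto a small ball
            delta.grad.zero_()
    return delta.detach()
```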
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed another way to improve GANs. The method tackled the robotness issue by requiring the generator and discriminator to compete with each other in a worst-case setting. The experiments on three datasets show some improvement.\\n\\nThe idea is interesting by forcing both G/D to learn the mapping in the worst case. However, the theory analysis to show whether the generalization is better than the original WGAN is not clear to me. The clipping or gradient penalty trick is still needed. In addition, how this framework can work with other techniques (e.g., better architectures, spectral normalization) orthogonally is unclear. \\n\\nThe experimental results are not strong. First, the improvements are somehow marginal (no gain if compared with SN-GANs). Only three small benchmarks are included. It would be good to see how it works on large datasets. In the meanwhile, the ablation study to investigate the effect over WGAN-gp is not obvious. Finally, I could not get any insight from the visualization analysis. It is not reasonable to only list several failed cases in un-conditioning setting and do the comparison. \\n\\nOverall, I think the idea to improve GANs is interesting. I made my recommendation mainly considering the experimental results and the insight analysis.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Developing stable GAN training method has gained much attention these years. This paper propose to tackle this issue via involving distributionally robust optimization into GAN training. Its main contribution is to combine Sinha et al with GAN, proposing a new GAN training method on the basis of vanilla GAN. Relative theory results are proved and detailed experiments are conducted.\", \"some_comments\": \"1 The proposition 0.1 is not quite clear. In fact it is correct only when the distribution discrepancy is Wasserstein. This paper reads \\u201chere we use the wasserstein metric\\u201d in \\u201crobust training over generator\\u201d subsection, the reviewer is not sure if the authors are aware of this point.\\n\\n2. There seems to be a lack of novelty except combining Sinha et al\\u2019s theoretical result with GAN training objective. And there seems not much explanation about the reasons behind this combination. \\n\\n3. The proof in this paper shares similar analysis with that in vanilla GAN paper so theoretically there is also not much novelty. There seems not much insight can one get from the theory results.\\n\\nOverall, the proposed method is evaluated under elaborate and detailed experiments and enjoys promising results, but lacks novelty and theoretical contribution. Therefore, the reviewer tends to reject this paper.\\n\\n### reference\\nAman Sinha, Hongseok Namkoong, and John Duchi. Certifying some distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571, 2017.\"}"
]
} |
BJeTCAEtDB | Feature Map Transform Coding for Energy-Efficient CNN Inference | [
"Brian Chmiel",
"Chaim Baskin",
"Ron Banner",
"Evgenii Zheltonozhskii",
"Yevgeny Yermolin",
"Alex Karbachevsky",
"Alex M. Bronstein",
"Avi Mendelson"
] | Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable length coding. Our method outperforms the previous approach in terms of the number of bits per value with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to the quantized network, with negligible impact on accuracy. When allowing accuracy degradation of up to 2%, a reduction of 60% is achieved. A reference implementation accompanies the paper. | [
"compression",
"efficient inference",
"quantization",
"memory bandwidth",
"entropy"
] | Reject | https://openreview.net/pdf?id=BJeTCAEtDB | https://openreview.net/forum?id=BJeTCAEtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0GOvsrFwQh",
"Bke8bYAror",
"HklwO_ASjH",
"rylzWORrjH",
"rkgopztCcB",
"BJgWjNu0KS",
"BylZwqW2KS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723585,
1573411069703,
1573410926912,
1573410809617,
1572930242525,
1571878041163,
1571719768573
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1449/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1449/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1449/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1449/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposed the use of a lossy transform coding approach to to reduce the memory bandwidth brought by the storage of intermediate activations. It has shown the proposed method can bring good memory usage while maintaining the the accuracy.\\nThe main concern on this paper is the limited novelty. The lossy transform coding is borrowed from other domains and only the use of it on CNN intermediate activation is new, which seems insufficient.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Answer to Reviewer #2\", \"comment\": \"Thank you very much for your review, following are the answer to your concerns:\\n\\n1. In every 3-dimensional tensor (feature map), the PCA transform can be applied to various block shapes. In Fig B.1 we checked it and the more efficient shape was 1 x 1 x C. Choosing this shape has a big implementation advantage because it can be implemented using the convolution kernel (which is very efficient), with kernel size 1 x 1 and where the weights (along the C channels) are exactly the principal components. \\n\\n2. The work is in post-training regime means there is no labeled data and we do not run backpropagation. The PCA is calculated only on a single batch (calibration). After we calculate it, the PCA is fixed (as the convolutional weights) and is not changed - in that way it is much more efficient since the calculation of the PCA matrix is computationally expensive. \\n\\nAbout your question of employing the convolution together with BN (known as \\u201cfolding\\u201d): this is a common technique employed in hardware to reduce the amount of computation, described, for example in \\u201cQuantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference\\u201d by Jacob et al. In the same way, we fold the PCA into the previous convolutional layer for saving arithmetic complexity. We added a reference to the above mentioned paper.\\n\\n3. \\u201cTo avoid this, we calculate the covariance matrix layer by layer, gradually applying the quantization.\\u201d - We apply uniform quantization to all layers of the network. The idea of gradual quantization for the covariance matrix means that we first quantize the first layer and calculate its covariance matrix; only after quantizing the first layer, we proceed with quantizing the following layer (instead of quantizing all layers at the same time) - The idea behind this is that the covariance matrix includes the real statistics of the network that is affected by the quantization of the previous layers.\\n\\n4. \\u201cThe PCA matrix is calculated after quantization of the weights is performed, and is itself quantized to 8 bits.\\u201d - The weights and the PCA coefficients are quantized to 8 bits with standard uniform quantization (specifically, a mid-tread uniform quantizer to ensure 0 is one of the bins). The PCA matrix of the feature map k is calculated after the weights of convolution k are quantized to 8 bits, so the PCA contain the real statistics of the activations produced at inference.\\n5. In Figure 4 we show the efficiency of each part of the suggested algorithm:\\n\\u201cdirect quantization of the activations\\u201d:\\n* Only quantization of the feature maps with standard uniform quantization - marked as \\u201cQ\\u201d in Figure 4 left.\\n* quantization of PCA coefficients - applying PCA transform to the feature maps and quantizing the latter - marked as PCA \\u2014> Q in Figure 4 left.\\n* direct quantization followed by VLC - Applying quantization to the feature maps and then compressing them using VLC (No PCA) - marked as Q \\u2014> VLC in Figure 4 left.\\n* full encoder chain comprising PCA, quantization, and VLC - The full suggested method, including: applying PCA, quantizing the coefficients, and then applying VLC - marked as PCA \\u2014> Q \\u2014> VLC in figure 4 left. \\nThe figure suggests that the full method achieves highest performance.\\n \\n6. 
\u201cThis projection helps to concentrate most of the data in part of the channels, and then have the ability to write less data that layers.\u201d - In Figure E.1 we show what happens to the image after the projection onto the principal components. Because of the high correlation between channels, we can see that after the projection more information is concentrated in the first channels, while the last channels are almost constant. This shall be compared to the case where there is no projection and the information is spread across all channels. Concentration of information in a small number of coefficients is the key tool for achieving the high compressibility reported in the paper.\n\n7. Results of Inception V3 and other methods are reported in the appendix, Figure A.1 and Table A.1.\n\n8. Following your suggestion, we will add results for a smaller dataset (CIFAR10, Figure A.3 in the appendix) to the new version. We are also checking the generalization of the proposed method to other tasks.\"}",
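[Editor's illustration] As a companion to points 1-2 above (1 x 1 x C blocks, PCA folded into a convolution), here is a minimal sketch of the projection step. `pc_matrix` is assumed to be the C x C matrix of principal components precomputed on a calibration batch, and mean subtraction is omitted for brevity; this is not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def project_onto_pcs(fmap, pc_matrix):
    # Apply the per-location PCA transform as a 1x1 convolution: each output
    # channel is one principal component, i.e. a fixed linear combination of
    # the C input channels at the same spatial position.
    C = fmap.shape[1]
    weight = pc_matrix.view(C, C, 1, 1)  # out_channels x in_channels x 1 x 1
    return F.conv2d(fmap, weight)
```

Because the weights are fixed after calibration, this 1x1 convolution can be fused with the preceding convolutional layer, which is why the response above compares it to BN folding.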
"{\"title\": \"Answer to Reviewer #3\", \"comment\": \"Thank you very much for your comments and rating. As proposed, we uploaded a fixed version. For some reason one of the TeX packages interfered with it - we apologize for that.\"}",
"{\"title\": \"Answer to Reviewer #4\", \"comment\": \"We thank the reviewer for the detailed comments. In what follows, we address in detail the raised issues.\\n\\n1. The transform coding theory is based on previous work \\u2014 indeed, we referred to (Goyal 2001) as well as much older works in the field of image and video compression. However, its use in for neural networks showing the correlation that can be exploited to reduce the memory bandwidth in the activations tensors is novel and was not shown before. In addition, we showed a reference hardware implementation that confirms this theory.\\n\\n2. The implementation is divided into 2 parts:\\nA PyTorch implementation of the algorithm, including various modern architectures.\\nA reference implementation on an Altera FPGA that confirms the reduction in memory energy consumption during inference\\nBoth parts are fully replicable using the code that accompanies the paper. ASIC mplementation should be straightforward using the provided RTL \\u2014 we chose the FPGA target due to the easier prototyping cycle.\", \"regarding_the_use_of_cache_and_memory\": \"in this work, we focus on compression of the feature maps, since in modern systems the cache is insufficiently big to contain all the feature maps; for this reason, in every forward path, writes to the external DDR are inevitable. It was shown in (Yang et al., 2017) that this data movement is a significant constituent of the energy footprint. In our FPGA implementation, we used small buffers and no associative cache memories on the path to/from the DDR.\\n\\nThe computation of PCA does not require a lot of on-chip memory. In fact, it can be interpreted as another 1x1 convolution. It adds a certain computational overhead as detailed in Table C.1; yet, because of the efficient implementation of the convolution, it is negligible in comparison to the benefit in bandwidth reduction.\\n\\n3. The method can be used in any system where memory bandwidth significantly contributed to the energy footprint. This includes GPU-based systems. However, in order to be efficient, it requires hardware acceleration of certain operations such as VLC/VLD in the memory hierarchy, which currently lacks in existing GPUs.\\n\\n4. The method exploits spatial dependencies of the activations by coding blocks from the activation tensor. Figure B.1 visualizes the amount of compression achieved by different block configurations across the activation channels and spatial dimensions. The highest correlation was found across the different channels at the same spatial location.\\n\\n5. Table C.1 contains more details about the logic utilization and the memory energy consumption in the hardware implementation.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studies an important question: how to reduce memory bandwidth requirement in neural network computation and hence reduce the energy footprint. It proposes to use lossy transform coding before sending network output to memory. My concern with the paper is two-fold:\\n1) The major technique of transform-domain coding is borrowed from previous work (e.g., Goyal 2001), hence the novelty of the proposed method is in doubt.\\n2) The implementation details are not clear. For example, I don't know whether the implementation in section 3.1 is based on CPU or FPGA, and how easily Section 3.1 will be implemented on ASIC. For the experimental results are reported in Section 4, we do not know how much memory and how much cache is used. Will the computation of PCA require a lot of on-device memory?\", \"more_detailed_comments\": \"Section 1, 2nd paragraph: GPUs are event more popular than FPGAs and ASICs. Can the proposed method be useful for GPU inference?\\nSection 1, 3nd paragraph: The last sentence says \\\"high interdependence between the feature maps and spatial locations of the compute activations\\\". However, it is not clear to me how the proposed method takes spatial location into account.\", \"section_2\": \"better to review previous work In lossy transform coding\", \"figure_1\": \"It seems to me Figure 1 is obvious. What is the novelty?\", \"section_4\": \"better to report the details of computing units and memory size.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The submission proposes to reduce the memory bandwidth (and energy consumption) in CNNs by applying PCA transforms on feature vectors at all spatial locations followed by uniform quantization and variable-length coding.\", \"i_appreciate_the_writing_quality\": \"as an outsider to the field of low-power/low-precision deep learning, I found the write-up straightforward and easy to follow. It\\u2019s harder for me to precisely assess the significance of the proposed approach, but at a high level it looks reasonable and is backed by convincing empirical evidence.\", \"small_comment\": \"I don\\u2019t believe the submission is following the ICLR 2020 format strictly: the font looks different, and the margins look tighter.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"A lossy transform coding approach was proposed to reduce the memory bandwidth of edge devices deploying CNNs. For this purpose, the proposed method compresses highly correlated feature maps using variable length coding. In the experimental analyses, the proposed method outperforms some of the previous work in terms of the compression ratio and accuracy for training ResNet-34 and MobileNetV2.\\n\\nThe proposed method and initial results are promising. However, the paper and the work should be improved for a clear acceptance:\\n\\n- Some parts of the method need to be explained more clearly:\\n\\n\\u2013 In the statement \\u201cDue to the choice of 1 \\u00d7 1 \\u00d7 C blocks, the PCA transform essentially becomes a 1 \\u00d7 1 tensor convolution kernel\\u201d, what do you mean by \\u201cthe PCA transform becomes a convolution kernel.\\u201d?\\n\\n- Could you please further explain how you compute PCA using batch data, how you update online and how you employ that in convolution weights together with BN? Please also explain the following in detail:\\n\\n(I) \\u201cTo avoid this, we calculate the covariance matrix layer by layer, gradually applying the quantization.\\u201d What is the quantization method you applied, and how did you apply it gradually?\\n\\n(II) \\u201cThe PCA matrix is calculated after quantization of the weights is performed, and is itself quantized to 8 bits.\\u201d How did you quantize the weights, how did you calculate PCA using quantized weights and how did you quantize them to 8 bits?\\n\\n- Could you please explain the following settings, more precisely: direct quantization of the activations; quantization of PCA coefficients; direct quantization followed by VLC; and full encoder chain comprising PCA, quantization, and VLC? Please note that there are various methods and algorithms which can be used for these quantization steps. Therefore, please explain your proposed or employed quantization methods more clearly and precisely.\\n\\n\\u2013 Please clarify the statement \\u201cThis projection helps to concentrate most of the data in part of the channels, and then have the ability to write less data that layers.\\u201d.\\n\\n- Did you apply your methods to larger networks such as larger ResNets, VGG like architectures, Inception etc?\\n\\n- I also suggest you to perform experiments on different smaller and larger datasets, such as Cifar 10/100, face recognition datasets etc., to examine generalization of the proposed methods at least among different datasets.\"}"
]
} |
SJgaRA4FPH | Generative Models for Effective ML on Private, Decentralized Datasets | [
"Sean Augenstein",
"H. Brendan McMahan",
"Daniel Ramage",
"Swaroop Ramaswamy",
"Peter Kairouz",
"Mingqing Chen",
"Rajiv Mathews",
"Blaise Aguera y Arcas"
] | To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact. Manual inspection of raw data—of representative samples, of outliers, of misclassifications—is an essential tool in a) identifying and fixing problems in the data, b) generating new modeling hypotheses,
and c) assigning or refining human-provided labels. However, manual data inspection is risky for privacy-sensitive datasets, such as those representing the behavior of real-world individuals. Furthermore, manual data inspection is impossible in the increasingly important setting of federated learning, where raw examples are stored at the edge and the modeler may only access aggregated outputs such as metrics or model parameters. This paper demonstrates that generative models—trained using federated methods and with formal differential privacy guarantees—can be used effectively to debug data issues even
when the data cannot be directly inspected. We explore these methods in applications to text with differentially private federated RNNs and to images using a novel algorithm for differentially private federated GANs. | [
"generative models",
"federated learning",
"decentralized learning",
"differential privacy",
"privacy",
"security",
"GAN"
] | Accept (Poster) | https://openreview.net/pdf?id=SJgaRA4FPH | https://openreview.net/forum?id=SJgaRA4FPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vAFRmcIOJI",
"rJgi-kwNoH",
"rkeTWC8EiB",
"Bkl15TIEjB",
"SkeZlzEDqr",
"Bkl4cLB15S",
"Byg96ATCtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723557,
1573314306620,
1573314052972,
1573313926933,
1572450793099,
1571931788194,
1571901121835
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1448/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1448/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1448/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1448/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper provides methods for training generative models by combining federated learning techniques with differentiable privacy. The paper also provides two concrete applications for the problem of debugging models. Even though the method in the paper seems to be a standard combination of DP deep learning and federated learning, the paper is well-written and presents interesting use cases.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for their comments. We answer their specific questions in turn.\\n\\nQuestion #1 (Re: generality of approach to bugs, choice of bugs for paper)\\n\\nWe respectfully disagree with the reviewer\\u2019s conclusions on the generality of this approach (e.g., \\u201cThe two debugging illustrations are very specific in term of the errors introduced and the ways to achieve the debugging goal. It is not sure how they can be further generalized to other types of bugs.\\u201d), though we acknowledge this generality was not sufficiently described in our initial submission. We have significantly revised Section 2, including adding a new Section 2.1, which we hope resolves these concerns. In particular, apart from selecting a type of generative network that best applies to the problem domain (e.g., choosing RNNs to debug a language modeling task, choosing GANs to debug an image modeling task), no further assumptions need be made by the user about the nature of the data (or any bugs or biases therein).\\n\\nIndeed, it is precisely because the signal we are trying to detect is unknown that we recommend a generative model. Were the modeler to possess additional evidence that strongly indicated a particular type of bug, a simpler data analysis may be enough to detect it. (E.g., if the modeler of the image pipeline had reason to strongly suspect a priori that some user\\u2019s images were black/white inverted, they could instead compute per-user device average pixel values and use federated computation to aggregate into a histogram. The histogram would reveal that many devices had predominantly very white images.) But because we assume no such a priori knowledge, we desire an approach that is as general as possible.\\n\\nNote that in Section 2 we now reference a recently published survey paper providing a taxonomy of faults in deep learning systems (https://arxiv.org/abs/1910.11015). We feel this paper confirms our choice of bugs as being representative examples, as they are listed prominently (e.g., \\u2018text segmentation\\u2019, \\u2018pixel encoding\\u2019) under one of the largest subcategories of faults, \\u2018Preprocessing of Training Data\\u2019 (https://arxiv.org/abs/1910.11015, Figure 1).\\n\\n\\nQuestion #2 (Re: can approach be generalized into some methodologies, what are the limits)\\n\\nAs mentioned, we have attempted to make the general methodology prominent in Section 2. \\n\\nAssessing the limits of this approach is one of the most interesting questions to explore in future work. To make an analogy: In this paper we present the use of a new \\u2018sensor\\u2019 and show its promise; we hope to see the community take up research on this sensor so we can all work together to characterize its \\u2018signal-to-noise\\u2019 ratio. \\n\\nSome different limits that can be thought of are sensitivity (i.e., how much presence in the underlying data distribution is required before representative examples of a characteristic start to be synthesized) and fidelity (i.e., how realistic do synthesized examples need to be to detect a characteristic). We\\u2019ve attempted to discuss each in the paper; we welcome the reviewer\\u2019s feedback if the current discussion in the paper could be improved.\", \"sensitivity\": \"We empirically characterize the sensitivity limits of the approach in the paper, e.g., in Figure 4 and Table 4. 
There we show, for RNN language models trained with varying degrees of presence of the concatenation bug, the varying levels of UNKs noticed in the generated content. E.g., Figure 4 shows that when the bug is only present in 1% of sentences, the distribution of generated content is close to unchanged vs. the no-bug case; however, when the bug is present in 10% of sentences, a clear change in the distribution of UNKs is noticeable. Does the reviewer feel this is a useful empirical analysis of sensitivity limits?\", \"fidelity\": \"Section 2.1 now contains a discussion of the types of problems where lower-fidelity synthesis is ok and the problems where high fidelity will likely be necessary. Please let us know if this addresses the reviewer\u2019s concerns. (Thanks to this reviewer\u2019s and other reviewers\u2019 feedback, we realized this matter was less clearly discussed in the initial draft.)\n\nAgain, we hope this paper encourages new work in the generative modeling community, in particular to both assess current limits and hopefully push them further. (Towards this end, we call attention to open questions about fidelity and sensitivity in the Conclusion and Open Problems sections, respectively.) But we feel this initial paper demonstrates that there are realistic data inspection problems that exist today that can already be addressed with an approach like the one we describe, i.e., that it is useful within current limits of sensitivity and fidelity.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We appreciate the reviewer\\u2019s comments.\\n\\nFirst, we wish to clarify our view of the contributions of the paper. The principle contribution is not the introduction of new algorithms, but of a methodology for combining existing techniques together with a careful selection procedure in order to solve a large set of ML modeling challenges when working with decentralized data. While this observation may seem straightforward in hindsight, we do not believe it has been presented in any previous works. We have revised Section 2 and added a new Section 2.1 which hopefully makes this contribution more clear. While indeed we did need to make some algorithmic contributions (training user-level DP GANs on decentralized data for the first time), this is a secondary contribution.\\n\\nWe think something that was missed in the initial review of our paper was the uniqueness of combining federated learning, generative models, and user-level differential privacy. We respectfully disagree with the reviewer\\u2019s assessment of the level of previous work that\\u2019s been done at the intersection of these 3 areas. (E.g., the reviewer states our paper \\u201ccomes in the midst of many other works\\u201d; we feel this is erroneous, and revised the paper to make things more clear.)\\n\\nWe have significantly edited the related work section to explain how none of the existing methods directly apply to our setting (e.g., how Triascyn & Faltings 2019 uses a much weaker, empirical measure of privacy than our setting with user-level differential privacy). In the cases where existing results are applicable, we have in fact used them directly, e.g. adopting techniques from McMahan et al. 2018 and Chen et al. 2019. \\n\\nDoes our revised comparison in the related work section resolve the reviewer\\u2019s concerns?\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for their comments and observations, and are thrilled they enjoyed the paper and found the motivating problem and proposed solution compelling. We now address their list of \\u2018negatives\\u2019 in turn.\", \"need_for_realism\": \"With regards to the comment that generative models \\u201cneed to be very realistic in order to be useful\\u201d, it is our experience that this is not the case for many real-life problem examples, such as the pixel inversion and concatenation bugs we consider. The measure of utility for the applications described in our paper is not realism, but rather the ability to detect the presence of distinguishing characteristics in the mimicked distribution. We agree it is certainly the case that one could not distinguish all characteristics unless generating content to full realism, but it definitely the case that there are a broad set of characteristics that are distinguishable at well short of full realism. We feel the two problem examples demonstrate this characteristic: while not generating extremely realistic samples, they nevertheless convey a clear \\u2018signal\\u2019 that is useful to the modeler, e.g., the presence of bugs. But our work far from solves the problem we address, and we hope this encourages new work in the generative modeling community. \\n\\nThanks for the reviewer\\u2019s comment as we\\u2019ve updated the paper (e.g., Section 2.1) to better describe the types of problems where lower-fidelity synthesis is ok and the problems where high-fidelity will likely be necessary. Please let us know if this addresses the reviewer\\u2019s concerns.\\n\\nPrivacy budgets for hyperparameter sweeps (\\u201cSetting hyper parameters of any generative model also needs access to original data and impacts the privacy guarantees. \\u201d):\\n\\nWe believe what you are referring to is that identifying the correct hyperparameters typically requires a \\u2018sweep\\u2019 of values, each of which involves data queries against the private data; the privacy budget must account for all these queries, not simply the final training run. (If we have mistakenly interpreted your comment, we apologize, and would benefit from a clarification.)\\n\\nThis is absolutely true, and we made sure this paper raises this issue prominently. In Section 3 on DP Federated Generative Models, we conclude the discussion of DP by noting \\u201c.... that since the modeler has access to not only the auxiliary generative models, but also potentially other models trained on the same data, if an (eps, delta) guarantee is desired, the total privacy loss should be quantified across these models, e.g. ...\\u201d We also discuss the need for algorithms requiring minimal tuning as an import step for future work. Again, our contribution is primarily in highlighting this important problem, rather than solving it. Please let us know if you feel our current wording doesn\\u2019t properly convey this matter prominently enough; we certainly wish to call attention to it as we hope to see further research in this area.\\n\\nIt\\u2019s also true that this is a broader concern that impacts not just the generative models of our paper, but any ML (or other query-based) process that repeatedly samples from private data. 
There are some mitigations typically proposed (e.g., using a different, proxy dataset to work out the hyperparameter values before training on the actual private data), but this continues to be an active research area in the larger DP community, which we applaud. Along with benefiting everyone else working in DP ML (generative models or not, federated or not), it will definitely benefit those of us working with federated generative models.\n\nFinally, as the paper shows, GAN convergence did not take an exorbitant # of rounds (the generated image results we show are after 1000 rounds of federated training). So the volume of data queries being performed when training federated generative models is in line with the typical volume of data queries performed when doing any type of federated learning.\n\nQuibble #1 - DP bounds:\n\nWould the reviewer be able to clarify this comment further for our benefit? We regretfully have had trouble parsing their meaning the first time around. As DP gives us an upper bound on privacy loss, and we\u2019re achieving DP $(\\epsilon, \\delta)$ values at population scale that are indicative of a tight bound, we feel we\u2019ve shown that privacy loss is minimal? We must be misunderstanding something in the reviewer\u2019s comment/critique.\n\nQuibble #2 - Compare with other methods:\n\nWe have significantly edited the related work section to explain how none of the existing methods directly apply to our setting; in the cases where existing results are applicable, we have in fact used them directly, e.g. adopting techniques from McMahan et al. (2018) and Chen et al. (2019). Please let us know if this does not address these concerns.\"}",
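[Editor's note] As a reminder of the guarantee behind the $(\epsilon, \delta)$ values discussed in Quibble #1: a randomized mechanism $M$ is $(\varepsilon, \delta)$-differentially private if, for all adjacent datasets $D$, $D'$ (in the user-level setting of this paper, datasets differing in one user's data) and all events $S$ over $M$'s outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```

Smaller $\varepsilon$ and $\delta$ mean the released models can reveal correspondingly less about any single user, which is the sense in which this is an upper bound on privacy loss.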
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Goals\\nThe paper identifies a key challenge in a large class of real world federate learning problems where we also have to ensure user level data privacy. In these settings the modeler can not inspect the raw data samples from the user (due to privacy concerns) and hence all modeling tasks (from data wrangling to hypothesis generation to labeling to model class selection to validation) become far more challenging. The paper proposes that in these circumstances one may use a generative model that learns the data distribution using federated learning methods with provable differentiable privacy guarantees. The generative model can then produce data (unconditional, or conditional on some features or class labels) which can be inspected by the modeler without compromising user privacy. \\n\\nExperiments\\nThe authors illustrate the approach using existing federated DP RNN learning methods, and using a slightly novel GAN learning algorithm for images (largely similar to other algorithms). They use these methods to provide two examples: 1) learning a language model from text (word sequences) where there is a bug in pre-processing steps (tokenization); 2) learning a GAN for images of handwriting on checks where there is a pre-processing bug that inverts the grayscale of images. These examples illustrate the potential for such methods to possibly be useful to modelers. While one may quibble about some details (see section below) the experimental set up is reasonable to illustrate the need and some of the challenges modelers are likely to face in the real world (\\n\\nEvaluation & Questions\\n\\nI'm really torn because I really enjoyed the paper very much overall but I have some strong concerns as well.\", \"positives\": \"the paper is well motivated and very well written (it is really a pleasure to read, and it is very clear about the details -- especially after they release the code it should be possible to reproduce the results too). The authors shine a spot light on a problem that is very important & widespread (eg while learning from condifential data on cell phones). The proposed solution is fairly simple, intuitive, and quite high level (lets use a generative model that creates phantom data that can be inspected)\", \"negatives\": [\"I am not entirely sold on this being a realistic approach in the long term -- ie that some of the key problems will ever be solvable (I'm quite ok even if they are not solved now in the first paperr). The authors do a very good job of being transparent about several potential issues (see eg last paragraphs of main paper and appendix D). My biggest concerns are below:\", \"the phantom samples generated from the model need to be very realistic in order to be useful. In other words, we need to have excellent, high fidelity generate models. Even to create proper hypothesis, create proper model classes, assess convergence, or assess whether the generative model is good enough one needs to be able to inspect the raw data -- which can not be done in the first place. 
This cannot be entirely automated, eliminating the need for human inspection -- and the problem is much worse in generative models (which need to encode more information) than in discriminative models, which need to encode less information (bits) almost by definition. Thus one has simply traded the problem of needing to inspect data to model the final algorithm (which could be discriminative) and has to deal with the problem of needing data to inspect the intermediate, generative model (which is also learned in federated, DP-guaranteeing ways). It is not at all clear what one has accomplished by doing this.\", \"GANs are notoriously hard to train with mode collapse etc. Setting hyperparameters of any generative model also needs access to original data and impacts the privacy guarantees.\", \"***NOTE added after author response***\", \"The rebuttal has sufficiently addressed the quibbles I raised below. I'm leaving it here to allow traceability. I'm not fully convinced about the response to the main issue I raised above (ie if the generative model is not very representative, high-fidelity, then one can't know whether a potential bug discerned by inspecting its samples is an artefact of the generative model or whether it is truly a fundamental bug upstream -- and training a high-quality generative model also requires one to inspect the raw data in the first place so the problem has simply been swept under the carpet). Nevertheless for a first paper on the topic I think the contributions and intuition provided here are quite valuable so I am ok leaving this for future work.\", \"quibble#1: theoretical DP bounds are not very tight. For example, in Table 2 they may want to use realistic estimates instead of epsilon even to prove their high-level point. I'm not sure I can buy their argument even on this illustrative problem as it stands.\", \"quibble#2: You may want to at least make an effort to compare against the nearest possible methods in your experimental setup even if they are not a great match to the problem. I'm not intimately familiar with the recent literature but you mention Triascyn & Faltings (2019) so perhaps you could also use that and expand a bit more on the novelty here\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This work presents a method for using generative models to gain insight into sensitive user data, while maintaining guarantees about the privacy of that data via differential privacy (DP) techniques. This scheme takes place in the federated learning (FL) setting, where the data in question remains on a local device and only aggregate updates are sent to a centralized server. The intended application here is to use the trained generative models as a substitute for direct inspection of user data, thus providing more tools for debugging and troubleshooting deployed models in a privacy conscious manner.\", \"pros\": \"Given the growing computational power of mobile devices and the importance of privacy for large-scale deployment of machine learning, this work is a timely contribution that could augment the ML pipeline for at-scale applications dealing with sensitive data. The authors do a good job of fleshing out the intended use cases of their training scheme, and present a pair of experiments that are well-chosen for illustrating the utility of generative models when dealing with private data.\", \"cons\": \"Although likely of practical use, the work seems to be lacking in novelty in several respects. First, the techniques developed here represent a fairly straightforward merger of DP and FL tools without much in the way of qualitatively new offerings. While the authors do develop a new GAN training scheme that works in the FL setting, this adaptation is also pretty straightforward, and mostly follows the approach laid out in [1] for training recurrent neural nets.\\n\\nSecondly, this paper comes in the midst of many other works aiming to integrate different combinations of generative models, privacy, and distributed training (as pointed out in the related work section). While the particular combination of techniques here differ from those in previous work, the authors don't attempt to justify why their training scheme should be preferred over these prior methods. And although their experiments are useful for understanding the general utility of generative models trained in a private and decentralized setting, they unfortunately don't permit any direct comparison with the experiments used in these previous papers.\", \"verdict\": \"For the reasons given above, I cannot recommend acceptance of this work.\\n\\n[1] H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang, Learning differentially private recurrent language models, ICLR 2018\\n\\n*** Follow-up after authors' rebuttal ***\\n\\nI'd like to thank the authors for their rebuttal, and for the significant addition to the paper in the form of an expanded Section 2. This material has helped me gain a bit better perspective on the use cases for their work, and convinced me of the potential for their methodology within real-world development of deep learning tools and services. 
In addition, this added context helps to motivate the two experiments described here as fair representatives of actual debugging problems, and not simply issues that were hand-chosen to prove the authors' point.\n\nI still hold that the paper offers very little in the way of new conceptual or technical contributions, but in light of the potential utility of this privacy-conscious generative pipeline for the broader deep learning community, I have changed my score from a weak reject to a weak accept.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes a differentially private federated learning method to learn GAN with application to data bugging situations where privacy protection is needed. The proposed method tries to leave the data at the user-end to train the discriminators, and learn the generator at the centralised server. To support the debugging data related issues as claimed, two specific examples related to text and image modeling were presented. It is the generator which is DP-protected (as the discriminators are DP-protected) makes it possible where the generated data can hint the potential bugs.\", \"The scenario being considered is interesting and two real examples have used to illustrate the idea. However, this paper falls short in the following ways:\", \"It adopts what being proposed in McMahan et al. (2018) with some modifications to achieve the goal. The novelty is more related to the proposed application which allows debugging data issues to be possible when the data is private and decentralised.\", \"The two debugging illustrations are very specific in term of the errors introduced and the ways to achieve the debugging goal. It is not sure how they can be further generalized to other types of bugs.\", \"The paper is well written. However, the readers should have reasonable background on DP, GAN, federated learning, and generative models, or it will be hard to read through. Having said that, the authors do provide quite comprehensive literature review on related topics. But, then not much space is left for providing the necessary background and details for the proposed federated learning for GAN with DP (other than referring to Algorithm 1). The experiment section is good.\"], \"specific_questions\": [\"Other than the tokenisation bug and the image insertion bug, can more possible examples be described?\", \"Can the examples be generalised into some methodologies? And, what are the limits? Will there be data inspection needs which cannot be achieved by this approach? What are they?\"]}"
]
} |
rylT0AVtwH | Learning from Partially-Observed Multimodal Data with Variational Autoencoders | [
"Yu Gong",
"Hossein Hajimirsadeghi",
"Jiawei He",
"Megha Nawhal",
"Thibaut Durand",
"Greg Mori"
] | Learning from only partially-observed data for imputation has been an active research area. Despite promising progress on unimodal data imputation (e.g., image inpainting), models designed for multimodal data imputation are far from satisfactory. In this paper, we propose variational selective autoencoders (VSAE) for this task. Different from previous works, our proposed VSAE learns only from partially-observed data. The proposed VSAE is capable of learning the joint distribution of observed and unobserved modalities as well as the imputation mask, resulting in a unified model for various downstream tasks including data generation and imputation. Evaluation on both synthetic high-dimensional and challenging low-dimensional multimodal datasets shows significant improvement over state-of-the-art data imputation models. | [
"data imputation",
"variational autoencoders",
"generative models"
] | Reject | https://openreview.net/pdf?id=rylT0AVtwH | https://openreview.net/forum?id=rylT0AVtwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Cjd07CDw07",
"HJe_FU7soH",
"B1gi3o1oiS",
"SJxujtOdiH",
"HJlxtt_Osr",
"Skl-7ddujB",
"rJxyl7dOoB",
"ryxqjgdujS",
"r1gk4ck25B",
"BJlU6IBK9S",
"H1eOXTuD5B",
"H1ly2z2RFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723527,
1573758592341,
1573743538829,
1573583263685,
1573583224084,
1573582872631,
1573581543067,
1573580962495,
1572760102850,
1572587197766,
1572470047902,
1571893927407
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1447/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1447/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1447/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1447/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1447/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1447/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission proposes a VAE-based method for jointly inferring latent variables and data generation. The method learns from partially-observed multimodal data.\", \"strengths\": \"-Learning to generate from partially-observed data is an important and challenging problem.\\n-The proposed idea is novel and promising.\", \"weaknesses\": \"-Some experimental protocols are not fully explained.\\n-The experiments are not sufficiently comprehensive (comparisons to key baselines are missing).\\n-More analysis of some surprising results is needed.\\n-The presentation has much to improve.\\n\\nThe method is promising but the mentioned weaknesses were not sufficiently addressed during discussion. AC agrees with the majority recommendation to reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for a detailed reply!\", \"comment\": \"I would like to thank the authors for their detailed reply and the additional experiments they ran. Given a good rebuttal and improvements in the submission, I am increasing the score to weak reject, yet I still think that the manuscript is not well suited for publication in the current form. However, I encourage the authors to improve the paper and resubmit.\", \"here_are_my_comments_on_the_reply\": \"(1) Reconstruction from prior during training\\n> (I) optimize the final ELBO without conditional log-likelihood for unobserved modalities x_u\\nIn this alternative setting you do not get any training signal on how to reconstruct the missing modalities in the decoder. This is why I say that the way you provide the ground truth is the crucial ingredient of the model.\\nThere is no response on the blurriness of the samples.\\n\\n(2) Comparison with VAEAC\\n- The latest revision of the text still states that VAEAC cannot work with the partially-observed data: \\u201cVAE with arbitrary conditioning (VAEAC) which allows generation of missing data conditioned on any combination of observed data. This algorithm needs complete data during training and cannot learn from partially-observed data only.\\u201d\\n-Given your results, it seems that VAEAC is competitive in the partially-observed setting, and so should probably be added as a baseline.\\n- I still don\\u2019t understand why the VAEAC results are better on some sets and worse on others. More generally, changing the experimental setting compared to previous works without a clear reason is not a good scientific practice.\\n\\n(3) Experiments under synthetic non-MCAR masking\\n(5) Conditional imputation:\\nThank you for adding these experiments. I believe they strengthen the paper by showing that the model can handle non-MCAR masking and produce a diverse set of samples given the observed modalities.\\n\\n(4) Baselines:\\nI don\\u2019t find the response on GANs convincing. The paper uses metrics such as RMSE which do not really require the model to produce diverse samples, so GANs seem like a good baseline. Furthermore, there is no reply on the non-deep learning baselines.\\n\\n(6.2) NRMSE > 1\\nI did not understand this argument. Surely NRMSE can be arbitrarily large, but a well-tuned algorithm should probably obtain NRMSE < 1, i.e. perform better than a constant predictor set to the true mean. VAEAC paper reports NRMSE of 0.87-0.91 for Glass, which is consistent with this.\"}",
"{\"title\": \"Response to the Reply\", \"comment\": \"I like the method and the problem to tackle. I went through the response, and I appreciated that the author had tried to address the comments.\\n\\nHowever, the two main problems are remaining:\\n1) I don't feel comfortable if you called low-dimensional tabular data as multimodal data. I think the presentation before the experimental section is over-claimed, but the experiments should be more comprehensive. Besides, why separating the three modalities in CMU-MOSI/ ICT-MMMP? It feels weird to me since the author claimed the method should be multimodal. \\n\\n2) The second one is more serious. I don't feel the presentation flow of the paper meets the bar for a top conference paper. I do think after a significant effort in the presentation, the quality of the paper can be significantly improved.\"}",
"{\"title\": \"Reply to Reviewers\", \"comment\": \"We would like to thank all reviewers for their thorough and valuable feedback. We discuss the questions and concerns below and provide clarifications in the updated paper.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"(1) Multimodal setting:\\nWe apologize for not describing experimental settings clearly. In general, we believe multi-modal data is more general than simply image-text or video-text pair. By unifying tabular data also as multi-modal data (with each attribute as one modality), we show that VASE provides us a principled way for imputation and is capable of generalizing to more data families. We update additional multimodal dataset experiments in the point (3) below.\\n\\n(2) Prediction and Representation learning:\\nWe consider conducting these experiments during the rebuttal but none of the paper's code has been released by the authors. We agree deep latent variable models explicitly model the data distribution and provide a natural way for representation learning, but in our paper we evaluate the model from the perspective of imputation and generation.\\n\\n(3) Additional experiments:\\nWe updated additional imputation experiments on multimodal datasets (see in Appendix C.5) : CMU-MOSI/ICT-MMMO (Tsai et al. 2019), FashionMNIST/MNIST (Wu et al. 2018). Each dataset contains two or three modalities. VSAE outperforms other baselines on multimodal datasets under partially-observed setting. \\n\\n(4) Require mask during training:\\nIn our experiments, the binary mask is always fully-observed as is the nature of partially-observed data. A mask simply indicates which modalities are observed and which are not. We agree that it is very interesting to design a model with partially-observed or even unobserved mask. \\nHowever, it is beyond the scope of this work and we will consider it in future work.\\n\\n\\n[1] Wu et al. Multimodal Generative Models for Scalable Weakly-Supervised Learning, NeurIPS 2018. \\n[2] Tsai et al. Learning Factorized Multimodal Representation, ICLR 2019.\"}",
"{\"title\": \"Reply to Reviewer #4\", \"comment\": \"(1) Reconstruction from prior during training:\\nThe crux of the proposed model is the selective proposal distribution. \\\"Pseudo\\\" sampling for unobserved modalities during training provides a way to facilitate model training process. We evaluated the model under two training settings: (I) optimize the final ELBO without conditional log-likelihood for unobserved modalities x_u; and (II) optimize the final ELBO with conditional log-likelihood of unobserved modalities. This is realized by utilizing the \\\"pseudo\\\" sampling described before (and in the paper).\\nThe results are comparable but the added term in setting II shows benefits on some datasets. While setting I is solely based on the observed modalities, the setting II incorporates the unobserved modalities along with the observed ones. By using the complete data, the setting II describes the complete ELBO corresponding to the partially observed multimodal data (in consideration).\\n\\n(2) Comparison with VAEAC:\\nIn order to establish fair comparison, we used the same backbone network structures and training criteria for all baseline models and our proposed VSAE. Therefore, the implementation details differ from the original VAEAC paper. We did our best to maintain the optimization details described in all baseline papers.\\nExperiments on VAEAC with partially-observed data are also conducted. Results show that VAEAC under this setting can achieve comparable performance on categorical datasets: 0.245(0.002) on Phishing, 0.399(0.011) on Mushroom while the errors of VSAE are 0.237(0.001) on Phishing, 0.396(0.008) on Mushroom. However, on numerical and bimodal datasets, partially trained VAEAC performs worse than VSAE :\\n*VSAE: \\n0.455(0.003) on Yeast; 1.312(0.021) on Glass;0.1376(0.0002) on MNIST+MNIST; 0.1198(0.0001) on MNIST+SVHN; \\n*VAEAC trained partially:\\n0.878(0.006) on Yeast; 1.846(0.037) on Glass;0.1402(0.0001) on MNIST+MNIST; 0.2126(0.0031) on MNIST+SVHN.\\n\\n(3) Experiments under synthetic non-MCAR masking:\\nAs mentioned by the reviewer, we conduct experiments on non-MCAR masking following state-of-the-art non-MCAR model MIWAE [2]. Same as MIWAE, we synthesize masks by defining some rules to specify the probability of a Bernoulli distribution. Please refer to Table 3 and Appendix C.4 for updated comparison results. VSAE outperforms MIWAE under all MCAR, MAR and NMAR masking mechanisms. \\n\\n(4) Baselines:\\nAll baselines considered in the paper are designed to have comparable number of parameters (same or larger than our model) to make the comparison fair. We have updated the baseline details in the Appendix B.3. Although GAN-based models show promising imputation results, they usually fail to model data distribution properly. Therefore, we do not consider them as our baseline models. It is also important to note that VSAE is not a model designed only for imputation, but a generic framework to learn from partially-observed data for both imputation and generation. \\n\\n(5) Conditional imputation:\\nWhen performing imputation, we assume that the generation is not conditioned on the observed image, but only conditioned on the factorized latent variables. Input an observed image to the model, we observe a \\\"conditional\\\" distribution if we independently sample from the latent variables. See Figure.7 in updated Appendix C.2. \\n\\n(6) Answers to the questions:\\n1. Please refer to point (2) for detailed explanation on comparison with VAEAC. 
In summary, there are multiple reasons why the performance is not identical with the original VAEAC: (I) the back-bone structures are not the same; (II) training criteria (including batch size, learning rate, etc.) are not the same; and (III) training/validation/test split is different. We would like to emphasize that the aforementioned changes are necessary to establish fair comparison.\\n\\n2. We adopt the calculation from [1] where NRMSE is RMSE normalized by the standard deviation of each feature followed by an average over all imputed features. The standard deviation of ground truth features does not guarantee NRMSE < 1. \\n\\n\\n[1] Ivanov et al.Variational Autoencoder with Arbitrary Conditioning, ICLR 2019\\n[2] Mattei et al. MIWAE: Deep Generative Modelling and Imputation of Incomplete Data Sets, ICML 2019\"}",
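To make the NRMSE convention discussed in the reply above concrete, here is a minimal sketch (a hypothetical helper, not the authors' code) that normalizes the per-feature RMSE over imputed entries by the ground-truth standard deviation and then averages across imputed features. As the reply notes, this normalization does not bound the score by 1.

```python
import numpy as np

def nrmse(x_true, x_imputed, observed_mask):
    """Per-feature RMSE over imputed entries, normalized by that feature's
    ground-truth std, averaged over all imputed features (the convention
    attributed to Ivanov et al. (2019) in the reply above).
    Arrays are (n_samples, n_features); mask is 1 where observed, 0 where imputed."""
    imputed = observed_mask == 0
    scores = []
    for j in range(x_true.shape[1]):
        idx = imputed[:, j]
        if not idx.any():
            continue  # feature j was never imputed
        rmse = np.sqrt(np.mean((x_true[idx, j] - x_imputed[idx, j]) ** 2))
        scores.append(rmse / x_true[:, j].std())
    return float(np.mean(scores))
```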
"{\"title\": \"Reply to Reviewer #1\", \"comment\": \"We would like to thank the reviewer for providing valuable and detailed feedback. We have addressed the clarity concerns in the updated paper. Figure captions, metrics used in the table, etc, as mentioned in the presentation section of the review have been carefully examined and updated in the paper.\\nWe will reorganize the experiment section to better present the comparisons under different experimental settings.\\n\\n(1) Factorized Latent Variables:\\nThe factorization of latent space with respect to the modalities provides a way to differentiate observed and unobserved modalities. Therefore, VSAE is capable of handling partially-observed data where the missing modalities can be arbitrary. In addition, the embeddings are intuitively more meaningful as input to unimodal encoders is now limited to only observed modalities, eliminating the effect of missing modalities.\\nWhen performing imputation/generation, however, we want to capture the dependencies between modalities. In other words, unobserved modalities should be imputed based on the information extracted from observed modalities. For experiments, we design this by conditioning decoders on all latent variables, essentially accessing information from all observed modalities. This is not in contradiction to the factorized latent variable assumption. Instead, the encoders try to embed each modalities individually, while decoders learn the dependencies between different modalities.\\n\\n(2) Multimodal Experiments:\\nWe apologize for unclear description of experimental settings. In general, we believe multi-modal data is more general than conventional image-text or video-text pairs. By unifying tabular data also as multi-modal (with each attribute as one modality), we show that VSAE provides us a principled way for imputation, capable of generalizing to more data families. Specifically, we conducted experiments on two types of data: \\n(1) low-dimensional tabular data, and (2) high-dimensional data (pixel or text) as \\\"multimodal\\\" to better define the overall task of learning from partially-observed data. \\nUpon request, we have included more extensive experiments following [1] on MNIST/FashionMNIST, and [2] on CMU-MOSI/ICT-MMMO. Results are reported in Table 10 and Table 11 (Appendix C.5). As shown, VSAE consistently outperforms baseline models across the added experiments as well. \\n\\n(3) Discussions on Comparison with Upper Bound Methods:\\nModels trained with fully-observed data in theory should have better performance, thus we treat them as upper bound methods. However, it is very interesting to observe that in some cases, VSAE have superior performances. One possible explanation is that missing modalities introduces extra noise into the model as regularizer, thereby, increasing the generalization ability. However, detailed experiments and more discussions need to be carried out to back up this explanation. \\n\\n\\n[1] Wu et al. Multimodal Generative Models for Scalable Weakly-Supervised Learning, NeurIPS 2018. \\n[2] Tsai et al. Learning Factorized Multimodal Representation, ICLR 2019.\"}",
"{\"title\": \"Reply to Reviewer #5\", \"comment\": \"(1) Prior Network:\\nDuring training phase, we sample from prior network to generate \\\"pseudo\\\" observations for unobserved modalities. The pseudo observations are then used to estimate the conditional likelihood for such modalities (E_x_j in the ELBO).\\nPractically, we follow a two-stage method in our implementation. At each iteration, the first stage imputes unobserved modalities (with latent code sampled from approximate posterior for observed modalities, and prior for unobserved modalities), followed by the second stage to estimate ELBO based on the imputation and backpropagate corresponding gradients.\\n\\n(2) Conditioning on Ground-Truth Mask:\\nWe conduct experiments with decoder p(x|z, m) conditioned on the original mask in training set, and observe comparable performance and convergence time. The mask distribution might be easier to learn as compared to data distribution (since the mask is fully-observed). However, we argue that jointly learning the mask distribution and data distribution provides us an opportunity to further analyze the missing mechanism and potentially can facilitate other down-stream tasks. \\n\\n(3) Image Inpainting:\\nWe appreciate the reviewer's suggestion on evaluate the effectiveness of our model on image inpainting task. \\nHowever, with our current setup, an encoder is trained for each modality respectively, making it difficult to scale to inpainting task, if we treat each pixel as an individual modality. \\nNevertheless, we believe this is an interesting extension. The backbone models and mathematical formulations can be very similar, if not the same. A potential solution could be to employ patch level encoders to reduce the total number of encoders needed.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper proposes variational selective autoencoders (VSAE) to learn the joint distribution model of full data (both observed and unobserved modalities) and the mask information from arbitrary partial-observation data. To infer latent variables from partial-observation data, they introduce the selective proposal distribution that switches encoders depending on whether each input modality is observed.\\n\\nThis paper is well written, and the method proposed in this paper is nice. In particular, the idea of the selective proposal distribution is interesting and provides an effective solution to deal with the problem of missing modality in conventional multimodal learning. The experiment is also well structured and shows higher performance than the existing models. However, I have some questions and comments, so I\\u2019d like you to answer them.\", \"comments\": [\"The authors state that x_j is sampled from the \\\"prior network\\\" to calculate E_x_j in Equation 10, but I didn\\u2019t understand how this network is set up. Could you explain it in detail?\", \"The authors claim that adding p(m|z) to the objective function (i.e., generating m from the decoder) allows the latent variable to have mask information. However, I don\\u2019t know how effective this is in practice. Specifically, how performance differs compared to when p (m | z) is not used and the decoder p (x | z, m) is conditioned by the mask included in the training set instead of the generated mask?\", \"Why did you not do image inpainting in higher-dimensional experiments like Ivanov et al. (2019), i.e., considering each pixel as a different modality? Of course, I know that Ivanov et al. require the full data as input during training, but I\\u2019m interested in whether VSAE can perform inpainting properly even if trained given imperfect images.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposed variational selective autoencoders (VSAE) to learn from partially-observed multimodal data. Overall, the proposed method is elegant; however, the presentation, the claim, and the experiments suffer from significant flaws. See below for detailed comments.\\n\\n[Pros]\\n1. The main idea of the paper is to propose a generative model that can handle partially-observed multimodal data during training. Specifically, prior work considered non-missing data during training, while we can't always guarantee that all the modalities are available. Especially in the field of multimodal learning, we often face the issue of imperfect sensors. This line of work should be encouraged. \\n\\n2. In my opinion, the idea is elegant. The way the author handles the missingness is by introducing an auxiliary binary random variable (the mask) for it. Nevertheless, its presentation and Figure 1 makes this elegant idea seems over-complicated.\\n[Cons]\\n\\n1. [The claim] One of my concerns for this paper is the assumption of the factorized latent variables from multimodal data. Specifically, the author mentioned Tsai et al. assumed factorized latent variables from the multimodal data, while Tsai et al. actually assumed the generation of multimodal data consists of disentangled modality-specific and multimodal factors. It seems to me; the author assumed data from one modality is generated by all the latent factors (see Eq. (11)), then what is the point for assuming the prior of the latent factor is factorized (see Eq. (4) and (5))? One possible explanation is because we want to handle the partially-observable issues from multimodal data, and it would be easier to make the latent factors factorized (see Eq. (6)). The author should comment on this. \\n\\n2. [Phrasing.] There are too many unconcise or informal phrases in the paper. For example, I don't understand what does it mean in \\\"However, if training data is complete, ..... handle during missing data during test.\\\" Another example would be the last few paragraphs on page 4; they are very unclear. Also, the author should avoid using the word \\\"simply\\\" too often (see the last few paragraphs on page 5). \\n\\n3. [Presentation.] The presentation is undesirable. It may make the readers hard to follow the paper. I list some instances here. \\n\\t\\ta. In Eq. (3), it surprises me to see the symbol \\\\epsilon without any explanation. \\n\\t\\tb. In Eq. (6), it also surprises me to see no description of \\\\phi and \\\\psi. The author should also add more explanation here, since Eq. (6) stands a crucial role in the author's method.\\n\\t\\tc. Figure 1 is over-complicated.\\n\\t\\td. What is the metric in Table 1 and 2? The author never explains. E.g., link to NRMSE and PFC to the Table. \\n\\t\\te. What are the two modalities in Table 2? The author should explain.\\n\\t\\tf. The author completely moved the results of MNIST-SVHN to Supplementary. It is fine, but it seems weird that the author still mentioned the setup of MNIST+SVHN in the main text. \\n\\t\\tg. The author mentioned, in Table , the last two rows serve the upper bound for other methods. While some results are even better than the last two rows. The author should explain this.\\n\\t\\th. 
Generally speaking, the paper does require a significant effort to polish Section 3 and 4. \\n\\n4. [Experiments.] The author presented a multimodal representation learning framework for partially-observable multimodal data, while the experiments cannot corraborrate the claim. First, I consider the tabular features as multi-feature data and less to be the multimodal data. Second, the synthetic image pairs are not multimodal in nature. These synthetic setting can be used for sanity check, but cannot be the main part of the experiments. The author can perhaps consider the datasets used by Tsai et al. There are seven datasets, and they can all be modified to the setting of partially-observable multimodal data. Also, since the synthetic image pairs are not multimodal in nature, it is unclear to me for what the messages are conveyed in Figure 3 and 4. \\n\\n\\nI do expect the paper be a strong submission after a significant effort in presentation and experimental designs. Therefore, I vote for weak rejection at this moment.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes a novel training method for variational autoencoders that allows using partially-observed data with multiple modalities. A modality can be a whole block of features (e.g., a MNIST image) or just a single scalar feature. The probabilistic model contains a latent vector per modality. The key idea is to use two types of encoder networks: a unimodal encoder for every modality which is used when the modality is observed, and a shared multimodal encoder that is provided all the observed modalities and produces the latent vectors for the unobserved modalities. The whole latent vector is passed through a decoder that predicts the mask of observed modalities, and another decoder that predicts the actual values of all modalities. The \\u201cground truth\\u201d values for the unobserved modalities are provided by sampling from the corresponding latent variables from the prior distribution once at some point of training.\\n\\nWhile I like the premise of the paper, I feel that it needs more work. My main concern is that sampling the target values for the unobserved modalities from the prior would almost necessarily lead to blurry synthetic \\u201cground truth\\u201d for these modalities, which in turn means that the model would produce underconfident predictions for them. The samples from MNIST in Figure 3 are indeed very blurry, supporting this. Furthermore, the claims of the model working for non-MCAR missingness are not substantiated by the experiments. I believe that the paper should currently be rejected, but I encourage the authors to revise the paper.\", \"pros\": [\"Generative modelling of partially observed data is a very important topic that would benefit from fresh ideas and new approaches\", \"I really like the idea of explicitly modelling the mask/missingness vector. I agree with the authors that this should help a lot with non completely random missingness.\"], \"cons\": [\"The text is quite hard to read. There are many typos (see below). The text is over the 8 page limit, but I don\\u2019t think this is justified. For example, the paragraph around Eqn. (11) just says that the decoder takes in a concatenated latent vector. The MNIST+SVHN dataset setup is described in detail, yet there is no summary of the experimental results, which are presented in the appendix.\", \"The approach taken to train on partially-observed data is described in three sentences after the Eqn. (10). The non-observed dimensions are imputed by reconstructions from the prior from a partially trained model. I think that this is the crux of the paper that should be significantly expanded and experimentally validated. It is possible that due to this design choice the method would not produce sharper reconstructions than the original samples from the prior. Figures 3, 5 and 6 indeed show very blurry samples from the model. 
Furthermore, it is not obvious to me why these prior samples would be sensible at all, given that all modalities have independent latents by construction.\", \"The paper states multiple times that VAEAC [Ivanov et al., 2019] cannot handle partially missing data, but I don\\u2019t think this is true, since their missing features imputation experiment uses the setup of 50% truly missing features. The trick they use is adding \\u201csynthetic\\u201d missing features in addition to the real ones and only train on those. See Section 4.3.3 of that paper for more details.\", \"The paper states that \\u201cit can model the joint distribution of the data and the mask together and avoid limiting assumptions such as MCAR\\u201d. However, all experiments only show results in the MCAR setting, so the claim is not experimentally validated.\", \"The baselines in the experiments could be improved. First of all, the setup for the AE and VAE is not specified. Secondly, it would be good to include a GAN-based baseline such as GAIN, as well as some more classic feature imputation method, e.g. MICE or MissForest.\", \"The experiments do not demonstrate that the model learns a meaningful *conditional* distribution for the missing modalities, since the provided figures show just one sample per conditioning image.\"], \"questions_to_the_authors\": \"1. Could you comment on the differences in your setup in Section 4.1 compared to the VAEAC paper? I\\u2019ve noticed that the results you report for this method significantly differ from the original paper, e.g. for VAEAC on Phishing dataset you report PFC of 0.24, whereas the original paper reports 0.394; for Mushroom it\\u2019s 0.403 vs. 0.244. I\\u2019ve compared the experimental details yet couldn\\u2019t find any differences, for example the missing rate is 0.5 in both papers.\\n2. How do you explain that all methods have NRMSE > 1 on the Glass dataset (Table 1), meaning that they all most likely perform worse than a constant baseline?\", \"typos_and_minor_comments\": [\"Contributions (1) and (2) should be merged together.\", \"Page 2: to literature -> to the literature\", \"Page 2: \\u201cThis algorithm needs complete data during training cannot learn from partially-observed data only.\\u201d\", \"Equations (1, 2): z and \\\\phi are not consistently boldfaced\", \"Equations (4, 5): you can save some space by only specifying the factorization (left column) and merging the two equations on one row\", \"Page 4, bottom: use Bernoulli distribution -> use factorized/independent Bernoulli distribution\", \"Page 5, bottom: the word \\u201csimply\\u201d is used twice\", \"Page 9: learn to useful -> learn useful\", \"Page 9: term is included -> term included\", \"Page 9: variable follows Bernoulli -> variable following Bernoulli\", \"Page 9: conditions on -> conditioning on\"]}",
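To make the encoder-switching mechanism described in this review concrete, here is a minimal sketch (a hypothetical re-implementation, not the authors' code): each modality has its own unimodal encoder, used when that modality is observed, while a shared multimodal encoder, fed only the observed modalities, supplies the latent distribution for the unobserved ones. Both encoder types are assumed to return (mean, log-variance) pairs, and for clarity the mask is assumed shared across the batch.

```python
import torch
import torch.nn as nn

class SelectiveProposal(nn.Module):
    """Sketch of the 'selective proposal' idea: switch between unimodal and
    shared multimodal encoders per modality, based on the observation mask."""

    def __init__(self, unimodal_encoders, multimodal_encoder):
        super().__init__()
        self.unimodal = nn.ModuleList(unimodal_encoders)  # one encoder per modality
        self.multimodal = multimodal_encoder              # shared across modalities

    def forward(self, xs, mask):
        # xs: list of per-modality tensors (batch, d_i); mask: list of bools.
        # Zero out unobserved modalities so the shared encoder sees a fixed-width input.
        observed = torch.cat([x * float(m) for x, m in zip(xs, mask)], dim=-1)
        shared_stats = self.multimodal(observed)  # assumed: list of (mu, logvar) per modality
        zs = []
        for i, x in enumerate(xs):
            mu, logvar = self.unimodal[i](x) if mask[i] else shared_stats[i]
            zs.append(mu + torch.randn_like(mu) * (0.5 * logvar).exp())  # reparameterize
        return zs  # factorized latent codes, one per modality
```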
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper proposes to impute multimodal data when certain modalities are present. The authors present a variational selective autoencoder model that learns only from partially-observed data. VSAE is capable of learning the joint\\ndistribution of observed and unobserved modalities as well as the imputation mask, resulting in a model that is suitable for various down-stream tasks including data generation and imputation. The authors evaluate on both synthetic high-dimensional and challenging low-dimensional multimodal datasets and show improvement over the state-of-the-art data imputation models.\", \"strengths\": [\"This is an interesting paper that is well written and motivated.\", \"The authors show good results on several multimodal datasets, improving upon several recent works in learning from missing multimodal data.\"], \"weaknesses\": \"- How multimodal are the datasets provided by UCI? It seems like they consist of different tabular datasets with numerical or categorical variables, but it was not clear what the modalities are (each variable is a modality?) and how correlated the modalities are. If they are not correlated at all and share no joint information I'm not sure how these experiments can represent multimodal data. \\n- Some of the datasets the authors currently test on are quite toy, especially for the image-based MNIST and SVHN datasets. They should consider larger-scale datasets including image and text-based like VQA/VCR, or video-based like the datasets in (Tsai et al., ICLR 2019).\\n- In terms of prediction performance, the authors should also compare to [1] and [2] which either predict the other modalities completely during training or use tensor-based methods to learn from noisy or missing time-series data.\\n- One drawback is that this method requires the mask during training. How can it be adapted for scenarios where the mask is not present? In other words, we only see multiple modalities as input, but we are not sure which are noisy and which are not?\\n\\n[1] Pham et al. Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities, AAAI 2019 \\n[2] Liang et al. Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization, ACL 2019\\n\\n### Post rebuttal ###\\nThank you for your detailed answers to my questions.\"}"
]
} |
SJl3CANKvB | A SIMPLE AND EFFECTIVE FRAMEWORK FOR PAIRWISE DEEP METRIC LEARNING | [
"Qi Qi",
"Yan Yan",
"Zixuan Wu",
"Xiaoyu Wang",
"Tianbao Yang"
] | Deep metric learning (DML) has received much attention in deep learning due to its wide applications in computer vision. Previous studies have focused on designing complicated losses and hard example mining methods, which are mostly heuristic and lack theoretical understanding. In this paper, we cast DML as a simple pairwise binary classification problem that classifies a pair of examples as similar or dissimilar. This view identifies the most critical issue in this problem---imbalanced data pairs. To tackle this issue, we propose a simple and effective framework to sample pairs in a batch of data for updating the model. The key to this framework is to define a robust loss for all pairs over a mini-batch of data, which is formulated by distributionally robust optimization. The flexibility in constructing the {\it uncertainty decision set} of the dual variable allows us to recover state-of-the-art complicated losses and also to induce novel variants. Empirical studies on several benchmark data sets demonstrate that our simple and effective method outperforms state-of-the-art results. | [
"Deep Metric Learning",
"Distributionally Robust Optimization"
] | Reject | https://openreview.net/pdf?id=SJl3CANKvB | https://openreview.net/forum?id=SJl3CANKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"E8WAc3bqm8",
"ryguiHzojH",
"rJxRWXGsjB",
"BJgxWlzsoS",
"HJgZmWvTtr",
"SyevOXVstr",
"H1lyrJVHtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723499,
1573754271598,
1573753606394,
1573752824248,
1571807513465,
1571664751435,
1571270455145
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1446/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1446/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1446/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers agree that this is a reasonable paper but somewhat derivative. The authors discussed the contribution further in the rebuttal, but even in light of their comments, I consider the significance of this work too low for acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Difference from traditional DRO\", \"comment\": \"Thanks for your comments! For differences between our framework and traditional DRO method, please also check response to Reviewer 3. We want to emphasize that the modifications (i.e., defining over a mini-bath for the robust loss, and more general and flexible regularization of the dual variables) are subtle but very important for achieving better empirical results than complicated losses and bringing more theoretical insights for complicated losses.\"}",
"{\"title\": \"Significant improvement over MS loss\", \"comment\": \"We believe that our experimental improvement over MS loss is significant.\\nPlease note that (from Table 1 and 3), compared with the best baselines, MS improved 2.1% (over Margin), -1.1% (over ABE), 2.4% (over ABE) on Cub-200-2011, Cars-196 and In-Shop in Recall@1, respectively. On the other hand, our best DRO variant always achieves the best performance, improving 2.4% (over MS) on Cub-200-2011, 1.2% (over ABE), 2.5% (over MS) on Cars-196, 1.6% over MS on In-Shop. Among these variants of our framework, DRO-KL_M improves 2.0% (over MS), 2.5% (over MS), and 1.1% (over MS) on Cub-200-2011, Cars-196 and In-Shop, respectively. \\n\\nIn our ablation study, we show that DRO-KL-G could recover the performance of MS and LS by changing the hyper parameter \\\\gamma (Table 2 and 4). Furthermore, tuning \\\\gamma helps DRO-KL-G outperform MS in Recall@1 by 1.4%, 1.8% and 6% on Cub-200-2011, Cars-196 and In-Shop, respectively.\"}",
"{\"title\": \"Difference from traditional DRO framework and our contributions to DML\", \"comment\": \"We would emphasize that our framework is not a straightforward application of DRO. Instead, by addressing the critical issues in DML, our framework is an effective and general approach to DML. We summarize two significant contributions of our paper.\\n\\nFirst, our framework is more general, flexible and practical than traditional DRO. While traditional DRO usually restricts the dual variable to be on a simplex, our framework is built upon the minibatch and its uncertainty set for the dual variable is not necessarily restricted to such probability simplex. Please note that this is very important for us 1) to make the proposed approach practical for big data than traditional DRO defined on the whole data set; 2) to recover the approaches based on MS loss and LS loss by choosing different regularizations on the dual variables. It is notable that such recovery hinges on special grouping of the dual variables, which is not possible under traditional DRO framework; 3) to design more powerful variants such as DRO-TopK-PN, which is less sensitive to the positive to negative ratio (shown in Figure 1). \\n\\nSecond, our framework introduces significant contributions to DML by 1) connecting loss development and sampling strategy design together, 2) justifying the existing loss functions in DML (e.g., MS and LS loss) from our general framework, and 3) providing insights to design new variants.\\nIn literature, as mentioned in our paper, many studies of DML either focus on developing increasingly complicated minibatch losses which rarely provide deep insights for why it is effective, or designing sampling strategies in terms of pairs. Consequently, developing loss functions and designing sampling strategies are two independent research directions, and may not benefit from each other. However, in our framework, we unify these two lines of work into a general framework to jointly take advantage of sampling and loss development. Specifically, the sampling weights according to the variable p is updated based on the relative magnitude of pairwise losses within a mini-batch at each iteration. Then the constructed robust loss is able to promote performance, which has been extensively demonstrated in our experiments.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper casts deep metric learning (DML) as a pairwise binary classification problem such that pairs of examples need to be classified as similar or dissimilar. The authors propose an objective function that computes a weighted sum over the pairwise losses in a mini-batch. The weight vector is selected to maximize the objective from a decision set encoding constraints. This formulation is called the distributionally robust optimization (DRO) framework.\\n\\nThe authors argue that the DRO framework is theoretically justified by showing how certain decision sets result in existing machine learning loss functions. This portion of the paper seemed hand-wavy. It is not clear what is the purpose of including the theorem from Namkoon & Duchi. It would be more clear in my view to just make the short point that a certain decision set recovers the DRO with f-divergence as would be expected. The claims with regard to learning theory are over-stated in the paper.\\n\\nThe authors proposed three variants of the general framework. They include a top-K formulation, a variance-regularized version, and a top-K version using a balance between positive and negative examples. The DRO framework and the variants are the main contributions in terms of methodology in this paper. It is also shown that the framework generalizes more complicated recently proposed losses.\\n\\nThe experiments demonstrate the DRO framework consistently outperforms state of the art deep metric learning methods on benchmark datasets by small margins. There is also a computational speed advantage that is shown.\\nOverall, this paper shows that the ideas from distributionally robust optimization work well in deep metric learning. In particular, the paper shows that by combining the DRO framework with simple loss functions, performance comparable with complicated loss functions can be obtained. This aspect, along with the generality are the main strong suits. That being said, I do not see this paper to be that significant of a contribution. The main idea in the paper seems like a rather direct application of the DRO modeling framework and it does not provide too significant of improvement over the MS loss. The paper was not written super clearly and was too long. Reviewers were instructed to apply a higher standard to papers in excess of 8 pages and this paper would have been presented more effectively if it was shorter. For these reasons, I recommended a weak reject.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors address the (increasingly popular) problem of learning a metric from a given multi-dimensional data set. They consider the deep metric learning setup, where target distances are defined as the euclidean distances in an artificial feature space (created by a deep neural network). Main focus of the paper is to cases where the data set is affected by a substantial imbalance between the amount of examples that are similar to each other and the total number of examples.\\n\\nI would tend to accept the paper because handling the imbalance problem in metric learning is important and both the theoretical analysis and the experiments show that the proposed method may have some impact.\\n\\nThe idea of reducing the problem to a binary classification between similar and dissimilar examples may look too simple but i) is a common approach in deep metric learning, ii) helps to handle the implicit imbalance problem and iii) suggests possible generalisations to other network-based problems (for example, where similarity is naturally defined by the existence of absence of a link). Showing that many complicated losses are equivalent to DRO may also help the general understanding of the metric learning task.\\n\\nMy main concerns are about the net contribution of the paper. Tackling the imbalance problem is important but it is not clear whether the full metric learning setup is really needed. The authors could have stated more precisely in what sense the metric learning unbalanced problem they consider is different from usual unbalanced binary classification. Otherwise, as DRO is well known, it is hard to identify the real novelty of their method.\", \"questions\": [\"how does the specific metric learning setup make the considered DRO different from usual unbalanced classification?\", \"how the network architecture affects the performance? For example, would the size of the embedding space change the recall/imbalance plot?\", \"Is the choice of euclidean distances standard in deep metric learning? Would a choice of more general distances be incorporated in the proposed method?\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a framework for deep metric learning. Using ideas from distributionally robust optimization, the loss (in each batch) is the worst case weighted average of all pairwise classification losses, taken over an uncertainty set of possible weights. The framework is shown to be general and encompass various previous approaches. Based on it, the authors propose several new algorithms, which are shown to outperform the SOTA on image retrieval data sets in terms of recall.\\n\\nThe main contribution of the paper is a unification of previous deep metric learning algorithms, which would be helpful to the community and could inspire new approaches. I found the empirical observation that the proposed algorithms are able to reduce the computation time by nearly half to be compelling. However, apart from DRO-TopK-PN, the proposed algorithms appear to be minor modifications of existing algorithms.\", \"questions_about_the_experimental_protocol\": \"1. Are the results from one run, or averaged over several? Standard errors of the evaluation metrics would be very helpful to judge the improvements made by the algorithms, especially as the algorithms are stochastic due to batching. \\n2. The proposed algorithms seem to be similar to those of Fan et al. (2017) and Namkoong and Duchi (2017). Is there a particular reason why they weren\\u2019t included in the experiments?\"}"
]
} |
r1e30AEKPr | A Group-Theoretic Framework for Knowledge Graph Embedding | [
"Tong Yang",
"Long Sha",
"Pengyu Hong"
] | We have rigorously proved the existence of a group algebraic structure hidden in relational knowledge embedding problems, which suggests that a group-based embedding framework is essential for model design. Our theoretical analysis explores only the intrinsic properties of the embedding problem itself, without introducing extra designs. Using the proposed framework, one could construct embedding models that naturally accommodate all possible local graph patterns, which are necessary for reproducing a complete graph from atomic knowledge triplets. We reconstruct many state-of-the-art models from the framework and re-interpret them as embeddings with different groups. Moreover, we propose new instantiation models using simple continuous non-abelian groups. | [
"group theory",
"knowledge graph embedding",
"representation learning"
] | Reject | https://openreview.net/pdf?id=r1e30AEKPr | https://openreview.net/forum?id=r1e30AEKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OV0riDsBfw",
"DDZb-1uWSu",
"Syl6_zJ8sH",
"ryxiA-1Ujr",
"r1xCB-1LiS",
"BkeQ-y18oS",
"SygVUgzHqH",
"SylHMCB0FH",
"HJg7z_3TKH",
"BJxGzC_j_S",
"Hkgbb7ej_S",
"SJeWWSp5OS",
"rJgxcMUYuS",
"SylRlWbFdB",
"B1eoGvA8_B",
"H1gpT8NrOS"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1577680629335,
1576798723471,
1573413493317,
1573413331253,
1573413189633,
1573412603296,
1572311116256,
1571868173185,
1571829770551,
1570635274174,
1570599672955,
1570587896926,
1570493064042,
1570472182264,
1570330386579,
1570223813282
],
"note_signatures": [
[
"~Hung-Nghiep_Tran1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1445/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1445/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1445/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"~Dai_Quoc_Nguyen1"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"~Chen_Cai1"
],
[
"ICLR.cc/2020/Conference/Paper1445/Authors"
],
[
"~Chen_Cai1"
]
],
"structured_content_str": [
"{\"title\": \"Another Quaternion-based knowledge graph embedding paper\", \"comment\": \"I would like to add that an earlier Quaternion-based model was also discovered in [1] from the perspective of weighted sum of multi-embedding trilinear products, which is the first one to my knowledge. We proposed a simple model using the Quaternion algebra (Sect. 3.4) and showed promising results, outperforming ComplEx on comparable settings (Sect. 6.3).\\n\\nThe results were published in an EDBT/ICDT workshop early 2019, but we did not follow this idea further because we were busy with a more advanced generalized idea and quite happy with it. We are glad to see that the NeurIPS paper nicely presents and confirms the advantage of Quaternion-based embedding, although there are still some rooms for improvement.\\n\\n[1] Analyzing Knowledge Graph Embedding Methods from a Multi-Embedding Interaction Perspective, https://arxiv.org/abs/1903.11406\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a rigorous mathematical framework for knowledge graph embedding. The paper received 3 reviews. R1 recommends Weak Reject based on concerns about the contributions of the paper; the authors, in their response, indicate that R1 may have been confused about what the contributions were meant to be. R2 initially recommended Reject, based on concerns that the paper was overselling its claims, and on the clarity and quality of writing. After the author response, R2 raised their score to Weak Reject but still felt that their main concerns had gone unanswered, and in particular that the authors seemed unwilling to tone down their claims. R3 recommends Weak Reject, indicating that they found the paper difficult to follow and gave some specific technical concerns. The authors, in their response, express confusion about R3's comments and suggest that R3 also did not understand the paper. However, in light of these unanimous Weak Reject reviews, we cannot recommend acceptance at this time. We understand that the authors may feel that some reviewers did not properly understand or appreciate the contribution, but all three reviewers are researchers working at highly-ranked institutions and thus are fairly representative of the attendees of ICLR; we hope that their points of confusion and concern, as reflected in their reviews, will help authors to clarify a revision of the paper for another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"response\", \"comment\": \"We thank the reviewer for the detailed review and would like to address the following points:\\n\\n\\n1.\", \"reviewer\": \"\\u201cFor the algorithm section, I feel that it is also lacking in the sense that there is still no automatic way to choose which group to embed. It is also unclear what is the purpose of the simulation section. While it says \\\"As theoretically analyzed in Section 3.2, and empirically shown above, continuous nonabelian groups are more reasonable choices for general tasks\\\", the advantage of continuous nonabelian groups are not so significant in the tables.\\u201d\", \"response\": \"To solve a generic KGE task, there are three stages:\\n1. What are the task\\u2019s requirements for the model?\\n2. What type of models can satisfy these requirements?\\n3. How to construct a specific model for practical usage?\\n\\nFor the first question, our analysis in Sec.3 finds the requirements that coincide with the definition of groups (this is the first work that formally proved this as we know); therefore, for the second one, group manifolds are natural choice for relational embedding model; for the third one, we have provided a general recipe to automatically construct a model as long as the embedding group is chosen. \\n\\nIn this sense, our work targets at the most general KGE tasks, rather than on a specific one. The reviewer\\u2019s concern about \\u201cwhich group to embed\\u201d is asking a detailed version of the second question. General KGE problems only restrict the embedding space from arbitrary spaces into group manifolds, while \\u201cwhich group is proper\\u201d depends on the details of specific tasks. \\n\\nBesides, with the most general concerns, we explained that the choice of groups could be further narrowed into the category of *continuous* *non-abelian* groups. Continuous groups are more efficient for gradient-based tasks, and the non-abelian nature could handle relation compositions which are not commutative, both of which concern general KGE problems.\\n\\nWhile these are all the restrictions one could derive for generic tasks, in practice, if more details of the given task are available, one could further restrict the choice of groups into a smaller category of groups. For example, if one knows in advance that all (or most) relation compositions are commutative in a specific task, then a larger non-abelian group would be redundant, and one could simply use an abelian sub-group, which is much smaller and computationally efficient.\"}",
"{\"title\": \"response-2\", \"comment\": \"2.\", \"reviewer\": \"Minor issues.\", \"response\": \"We thank the reviewer for pointing out minor issues, we will rephrase in the revised version.\"}",
"{\"title\": \"response-1\", \"comment\": \"We thank the reviewer for the detailed review and would like to address the following points:\\n\\n\\n1.\", \"reviewer\": \"\\u201cWhy would the set of relations be closed? Say, Owns \\\\cdot Spouse_of = ? I cannot see why and how this is necessary. Must all embedding methods be closed? I dispute that the paper proves that the definition of groups emerges purely from the nature of the knowledge graph embedding (KGE) task. I would actually say the opposite, it is clear to me that for most KGs, their relations do not fit the definition of a group. I am open to be (formally) proven wrong. E.g., can you list all relations in FB15K and show that they fit the group axioms?\\u201d\", \"response\": \"\", \"this_concern_is_related_to_an_important_difference_between_the_following_two_things\": \"the relation pattern in a specific KG and the structure of the relation embedding space. We have discussed this in a very careful way in Sec.3.2 of our manuscript. To put it in short: all properties mentioned in the paper, including closure, identity, inversion, and associativity, are desired properties of relation embedding spaces, rather than existing properties of specific knowledge graphs.\\n\\nIn our paper, we choose the proper words very carefully to address this difference. For the closure property, we phrased as: \\u201cTo allow the possible existence of composition, in general, the elements R1 \\u00b7 R2 should also be an element living in the same relation-embedding space\\u201d, which clearly suggests that it is a requirement for the *relation-embedding space* to accommodate *possible existence of composition* in specific KG datasets. Besides, we also explained explicitly, in the footnote on page.3, that \\u201cGiven a graph, not all compositions correspond to meaningful relations, but an embedding model should be able to capture this possibility in general\\u201d. \\n\\n================================\", \"below_we_give_a_more_detailed_discussion\": \"a. The relation patterns in a specific KG: \\nWe completely agree that for a specific KG, the meaningful relations existing in the graph may not form an exact group. However, it is clear that some relation patterns indeed can emerge from compositions and inversions in real KGs, which is the exact concern taken by a lot of proceeding studies, including RotatE, QuatE, and DihEdral. The problem is that, in a specific KG, these patterns, which we denote as super-relations, are not accessible to users/researchers, since it is not practical at all to enumerate every single pattern in a large KG.\\n\\nb. The structure of the relation embedding space: \\nWhen proposing a model for general KGE tasks, due to the difficulty of directly access all super-relations in every single task, one should try to accommodate all possibilities. Take the \\u2018closure\\u2019 as an example, in a specific KG dataset, one might not know if certain compositional relations are existing. Note that there are two types of\\u2018 compositions\\u2019: conceptual and mathematical ones. The conceptual one depends on the specific dataset; the mathematical one depends on the embedding method. To accommodate the possible existence of the conceptual composition r1\\u00b7 r2, whatever representations, say R1 and R2, are assigned to r1 and r2, the mathematical composition R1\\u00b7 R2 must also be contained in the relation embedding space. 
For instance, in RotatE, where each relation is a U(1) rotation, the composition of two U(1) elements are still contained in the U(1) group.\", \"prove_the_requirement_of_closure_by_contradiction\": \"Suppose abstract relation r1, r2 are embedded as R1, R2. Importantly, since the operation rule of a relation-embedding R and an entity-embedding E are already fixed, the mathematical composition rule between R1 and R2 has also been fixed: it is defined as operating by R1 first then by R2 subsequently. If the closure is absent (i.e. R1\\u00b7 R2 is not contained in the embedding space), this method would never be able to correctly represent the r1\\u00b7 r2. Therefore, the model would fail the KGE tasks where r1\\u00b7 r2 exists.\\n================================\"}",
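To make the closure requirement above concrete, the following small script (our own illustration, not code from any of the systems discussed) checks it for a RotatE-style embedding space, where each relation is a vector of unit complex phases and composition is elementwise multiplication; the composition of any two relation embeddings is again a point of the same space:

```python
import numpy as np

# RotatE-style relation embeddings: each relation is a vector of unit
# complex phases, and composition is elementwise multiplication.
rng = np.random.default_rng(0)
dim = 8
r1 = np.exp(1j * rng.uniform(0, 2 * np.pi, dim))
r2 = np.exp(1j * rng.uniform(0, 2 * np.pi, dim))

composed = r1 * r2  # candidate embedding for the composite relation

# Closure: the composition still consists of unit phases, i.e. it lies
# in the same relation-embedding space (the U(1)^dim torus).
assert np.allclose(np.abs(composed), 1.0)

# Applying the composite to an entity embedding equals applying r1 then r2.
e = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
assert np.allclose((e * r1) * r2, e * composed)
```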
"{\"title\": \"response\", \"comment\": \"We thank the reviewer for the review and would like to address the following points:\\n\\n\\n1.\", \"reviewer\": \"\\u201cGiven this overarching family of models, the authors proceed to identify existing models as certain choices of that family. I see little use in inventing this abstraction as the authors do not show any practical insights or interesting theoretical analysis that comes from this higher-level abstraction.\\u201d\", \"response\": \"We do not understand what \\u201cfamily\\u201d the reviewer is referring to. We would like to emphasize again: the properties in Sec.3.2, which happen to coincide with the mathematical definition of groups, are requirements of general KGE tasks, not restricted to any specific modeling structures. We offer a deep understanding of these requirements, following which, a natural choice for general KGE model would be using non-Abelian continuous groups. Our recipe in Sec.4 describes in detail how to construct the embedding model when a group is selected, which directly produces models for *practical usage*. Our experimental instantiation selected SU(2) as the embedding group and constructed the corresponding embedding model that can be used in practice, which finished a complete demonstration of the proposed framework. Without this understanding, there is no general recipe to systematically construct KGE models at all.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors approach the problem of representing knowledge graphs from a top-down perspective, in which they argue for certain fundamental desiderata for hyper-relations (composition, inversion) followed by properties that define a mathematical group.\\n\\nI found the paper extremely difficult to follow. As defined in Eq. 1, knowledge graph embeddings are a model family with a choice of domain for the entity and relation, and a choice of how that relation operates on a head entity. This means one can devise arbitrary properties and restrictions on that family. It's not clear to me what motivates selecting a (abelian) group, where inversion, closure, identity, associativity, and commutativity are demanded to be properties of knowledge graph embedding models. This seems more a definition of what models they consider, rather than a novel insight about knowledge graph embedding models itself (the authors claim \\\"we proved for the first time the emergence of a group definition in the KG representation learning\\\", which seems hard to wrap one head's around).\\n\\nGiven this overarching family of models, the authors proceed to identify existing models as certain choices of that family. I see little use in inventing this abstraction as the authors do not show any practical insights, or interesting theoretical analysis that comes from this higher-level abstraction.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper start merely by studying the graph reconstruction problem and prove that the intrinsic structure of this task itself automatically produces the complete definition of groups. it seems to be a novel result. Based on this result, one could construct embedding models that naturally accommodate all possible local graph patterns, and the paper also shows a few simulations.\\n\\nMy main concern is that, while the focus on this work is the theoretical finding, there is no rigorous statement of it as a theorem. As a result, I am not exactly sure what the proofs in the appendix is trying to show. In addition, the proofs seems to be very trivial.\\n\\nFor the algorithm section, I feel that it is also lacking in the sense that there is still no automatic way to choose which group to embed. It is also unclear what is the purpose of the simulation section. While it says \\\"As theoretically analyzed in Section 3.2, and empirically shown above, continuous nonabelian groups are more reasonable choices for general tasks\\\", the advantage of continuous nonabelian groups are not so significant in the tables.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"I would like to thank the authors for the detailed rebuttal.\\n\\nMy concern is generalizing something that is not correctly describing the object we should model (KGs).\\n\\nFor instance, \\\"all properties mentioned in the paper, including closure, identity, inversion, and associativity, are desired properties of relation embedding spaces, rather than existing properties of specific knowledge graphs. \\\" => should we not care about the actual relations of knowledge graphs? Should we not build a theory that closely matches the properties of the actual objects we are modeling? The argument seems to be: forget the actual knowledge graph, here is a generalization of existing KGE methods. Even the proof provided was unrelated to my question: I gave an example: can you list all relations in FB15K and prove that they fit the group axioms? If they don't fit, what is the advantage of modeling the relations as a group? What is the representation error this causes? \\n\\n\\nThe authors misunderstood my comment about (Bloem-Reddy and Teh 2019). \\\"Therefore the previous works mentioned by the reviewer above are irrelevant.\\\" => this is a weird statement. Permutation invariance is relevant to all objects that are graphs. I recommend reading the long history relating graph models and exchangeability (Persi Diaconis has a good overview). I was just mentioning that I don't see why KGs should have another associated permutation group besides permutation invariance. I would also recommend following the emerging literature in graph representation learning using group theory, of which KGs are but a special case.\\n\\n\\\"The (entity) manifold, however, means that the relations no longer form a group. I see no easy fix.\\\" => I clarified it, since I was talking about the entity embedding.\\n\\nI did not see a revised version of the manuscript.\\n\\n\\nI will raise my score because the paper could be published as a niche paper. We generalize KGE but we acknowledge its shortcomings: And here is a way to measure the embedding error of modeling the relations as a group, when they are not actually a group. \\n\\n--------------\\n\\nThe paper is well written and an interesting read. I think it complements well the existing literature. \\n\\nUnfortunately, I think the paper overstate its claims. It is clear that permutation groups are the natural language of all graphs (Kondor and Trivedi, 2018) and (Bloem-Reddy and Teh, 2019). It is less clear that knowledge graphs also comply with another set of group axioms. Why would the set of relations be closed? Say, Owns \\\\cdot Spouse_of = ? . I cannot see why and how this is necessary. Must all the relations be closed? I understand why this is true for KGE but this is not true for KGs, which only shows that KGEs may not be the right method to represent KGs moving forward. I see no easy fix. \\n\\nThe rest of the paper is straightforward, just applying the definitions of groups. I found the classification of different methods interesting, and should be made more clear in the experiments. \\n\\nThe experimental results are interesting but their significance is unclear. Please add standard deviations to all experiments. 
We cannot have a sense of the significance of the results without knowing how many runs were executed, how they were executed (e.g., k-fold cross-validation, bootstrapping), and their standard deviation.\\n\\nI dispute that the paper proves that the definition of groups emerges purely from the nature of the knowledge graph embedding (KGE) task. I would actually say the opposite, it is clear to me that for most KGs, their relations do not fit the definition of a group. I am open to be (formally) proven wrong. E.g., can you list all relations in FB15K and show that they fit the group axioms?\", \"fixing_the_paper\": \"Maybe the paper could be rewritten, constrained to Euclidean spaces? Then prove (formally) that the group axioms are a sufficient(?) and necessary(?) condition for such embeddings?\", \"minor_issues\": \"\", \"honestly_could_not_make_sense_of_the_abstract\": \"The sentence choices\\n- \\\"which suggests that a group-based embedding framework is essential for model design\\\" \\n- \\\"Our theoretical analysis explores merely the intrinsic property of the embedding problem itself without introducing extra design\\\"\\nin the abstract are very strange. It means nothing to a reader at that point. What a model design? What is an extra design? A modeling assumption? A model prior? \\n\\n\\\"Using the proposed framework, one could construct embedding models that naturally accommodate all possible local graph patterns, which are necessary for reproducing a complete graph from atomic knowledge triplets\\\" => what is an \\\"atomic knowledge triplet\\\"? Why are they necessary to reproduce a complete graph? This is all very confusing.\\n\\n\\\" contradicts to the entity regularization\\\" => contradicts the entity regularization\", \"references\": \"Kondor, R. and Trivedi, S., 2018. On the generalization of equivariance and convolution in neural networks to the action of compact groups. ICML\\nBloem-Reddy, B. and Teh, Y.W., 2019. Probabilistic symmetry and invariant neural networks. arXiv:1901.06082.\"}",
"{\"comment\": \"Thanks for pointing out the missing citation. We would definitely include it in the revision.\", \"title\": \"response\"}",
"{\"comment\": \"Hi,\\n\\nI saw that the TransE results on WN18RR and FB15k-237 in your paper are taken from the ConvKB paper [1] without citing it. It would be nice if you can cite the ConvKB paper. Otherwise, you simply do not include TransE as a baseline on these two datasets.\\n\\nBest,\\nDai.\\n\\n[1] A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. NAACL 2018.\", \"title\": \"Cite the ConvKB paper or simply do not include the TransE results on WN18RR and FB15K-237\"}",
"{\"comment\": \"In the above discussion, we explained the \\u201cunit quaternion puzzle\\u201d: unit-quaternions work much better than non-unit ones in relation-embeddings, as only unit ones are consistent with the group structure of SU(2). Now we provide an explanation for another confusing while important phenomenon in [1].\\n\\nIn the work of QuatE [1], there are in total three different number-systems mentioned: quaternions, octonions, and sedenions, all of which are extended from complex numbers. The quaternion based model QuatE has achieved great performance, while the octonion based model, namely, OctonionE, did not show any further promising improvement (Appendix 7.3 in [1]). Viewed from the number-system perspective, octonions (also sedenions) indeed should have been more expressive than quaternions. This is not consistent with the experimental results.\\n\\nWe now interpret these two models from our Graph Embedding Framework perspective. The QuatE model, as discussed in detail above, can be related to the SU(2) group embedding model; at the same time, the OctonionE can also be related to a group embedding model using Spin(8), although the relation mapping would be more complicated. Limiting to a discussion on the three major properties of a group: continuousness, commutativity, and compactness, there is no essential difference between SU(2) and Spin(8). Both models fall into the category of continuous non-Abelian group embeddings, hence, should process similar expressive power. This explains why there is no further improvement in OctonionE, when compared with QuatE.\\n\\n\\n[1] Quaternion Knowledge Graph Embeddings, Neurips 2019\", \"title\": \"Supplementary discussion\"}",
"{\"comment\": \"1. Our paper is delivering a systematic Group Embedding Framework, rather than a single model example. This framework is not equal to Quaternion Embedding (i.e. QuatE) [5] at all. Instead, QuatE [5], and equivalently our SU2E model, only serves as a demonstrating example of the framework; other examples include RotatE [1], TorusE [2], and DihEdral [3]. And in fact, we are implementing various other groups for KGE problems in our proceeding works right now.\\n\\n2. We don't \\u2018discover\\u2019 models by, for example, testing different number systems (complex, quaternion, octonion, sedenion) as in ComplEx [4], RotatE [1] and QuatE [5]. Following our framework, we can systematically construct them with a clear awareness of their expressive power. And in this sense, the work QuatE, in fact, demonstrates the power of continuous non-Abelian groups nicely.\\n\\n\\n[1] RotatE: Knowledge Graph Embedding by Relational Rotation In Complex Space, ICLR 2019\\n[2] TorusE: Knowledge graph embedding on a lie group. AAAI 2018\\n[3] Relation Embedding with Dihedral Group in Knowledge Graph, ACL 2019\\n[4] Complex embeddings for simple link prediction. ICML 2016\\n[5] Quaternion Knowledge Graph Embeddings, Neurips 2019\", \"title\": \"Clarification\"}",
"{\"comment\": \"Thanks for the detailed reply! I also realized that the method proposed in the paper is essentially the same as Quaterninon embedding. It's quite interesting to see that the same model is discovered twice from different perspectives.\", \"title\": \"Some further comments\"}",
"{\"comment\": \"We appreciate that the reviewer pointed out two relevant works [1, 4]. Since they are still under proceeding, they slipped out attention. We will cite them in our manuscript. Below, we discuss the major differences between our work and them.\\n\\nThe work in [1] nicely introduced the mathematical definition of groups along with related important mathematical concepts. With such concepts, the authors [1] provide an alternative theoretical explanation for RotatE [2] in the field of abelian group embeddings, which is a subclass in our proposed group embedding framework. In our manuscript, we cited DihEdral [2] (see Section 2 on comparing related works) that provided a gentle introduction to groups. [1] enriches the discussion with more mathematical ingredients including representation theory, Schur\\u2019s Lemma, etc.\\n\\nWe would like to emphasize a major contribution in our work, compared with both [1] and [3]. Our theory does not aim at *introducing* a group perspective. Instead, we *proved* the existence of the definition of groups, merged purely from the nature of the knowledge graph embedding (KGE) task. Following our findings, group theory becomes the essential language for the problem rather than simply an alternative perspective for improving KGE models. That is our theory lays down a unified theoretical foundation for the KGE research.\\n\\nThe work in [4] mentioned by the reviewer is very interesting. The authors proposed to use quaternions (also octonions and sedenions in the appendix), which served as an extension of complex numbers, to improve performance. The intuition of their model was not initiated from group theory. However, in Model Analysis (Section 5.3 in [4]), the authors stated that: \\n\\n \\u201cNormalizing the relation to unit quaternion is a critical step for the embedding performance. This is likely because scaling effects in non-unit quaternions are detrimental.\\u201d \\n\\nHowever, an explicit reason is absent. After a thorough investigation, we realized quite an interesting fact: there is a mathematical correspondence between unit quaternions and SU(2) group used in our work. More rigorously, SU(2) group is an isomorphism [5] of unit quaternions. This explains the necessity of applying unit quaternions: only unit quaternions are consistent with the group structure, while non-unit ones cannot. The QuatE model [4] embeds relations with unit quaternions, each of which can be mapped to a SU(2) matrix element in our work. Therefore, one could construct a one-to-one mapping, from the exampling model SU2E in our work to QuatE model in [4]. This indicates that QuatE model falls into the category of continuous non-Abelian group embedding (Section 3.3 Table 1 in our work) thus has the potential to outperform simpler groups (e.g. group U(1) in RotatE [2]). Indeed, QuetE model achieved great performance on Freebase and WordNet experiments. We believe some of the performance boost might be rooted in the implementation details (including negative sampling and self-advisory approach, regularizations, value function forms and other hyperparameter setups), which is out of the scope of our work. (Our implementation codebase is derived from the public repository of [2].) Although we do not have an opportunity to replicate the result due to the recency of the work in [4] and not able to access to its codes, the performance advantage of QuatE model is in line with our proposed framework. 
It would be really interesting to adopt the QuatE implementation on other promising groups in the future.\\n\\nFinally, we want to highlight that the SU2E model serves only as an instantiation of our proposed modeling framework, which provides a generic recipe for constructing KGE models. This exhibits instructive guidelines for designing more general KGE models in future works and may motivate further theoretical analysis based on group theory.\\n\\n\\n[1] Group Representation Theory for Knowledge Graph Embedding, Neurips workshop 2019\\n[2] RotatE: Knowledge Graph Embedding by Relational Rotation In Complex Space, ICLR 2019\\n[3] Relation Embedding with Dihedral Group in Knowledge Graph, ACL 2019\\n[4] Quaternion Knowledge Graph Embeddings, Neurips 2019\\n[5] https://en.wikipedia.org/wiki/Quaternion\", \"title\": \"response\"}",
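The unit-quaternion/SU(2) correspondence invoked above is easy to verify numerically. Below is a small self-contained check (our own illustration, not code from [4] or from the SU2E codebase) of the standard map q = a + bi + cj + dk -> [[a+bi, c+di], [-c+di, a-bi]], which turns quaternion products into matrix products and sends unit quaternions to unitary matrices of determinant one:

```python
import numpy as np

def hamilton(p, q):
    # Hamilton product of quaternions given as (a, b, c, d) arrays.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

def to_su2(q):
    # Standard embedding of a quaternion as a 2x2 complex matrix.
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

rng = np.random.default_rng(0)
p = rng.standard_normal(4); p /= np.linalg.norm(p)  # unit quaternions
q = rng.standard_normal(4); q /= np.linalg.norm(q)

# Multiplicative: the map turns quaternion products into matrix products.
assert np.allclose(to_su2(hamilton(p, q)), to_su2(p) @ to_su2(q))
# Unit quaternions land in SU(2): unitary with determinant one.
U = to_su2(p)
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.isclose(np.linalg.det(U), 1.0)
```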
"{\"title\": \"Relevant paper\", \"comment\": \"Hello,\\n\\nIt is very interesting to see that you have a very similar idea about introducing group theory into KGE. I also wrote a small paper connecting group representation theory with KGE, which is recently accepted in NeurIPS graph representation workshop. \\n\\nBest,\\nChen \\n\\nGroup Representation Theory for Knowledge Graph Embedding\", \"https\": \"//grlearning.github.io/papers/15.pdf\\nAnother relevant paper (NeurIPS 2019) is Quaternion Knowledge Graph Embeddings https://arxiv.org/pdf/1904.10281.pdf\"}"
]
} |
SJloA0EYDr | A⋆MCTS: SEARCH WITH THEORETICAL GUARANTEE USING POLICY AND VALUE FUNCTIONS | [
"Xian Wu",
"Yuandong Tian",
"Lexing Ying"
] | Combined with policy and value neural networks, Monte Carlo Tree Search (MCTS) is a critical component of the recent success of AI agents in learning to play board games like Chess and Go (Silver et al., 2017). However, the theoretical foundations of MCTS with policy and value networks remain open. Inspired by MCTS, we propose A⋆MCTS, a novel search algorithm that uses both the policy and value predictors to guide search and enjoys theoretical guarantees. Specifically, assuming that the value and policy networks give reasonably accurate signals of the values of each state and action, the sample complexity (number of calls to the value network) to estimate the value of the current state, as well as the optimal one-step action to take from the current state, can be bounded. We apply our theoretical framework to different models for the noise distribution of the policy and value networks as well as the distribution of rewards, and show that for these general models, the sample complexity is polynomial in D, where D is the depth of the search tree. Empirically, our method outperforms MCTS in these models. | [
"tree search",
"reinforcement learning",
"value neural network",
"policy neural network"
] | Reject | https://openreview.net/pdf?id=SJloA0EYDr | https://openreview.net/forum?id=SJloA0EYDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"L34Ot-8mE",
"SylklJK5sr",
"r1g5CtzZjr",
"HygnvKMZjS",
"r1leI_GboS",
"Bkefd1gj9r",
"S1lF9obI9r",
"SkeXSQ86tr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723443,
1573715686738,
1573099986026,
1573099876260,
1573099592133,
1572695913708,
1572375441140,
1571803962998
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1444/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1444/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1444/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposed an extension of the Monte Carlos Tree Search to find the optimal policy. The method combines A* and MCTS algorithms to prioritize the state to be explored. Compare with traditional MCTS based on UCT, A* MCTS seem to perform better.\\n\\nOne concern of the reviewers is the paper's presentation, which is hard to follow. The second concern is the strong restriction of assumption, which make the setting too simple and unrealistic. The rebuttal did not fully address these problems.\\n\\nThis paper needs further polish to meet the standard of ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"rebuttal\", \"comment\": \"Summary of revisions to the manuscript\\n\\n1) moved the proofs to the appendix\\n2) added an intuition paragraph to each section\\n3) added illustrations to further explain the main techniques and the models\\n4) added a notation table to summarize notations used\\n5) added explanations in the main contributions section to clarify the deterministic nature of the noisy value and policy networks and explain at a high level our algorithmic ideas.\"}",
"{\"title\": \"thanks for your comments\", \"comment\": \"We think that this reviewer misunderstood our problem setting. The value network is a pre-trained and *deterministic* function that outputs the same value estimate for a particular node, no matter how many times it is called. This is standard in applications like AlphaZero. In our formulation, the noise random variable $X_d$ was instantiated once at each particular node and is fixed afterwards. Therefore, one cannot call the value network repeatedly on the same node and average over the results to achieve the true value. Instead, one must expand further down to children nodes, because the value network estimates for the children nodes are less noisy.\\n\\nThe finite horizon Markov Decision tree is standard in applications, in our opinion. We are not aware of any study on infinite horizon Markov Decision trees. \\n\\nWe are not sure what you mean by \\u201cno backtracking\\u201d. If you mean the averaging operation following the parents from a new node in the regular MCTS, our algorithm also has it, by always replacing the old value of a node with the new one from more accurate child estimates. \\n\\nThe transition and reward assumptions are valid for a lot of applications, ie games. These assumptions are also a good starting point on which to develop our results because they make the problem simpler while preserving the main challenges that we want to tackle. The value network that we assume in our paper is increasingly becoming an integral part of deep reinforcement learning.\"}",
"{\"title\": \"thanks for your comments\", \"comment\": \"We address the concerns in the \\\"Cons\\\" section in order.\\n\\n1. [Our scheme vs MCTS] We hypothesize that the averaging in the back-propagation step in MCTS is ineffective, since the goal is to estimate the value of the optimal policy, any averaging will inevitably bring down this value. Note that back propagation in classical value iteration, for example, takes the max, rather than the average of the different Q values. Our technique does not use the averaging scheme in MCTS. \\n\\nHowever, we are hesitant to put such a big emphasis on this intuition because there is little theoretical understanding behind MCTS and UCT applied on the entire search tree. It could be possible that the averaging could alleviate some issues in MCTS (e.g., when the value function of a node over-estimates). While we leave this to future work, the main goal of our work is to provide a tree search algorithm with provable guarantees that also works well in practice, and we think that our contribution is valuable to the ICLR community. \\n\\n2. Line 8 of Algorithm 2 has an \\u201cor\\u201d condition which is crucial, we do not just add the top 3 child nodes. Depending on the probabilities given by the policy network, we may add all the children nodes to the queue. We add the top 2 (ordered by the probabilities given by the policy network) to the queue by default. For the k-th child node, where k > 2, we add this child if there is a small gap between the probability of the k-1-th child and the probability of the top child, as given by the policy network, where the threshold for the gap is given by the noise model. Intuitively, we are saying that if the gap is sufficiently big, even after accounting for the noise in the policy network, there is no way that this k-th child is part of the optimal policy, so we do not have to add it to the queue. The proof for Theorem 2 establishes in Section 4.1 why our condition is sufficient. We can add more explanation for this. \\n\\n3. The complexity bound in Theorem 1 is actually based on one very simple observation, which we explain in the proof of Theorem 1. A sub-optimal internal node s is chosen if V^* <= U_s + c_d = V_s + X_s + c_d. U_s is the value network estimate for state s, c_d is the upper bound on the possible error for the value estimate. So we end up choosing a sub-optimal node if the value network estimate plus the upper bound on the possible error (optimism) is higher than the true optimal value. Otherwise we will not choose to expand that node, since we are sure that it will not be the optimal node. U_s is equal to V_s (true value of node s) plus X_s, which is the noise random variable. V^* - V_s is the expression for the gap of state s, therefore we end up with the expression in Theorem 1. Notice that we should also account for the ancestors of s, if we are lucky enough to be able to rule out one of the ancestors of s, we would never expand beyond that ancestor to reach s, and s would not be in our priority queue. We can give more explanation (and/or provide graphical illustration) on this in the manuscript, and we can introduce the concrete examples earlier. \\n\\n4. [On experiments] We give a general analysis for A*MCTS in our main theorems that can be applied to a broad range of problems and models. In this paper, our main focus is on developing a principled search algorithm and providing provable theoretical guarantees, which is completely lacking in this space. 
The selection of models is not an emphasis of our paper, they mainly offer a sanity check that our algorithm also performs well in practice. That being said, we will definitely perform more elaborate experiments in the real situations (e.g., Atari or even self-play in AlphaZero) in the future work. \\n\\n5. We can handle the case where there is error even at the leaf node. If there is error even at the leaf node, then our scheme would produce an approximation of the optimal value and an approximately optimal policy, this would be exactly similar to the results in Section 5. In Section 5, we consider the use case where we are willing to tolerate an approximately optimal policy and an approximately optimal value estimate, this basically means that we are willing to stop at a depth $\\\\tilde D$, where the noise at level $\\\\tilde D$ is smaller than the approximation tolerance. When there is error even at the leaf nodes, then we will find an approximately optimal policy and value estimate, where the approximation depends on the error at the leaf nodes. It also makes sense, intuitively, that getting an approximation is the best that one can hope for when there is error even at the leaf nodes.\"}",
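To make point 3 above concrete, here is a minimal sketch of the optimistic best-first expansion it describes: a node s keeps being considered only while its optimistic value U_s + c_d is at least the best value found so far. The node interface (depth, is_leaf(), children()), the value_net callable, and the stopping rule are our illustrative assumptions and do not reproduce the paper's exact Algorithms 1 and 2:

```python
import heapq
import itertools

def optimistic_search(root, value_net, c, max_expansions):
    # Best-first expansion sketch: always pop the node with the largest
    # optimistic score U_s + c_d, where U_s is the fixed, deterministic
    # value-network estimate and c[d] bounds the estimation error at depth d.
    tie = itertools.count()  # tie-breaker so nodes are never compared
    heap = [(-(value_net(root) + c[root.depth]), next(tie), root)]
    best_leaf_value = float("-inf")
    for _ in range(max_expansions):
        if not heap:
            break
        neg_score, _, node = heapq.heappop(heap)
        if -neg_score <= best_leaf_value:
            break  # even the optimistic bound cannot beat the best leaf
        if node.is_leaf():
            # Leaf estimates are noiseless in the paper's model.
            best_leaf_value = max(best_leaf_value, value_net(node))
            continue
        for child in node.children():
            # Child estimates are less noisy, so expansion refines the value.
            heapq.heappush(
                heap, (-(value_net(child) + c[child.depth]), next(tie), child))
    return best_leaf_value
```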
"{\"title\": \"thanks for your comments\", \"comment\": \"Our main focus is on developing a principled prioritized search algorithm with a deep learning component (e.g., Monte Carlo Tree Search algorithm with policy/value network) with provable theoretical guarantees, which is currently lacking in this space. The selection of models is not an emphasis of our paper, they mainly offer a sanity check that our algorithm also performs well in practice. We set the parameters for the tree so that we would have a reasonable model that could offer a valid sanity check.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents the search algorithm A*MCTS to find the optimal policies for problems in Reinforcement Learning. In particular, A*MCTS combines the A* and MCTS algorithms to use the pre-trained value networks for facilitating the exploration and making optimal decisions. A*MCTS refers the value network as a black box and builds a statistical model for the prediction accuracies, which provides theoretical guarantees for the sample complexity. The experiments verify the effectiveness of the proposed A*MCTS.\\n\\nIn summary, I think the proposed A*MCTS algorithm is promising to push the frontier of studies of the tree search for optimal actions in RL. But the experiments should be improved to illustrate the reasons for the hyper-param setting. For example, in Sec. 6.2, the authors should give some explanations on why the depth of the tree is set as 10 and the number of children per state is set as 5.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes A*MCTS, which combines A* and MCTS with policy and value networks to prioritize the next state to be explored. It further establishes the sample complexity to determine optimal actions. Experimental results validate the theoretical analysis and demonstrate the effectiveness of A*MCTS over benchmark MCTS algorithms with value and policy networks.\", \"pros\": \"This paper presents the first study of tree search for optimal actions in the presence of pretrained value and policy networks. And it combines A* search with MCTS to improve the performance over the traditional MCTS approaches based on UCT or PUCT tree policies. Experimental results show that the proposed algorithm outperform the MCTS algorithms.\", \"cons\": \"However, there are several issues that should be addressed including the presentation of the paper:\\n\\u2022\\tThe algorithm seeks to combine A* search with MCTS (combined with policy and value networks), and is shown to outperform the baseline MCTS method. However, it does not clearly explain the key insights of why it could perform better. For example, what kind of additional benefit will it bring when integrating the priority queue into the MCTS algorithms? How could it improve over the traditional tree policy (e.g., UCT) for the selection step in MCTS? These discussions are critical to understand the merit of the proposed algorithms. In addition, more experimental analysis should also be presented to support why such a combination is the key contribution to the performance gain.\\n\\u2022\\tMany design choices for the algorithms are not clearly explained. For example, in line 8 of Algorithm 2, why only the top 3 child nodes are added to the queue?\\n\\u2022\\tThe complexity bound in Theorem 1 is hard to understand. It does not give the explicit relations of the sample complexity with respect to different quantities in the algorithms. In particular, the probability in the second term of Theorem 1 is hard to parse. The authors need to give more discussion and explanation about it. This is also the case for Theorems 2-4. The authors give some concrete examples in Section 6.2 for these bounds. However, it would be better to have some discussion earlier right after these theorems are presented.\\n\\u2022\\tThe experimental results are carried out under the very simplified settings for both the proposed algorithm and the baseline MCTS. In fact, it is performed under the exact assumption where the theoretical analysis is done for the A*MCTS. This may bring some advantage for the proposed algorithm. It is not clear whether such assumptions hold for practical problems. More convincing experimental comparison should be done under real environment such as Atari games (by using the simulator as the environment model as shown in [Guo et al 2014] \\u201cDeep learning for real-time atari game play using offline monte-carlo tree search planning\\u201d).\", \"other_comments\": \"\\u2022\\tIt is assumed that the noise of value and policy network is zero at the leaf node. In practice, this is not true because even at the leaf node the value could still be estimated by an inaccurate value network (e.g., AlphaGo or AlphaZero). 
How would this affect the results?\\n\\u2022\\tIn fact, the proofs of the theorems could be moved to the appendices.\\n\\u2022\\tIn the first paragraph of Section 6.2, there is a typo: V*=V_{l*}=\\\\eta should be V*-V_{l*}=\\\\eta ?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a novel search algorithm that uses the policy and value predictors to guide search and provides theoretical guarantee on the sample complexity. The aim is to estimate the optimal value of an initial state as well as the one-step optimal action to take.\\nThe algorithm uses a priority queue to store all states being visited so far and picks the most optimistic one to expand, according to an upper confidence bound heuristic function. The algorithm assumes access to pre-trained value and policy networks and it uses calls to these networks to prioritize the next state to be explored.\", \"the_authors_consider_a_very_restrictive_setting\": [\"Finite horizon Markov decision tree: no backtrack\", \"No intermediate reward and only reward at the end of the episode.\", \"Deterministic transition\", \"Importantly, access to value network that gives noisy estimates of the optimal value function\", \"The noise model is additive and i.i.d and satisfies a concentration inequality\", \"All this assumption makes the setting very simple and unrealistic. Moreover, I think we can frame the problem into bandit problem and solve it easily with sample complexity independent of the horizon D.\", \"In fact, given an initial state s, we consider the K possible actions a_1, a_2, \\u2026, a_K that lead deterministically to next states (r_1, r_2, \\u2026, r_K). As the intermediate reward is zero, the state-action value of (s, a_k) is equal to V_{r_k}. As we have noisy estimates of V_{r_k} and we know precisely the noise model, we can run UCB-like algorithm for multi-armed bandit where each arm corresponds to action a_k and expected reward correspond to V_{r_k}. This determines the optimal action in constant time with respect to the horizon.\"]}"
]
} |
SkgsACVKPH | Picking Winning Tickets Before Training by Preserving Gradient Flow | [
"Chaoqi Wang",
"Guodong Zhang",
"Roger Grosse"
] | Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. Our code is made public at: https://github.com/alecwangcq/GraSP. | [
"neural network",
"pruning before training",
"weight pruning"
] | Accept (Poster) | https://openreview.net/pdf?id=SkgsACVKPH | https://openreview.net/forum?id=SkgsACVKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Vb-xRUGnoP",
"rJeE_952sS",
"HyezXScnsH",
"HyeCnTKnsH",
"HJxYY6OhsH",
"ByxUSu5soH",
"S1xyP07isS",
"BJl1D6WiiB",
"HJxKfTbisB",
"BJlWGFWsjr",
"Syen2_ZjiS",
"S1eT_9uVjr",
"SJg-rWYfiB",
"BJlsJthO5S",
"S1e9hXy0FS",
"HkgFzZMcFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723416,
1573853803701,
1573852441718,
1573850549803,
1573846400999,
1573787709645,
1573760599373,
1573752151275,
1573752080661,
1573751048822,
1573750964328,
1573321332561,
1573191993263,
1572550882844,
1571840945906,
1571590417202
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1443/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1443/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1443/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1443/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a method to improve the training of sparse network by ensuring the gradient is preserved at initialization. The reviewers found that the approach was well motivated and well explained. The experimental evaluation considers challenging benchmarks such as Imagenet and includes strong baselines.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We've updated the paper.\", \"comment\": \"We've updated the paper to include three DST baselines and separate DST methods from *Pruning during training* ones in related works. However, we haven't finished the experiments of DeepR on Tiny-ImageNet. We will add it to our camera-ready version if our paper gets accepted.\\n\\nWe thank again for your detailed and constructive comments.\"}",
"{\"title\": \"Further Response\", \"comment\": \"Thank you for your reply and continuing effort to provide constructive feedback until the end of the response/discussion phase.\\n\\n1) Because GraSP shouldn\\u2019t increase the scale, so increasing the gradient norm is more likely to be achieved by aligning larger eigenvalues of the NTK with the target. We strongly agree with the reviewer that large gradient norm might not be desirable, but it can typically be handled with small learning rate. If the reviewer is comfortable with our statement about NTK, we will modify it in our camera-ready version if our paper gets accepted.\\n\\n2) The goal of pruning is to reduce the model size while with minimal loss in test accuracy, it does not matter the resulting model overfits or underfits the training data. Therefore, the common strategy for comparing pruning algorithms is to compare the trade-off between model size and test accuracy achieved by each algorithm. Also, underfitting is common for high pruning ratios, but different pruning algorithms always result in drastically different empirical performance, and usually better algorithm will achieve better performance. For example, algorithms such as DSR, SET and DeepR, they all underfit the training data for pruning ratio 98% on CIFAR10 with ResNet32, but DSR can achieve significantly better results than the other two. We totally agree we can design heuristics to improve all of these pruning algorithms.\\n\\n-------------------------------------------------------------------------------------------------------------------------------------\\n\\nFinally, we argue that single-shot pruning is promising and offers a new way to speed up network training and inference. This area is new but we believe it will be a very impactful research direction. \\nMore importantly, unlike other traditional pruning algorithms, it has a deep connection with neural network training dynamics and our work may be of independent interest for deep learning theory community. Particularly, large gradient norm indicates big stiffness/gradient confusion (assuming we don't change the scale, see the following references), which seems to correlate with good generalization performance across different tasks.\", \"https\": \"//openreview.net/pdf?id=ryeFY0EFwS\\n\\nAnother potential but promising application of single-shot pruning is to select a big winning ticket (with similar size of standard neural networks) from a gigantic network which cannot fit in our hardware for training. As shown by recent papers in deep learning theory, over-parameterization leads to better generalization, therefore the winning ticket from a gigantic network may perform better than standard neural networks of same size.\"}",
"{\"title\": \"Quick response\", \"comment\": \"Thank you for your reply. We sincerely appreciate your valuable comments.\\n\\nThank you for correcting us in terms of DST algorithms. We will take your suggestions in our new revision.\\n\\n(A) Currently, we\\u2019re not aware of such guarantee. But if the sparsity pattern does not change during training, this might ease the difficulty of designing dedicated hardware for achieve accelerations.\\n\\n(B) It seems that there is no GPU acceleration library for sparse tensors (Dettmers & Zettlemoyer, 2019). However, we cannot exclude the possibility of it in the future. CPU is in general not capable of dealing with massive computations, and even for sparse networks, it is still requires a large amount of computations.\\n\\n(C) We\\u2019re now working on a new revision to include those results and will try our best to update to openreview before the rebuttal deadline. \\n\\nFinally, we\\u2019d like to note that our single-shot pruning algorithm may be of independent interest for deep learning theory community. Lottery ticket hypothesis (they showed that there exist winning tickets at the initialization) paper opened a new line of research in understanding neural network dynamics. However, in the original lottery ticket hypothesis paper, they had to use *pruning after training* method to identify the ticket. Our paper is trying to show that we are able to identify them before training and push it to the limit. We believe it may have some inspirations for other researchers in understanding deep learning and neural network training.\\n\\nWe also believe our pruning criteria has some deep connections with generalization performance of neural networks. Particularly, large gradient norm indicates big stiffness/gradient confusion (assuming we don't change the scale, see the following references), which seems to correlate with good generalization performance across different tasks.\", \"https\": \"//openreview.net/pdf?id=ryeFY0EFwS\\n\\n\\n------------------------------------------------------------------------------------------------------------------------------\\nTim Dettmers and Luke Zettlemoyer. Sparse networks from scratch: Faster training without losingperformance.arXiv preprint arXiv:1907.04840, 2019.\"}",
"{\"title\": \"Few Clarifications Needed\", \"comment\": \"Naming Dynamic Sparse Training methods as 'Pruning during Training' can be misleading. Since most of the pruning algorithms start with a dense network and prune the networks during training. I would suggest naming those methods as 'Dynamic Sparse Training'(DST) methods. These methods start with a predefined sparsity, but unlike `One Shot pruning` algorithms, they do change the connectivity of layers during training. This has been shown to improve performance over keeping the connectivity static.\\n\\n\\\"Our goal is to propose an improved pruning algorithm which can conduct pruning before training. In this sense, we don\\u2019t think those sparse training baselines relevant enough. \\\" \\n\\nI disagree with this. Methods come with the problem they address. DST algorithms are very relevant to your algorithm, since they attack the same problem: *Training sparse neural networks* with the same goal: *Reducing training cost*. They would benefit same improvements in terms of FLOPs as GrasP since they do start from a *predefined sparsity* (term taken from Dey et.al. 2019).\\n\\nI think the comparison you provided for DST and One-shot-pruning algorithms is a great start. Dey et.al.'s paper is very interesting. They talk about hardware friendly, clash-free, sparse connectivity patterns and show that such patterns perform just as good as any random pattern (static training). There is no guarantee that GrasP would find such clash-free connectivity. Please correct me if I am wrong (A). I am not an expert on hardware, but it is also not obvious to me that FPGA's would be the choice of hardware for training in the future(it is currently not). What about CPU, GPU acceleration of sparse neural network training? (B)\\n\\nThanks for running extra experiments. I appreciate your work. Results show that GrasP performs worse than some of the DST methods (which is fine). As discussed by the authors there might be some settings where static sparsity is preferred (still not sure about this) and not all papers should get SOTA for the problem they attack. However, they should make a fair comparison with relevant methods.\\n\\nI don't see any of comparisons and numbers provided below in the revised paper. Are the authors planning adding those results in the next version? Similarly I believe experiments done for '(4) Usefulness of the pruning criteria' would be a great addition to your work. (C) \\n\\nA quick response to questions (A), (B), (C) would be helpful.\"}",
"{\"title\": \"Thank you for your rebuttal\", \"comment\": \"Thank you for your rebuttal.\\n\\nAd 1) I agree that GraSP shouldn't increase the scale. But my argument was actually mainly just to highlight a weird implication of the theory. The theoretical argument is equivalent to saying that large gradient norm is desirable, but this is clearly not true generally speaking; gradient explosion is not desirable in the general case. \\n\\nI think the argument should be constructed around a standard metric in optimization such as conditioning of the Hessian. The norm of the NTK kernel is clearly not a standard way to argue for an optimization benefit. Maybe GraSP doesn't change the scale, but reduces gradient confusion, or other related metric? Or maybe it improves conditioning of the NTK kernel? Either argument would be more convincing from the optimization perspective.\\n\\nAd 2) Thank you for checking different learning rates. \\n\\nIf the primary reason that GraSP outperforms SNP at high prunning ratios is that SNP underfits, I am not sure this is a novel enough contribution. Perhaps there are some simple heuristics that would reduce change that SNP picks some critical connections? \\n\\nI am not an expert in the field. In the case Reviewer #4 thinks that this is a strong enough contribution, I would be OK accepting the submission.\"}",
"{\"title\": \"To all reviewers\", \"comment\": \"We thank all reviewers again for your detailed comments and constructive suggestions.\\n\\nWe've updated our paper and responded to you. We really sorry for the late response to reviewer #4, it took many days for us to run experiments you requested. We hope that our responses address your concerns. If so, it would be great if you can update your review and rating. But if not, we're open to answer more questions and further improve our paper\"}",
"{\"title\": \"Response to reviewer #4 [1/4]\", \"comment\": [\"Thanks for your detailed reviews. We really appreciate your time for reviewing our paper carefully.\", \"Before moving on to answer your questions and comments, we\\u2019d like to first clarify the focus of this work. Our goal is to propose an improved pruning algorithm which can conduct pruning before training. In this sense, we don\\u2019t think those sparse training baselines relevant enough. To the best of our knowledge, the only existing baseline is SNIP (Lee et al., 2019). Other algorithms such as OBD (LeCun et al., 1990) and MLPrune (Zeng & Urtasun, 2019) in our paper are serving as upper bound for single-shot pruning algorithm and the main reason to include them is to inform readers of the gap between pruning before and after training. To address your concerns, we run experiments with the methods you mentioned. However, we note again that those methods only serve as the upper bound and are not really \\u201cimportant\\u201d baselines.\", \"[A] Regarding the sparse training baselines you mentioned, we would like to clarify the difference between \\u2018Pruning before Training\\u2019 and \\u2018Pruning during Training\\u2019.\", \"(a) \\u2018Pruning during Training\\u2019 methods, such as SET (Mocanu et al., 2018) and DSR (Mostafa & Wang, 2019), need to redistribute the weights during training by different heuristics. As shown in Dey et al. (2019), with *pre-defined sparsity*, the training can be accelerated by 5X, and the speedup performance can be optimized in the hardware level by programming the sparse structure using FPGA in advance of training. However, SET and DSR need to change the sparse structures during the whole training process, and thus it is unclear for those methods to enjoy the potential acceleration from hardwares.\", \"(b) \\u2018Pruning during Training\\u2019 methods change the standard training procedure because they need to redistribute the weights during training, which makes it more complicated than \\u2018Pruning before Training\\u2019.\", \"(c) \\u2018Pruning during Training\\u2019 methods enjoy more flexibility than \\u2018Pruning before training\\u2019 as they have the freedom to change the sparsity during training. However, this results in the problem of (a) and (b).\", \"For other detailed differences, please refer to the ICLR2020 submission at: https://openreview.net/pdf?id=SJem8lSFwB\", \"----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"Lee, Namhoon, Thalaiyasingam Ajanthan, and Philip HS Torr. \\\"Snip: Single-shot network pruning based on connection sensitivity.\\\" ICLR 2019.\", \"LeCun, Yann, John S. Denker, and Sara A. Solla. \\\"Optimal brain damage.\\\" Advances in neural information processing systems. 1990.\", \"Zeng, W. and Urtasun, R. MLPrune: Multi-layer pruning for automated neural network compression, 2019. URL https://openreview.net/forum? id=r1g5b2RcKm.\", \"Mocanu, Decebal Constantin, et al. \\\"Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.\\\" Nature communications 9.1 (2018): 2383.\", \"Mostafa, Hesham, and Xin Wang. \\\"Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization.\\\" ICML 2019.\", \"-Sourya Dey, Kuan-Wen Huang, Peter A Beerel, and Keith M Chugg. 
\\\"Pre-defined sparse neural networks with hardware acceleration.\\\" IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019.\"]}",
"{\"title\": \"Response to reviewer #4 [2/4]\", \"comment\": \"(1-2) Include more baselines. (DSR, SET, DeepR ); Pruning baselines can be improved. I am not convinced that they represent the best achievable sparse training results.\\n\\nWe really thank the reviewer for pointing out these sparse training papers and related concurrent submissions. We have updated the paper to include them in the section of related works. We also agree with the reviewer that including those sparse training methods, DSR, SET and DeepR, will greatly improve the experiments section. Therefore, we adopt the public implementation from https://github.com/IntelAI/dynamic-reparameterization for the experiments with DSR, SET and DeepR (Bellec et al., 2018). Specifically, we test them on three datasets (CIFAR-10, CIFAR-100 and TinyImageNet) with two networks (VGG19 and ResNet32). The results are presented in the following:\\n\\n# ResNet32 on CIFAR10\\n+-----------+----------------+----------------+----------------+\\n| Ratio | 90% | 95% | 98% | \\n+-----------+----------------+----------------+----------------+\\n| DSR | 92.97 | 91.61 | 88.46 | \\n+-----------+----------------+----------------+----------------+\\n| SET | 92.30 | 90.76 | 88.29 | \\n+-----------+----------------+----------------+----------------+\\n| DeepR | 91.62 | 89.84 | 86.45 | \\n+-----------+----------------+----------------+----------------+\\n| Grasp | 92.38(0.2) | 91.39(0.3) | 88.81(0.1) |\\n+-----------+----------------+----------------+----------------+\\n\\n# ResNet32 on CIFAR100\\n+-----------+----------------+----------------+----------------+\\n| Ratio | 90% | 95% | 98% | \\n+-----------+----------------+----------------+----------------+\\n| DSR | 69.63 | 68.20 | 61.24 | \\n+-----------+----------------+----------------+----------------+\\n| SET | 69.66 | 67.41 | 62.25 | \\n+-----------+----------------+----------------+----------------+\\n| DeepR | 66.78 | 63.90 | 58.47 | \\n+-----------+----------------+----------------+----------------+\\n| Grasp | 69.24(0.2) | 66.50(0.1) | 58.43(0.4) |\\n+-----------+----------------+----------------+----------------+\\n\\n# VGG19 on CIFAR10\\n+-----------+----------------+----------------+----------------+\\n| Ratio | 90% | 95% | 98% | \\n+-----------+----------------+----------------+----------------+\\n| DSR | 93.75 | 93.86 | 93.13 | \\n+-----------+----------------+----------------+----------------+\\n| SET | 92.46 | 91.73 | 89.18 | \\n+-----------+----------------+----------------+----------------+\\n| DeepR | 90.81 | 89.59 | 86.77 | \\n+-----------+----------------+----------------+----------------+\\n| Grasp | 93.30(0.1) | 93.04(0.2) | 92.19(0.1) |\\n+-----------+----------------+----------------+----------------+\\n\\n# VGG19 on CIFAR100\\n+-----------+----------------+----------------+----------------+\\n| Ratio | 90% | 95% | 98% | \\n+-----------+----------------+----------------+----------------+\\n| DSR | 72.31 | 71.98 | 70.70 | \\n+-----------+----------------+----------------+----------------+\\n| SET | 72.36 | 69.81 | 65.94 | \\n+-----------+----------------+----------------+----------------+\\n| DeepR | 66.83 | 63.46 | 59.58 | \\n+-----------+----------------+----------------+----------------+\\n| Grasp | 71.95(0.2) | 71.23(0.1) | 68.90(0.4) 
|\\n+-----------+----------------+----------------+----------------+\\n\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n- Guillaume Bellec , David Kappel, Wolfgang Maass, and Robert Legenstein\\\"Deep Rewiring: Training very sparse deep networks.\\\" ICLR 2018.\"}",
"{\"title\": \"Response to reviewer #4 [3/4]\", \"comment\": \"# ResNet32 on TinyImageNet\\n+-----------+----------------+----------------+----------------+\\n| Ratio | 85% | 90% | 95% | \\n+-----------+----------------+----------------+----------------+\\n| DSR | 57.08 | 57.19 | 56.08 | \\n+-----------+----------------+----------------+----------------+\\n| SET | 57.02 | 56.92 | 56.18 | \\n+-----------+----------------+----------------+----------------+\\n| DeepR | 53.29 | 52.62 | 52.00 | \\n+-----------+----------------+----------------+----------------+\\n| Grasp | 57.25(0.1) | 55.53(0.1) | 51.34(0.3) |\\n+-----------+----------------+----------------+----------------+\\n\\n# VGG19 on TinyImageNet\\n+-----------+----------------+----------------+----------------+\\n| Ratio | 90% | 95% | 98% | \\n+-----------+----------------+----------------+----------------+\\n| DSR | 62.43 | 59.81 | 58.36 | \\n+-----------+----------------+----------------+----------------+\\n| SET | 62.49 | 59.42 | 56.22 | \\n+-----------+----------------+----------------+----------------+\\n| Grasp | 60.76(0.2) | 59.50(0.3) | 57.28(0.3) |\\n+-----------+----------------+----------------+----------------+\\n(Note: DeepR is missing in the above table because it is extremely slow. We will complete once the experiments are finished.)\", \"further_discussions_for_the_above_results\": \"We can observe that DSR is the best performing one, and GraSP is quite competitive. In particular, GraSP performs much better than DeepR in most settings, and can outperform SET in more than half of the settings. Furthermore, we would also argue that GraSP has several advantages over these three \\u2018Pruning during Training\\u2019 methods:\\n\\n - Simplicity and easy-to-use: GraSP is much simpler than DSR, SET and DeepR, as it only needs to conduct pruning prior to training in a single-shot, and it does not change the sparsity dynamically during training. Moreover, there is almost no hyperparameters to tune for GraSP in comparisons with DSR, SET and DeepR. (i.e. DeepR is extremely slow and not scalable; SET requires manually specified pruning ratio for each layer; DSR requires some specific layers to be dense.)\\n\\n - Efficiency: GraSP can enjoy training acceleration (5x) by optimization in the hardware level (i.e. mapping the network topology structure to circuits or FPGA pre-programmed wiring). As we mentioned in [A], Dey et al. (2019) show that with **pre-specified** sparsity, the training can be accelerated by 5x. However, for dynamic sparse training (DSR, SET and DeepR), they need to change the sparse mask during training and thus cannot be optimized in the hardware level, and there is no GPU-accelerated libraries that utilize sparse tensor exist (Dettmers & Zettlemoyer 2019). Also it is almost impractical to optimize the training efficiency of dynamic sparse method in the hardware level, because their topology will change during training and recompile the FPGA program or changing the circuits will make the training even slower. \\n\\nIn general, we will not be surprised if DSR, SET, DeepR and other \\u2018Pruning during Training\\u2019 methods outperform \\u2018Pruning before Training\\u2019 methods, as they can change the sparsity during training dynamically and thus enjoy more flexibility than GraSP and SNIP. But they also have the disadvantages as we stated in A.a, A.b and the previous paragraph. 
\\n\\nThe baselines contained in \\u2018State of the sparsity\\u2019 are all in the category of \\u2018Pruning after Training\\u2019, and they all require a pretrained network, which does not save training cost and cannot be directly compared with GraSP. In our paper, these methods are only used to serve as an upper bound for \\u2018Pruning before Training\\u2019 methods, and we already included two of them, OBD and MLPrune. Moreover, the \\u2018State of the sparsity\\u2019 paper (Gale et al., 2019) did not include Hessian-based pruning algorithms, such as OBD and MLPrune, in its comparisons with magnitude pruning. We also find that in another ICLR submission ( https://openreview.net/pdf?id=ryl3ygHYDB ), OBD has been demonstrated to significantly outperform magnitude pruning (see Table 19 and Table 20), the best-performing method in the \\u2018State of the sparsity\\u2019 paper. Besides, we would also argue that SNIP is the most related, state-of-the-art baseline for pruning networks prior to training. \\n\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nGale, Trevor, Erich Elsen, and Sara Hooker. \\\"The state of sparsity in deep neural networks.\\\" arXiv preprint arXiv:1902.09574 (2019).\"}",
"{\"title\": \"Response to reviewer #4 [4/4]\", \"comment\": \"(3) ImageNet baselines;\\n\\nThank you for your kind words, we strongly agree with you that large scale experiments are important and necessary. Our purpose of ImageNet experiments is only for showing that GraSP can beat SNIP consistently even on more challenging and larger datasets. As we mentioned in the beginning of our response, we think the only baseline of single-shot pruning is SNIP. Therefore, we did not include other baselines in this experiment. We agree that including more baselines will make our empirical results stronger, but it won\\u2019t change our conclusion that GraSP is better than SNIP. To have a sense, we provide a rough comparison between the results of SET, DSR, Deep-R and GraSP on ImageNet with ResNet50 referred from their original papers: \\n\\n+\\u2014\\u2014\\u2014---+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--+\\u2014\\u2014\\u2014-\\u2014\\u2014--+\\n| Model | SET | DSR | Deep-R | GraSP |\\n+\\u2014\\u2014\\u2014---+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--+\\u2014\\u2014\\u2014\\u2014\\u2014---+\\n| 80% | 72.6 | 73.3 | 71.7 | 72.06 |\\n+\\u2014\\u2014\\u2014---+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--+\\u2014\\u2014\\u2014\\u2014\\u2014---+\\n| 90% | 70.4 | 71.6 | 70.2 | 68.14 |\\n+\\u2014\\u2014\\u2014---+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014+\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--+\\u2014\\u2014\\u2014\\u2014\\u2014---+\\nWe can observe that GraSP is still quite competitive in this setting, and it outperforms DeepR at the pruning ratio of 80%, though GraSP is a single-shot pruning algorithm. It is very encouraging that single-shot pruning algorithm can perform as competitively as other \\u2018Pruning during Training\\u2019 methods. \\n\\n\\n(4) Usefulness of the pruning criteria.\\n\\nWe really thank reviewer for proposing some interesting ablation studies. (1) For reducing gradient norm, we found that it will result in disconnected networks for high pruning ratios, and thus correspondingly performs much worse. (2) For \\u2018random pruning\\u2019, we adopt the sparsity allocation identified by GraSP and then shuffle the sparse mask. We found that for low pruning ratios, shuffling the mask does not degrade the performance much, while for high pruning ratios, i.e., 98%, 99%, it will degrade the performance a lot. We conjecture that for low pruning ratios, the pruned network is still moderately over-parameterized, and thus the shuffling operation will not affect the performance much. Apart from these ablation study, we believe that the best way for showing the usefulness of a pruning criteria is the empirical results in terms of pruning-ratio vs. Test accuracy. \\n\\n(5) (Page 8 / Table 5) Do you aggregate all accuracies in Table 5 using different batch sizes and initialization methods? \\n\\nYes, they are averaged over multiple runs. The purpose of them is for sensitivity analysis, so as to show that our pruning criteria is not sensitive to different batch sizes and initialization schemes. \\n\\n- Response to minor comments.\\n\\nWe've updated our paper to incorporate your suggestions on writing and citations. 
As for why we reported results on VGG networks, our main purpose was to simulate the case of feedforward networks without skip-connections (we also reported results on ResNet, i.e. with skip-connections, in our paper). We agree that experimenting with more recent networks is valuable, but we should avoid running duplicated experiments. \\n\\nWe really appreciate your valuable comments and careful assessment of our work. We hope our response addresses your concerns well, and if you have any further concerns/questions/suggestions, please let us know!\"}",
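As a concrete companion to point (4) in the response above, here is a minimal sketch of the per-layer mask-shuffling ablation (the "random pruning" control that keeps GraSP's layer-wise sparsity allocation but randomizes which weights survive). This is a hypothetical illustration, not the authors' released code; `masks`, a mapping from layer name to a 0/1 numpy array, is an assumed input.

```python
import numpy as np

def shuffle_masks(masks, seed=0):
    """Preserve each layer's sparsity level but randomize which weights
    are kept, mirroring the mask-shuffling ablation described above."""
    rng = np.random.default_rng(seed)
    shuffled = {}
    for name, mask in masks.items():
        flat = mask.ravel().copy()
        rng.shuffle(flat)  # same number of kept weights per layer
        shuffled[name] = flat.reshape(mask.shape)
    return shuffled
```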
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thanks for your detailed comments, and in particular for your valuable suggestions on improving the writing. We've updated our paper to incorporate your suggestions.\\n\\nResponses to questions/comments:\\n\\t\\n(1) In paragraph below Equation (8): what does \\\"can be computed by backward twice\\\" mean?\\n\\n\\u201cBackward twice\\u201d means that we first compute the gradient with respect to the weights as $\\\\mathbf{g} = \\\\partial \\\\mathcal{L}/\\\\partial \\\\mathbf{\\\\theta}$ (the first backward), and then we compute the Hessian vector product by simply computing $\\\\mathbf{Hv} = \\\\partial (\\\\mathbf{g}^\\\\top \\\\mathbf{v})/\\\\partial \\\\mathbf{\\\\theta}$ (the second backward, and we only differentiate through $\\\\mathbf{g}$). By doing so, we do not need to compute the Hessian explicitly. \\n\\t\\n(2) Please specify where the equalities in equation (9) are coming from.\\n\\nFirst of all, $\\\\nabla \\\\mathcal{ L}(\\\\mathbf{\\\\theta}) = \\\\nabla_\\\\mathbf{\\\\theta} \\\\mathcal{Z}^\\\\top \\\\nabla_\\\\mathcal{Z} \\\\mathcal{L}$, where $\\\\mathcal{Z}$ is defined in sec 2.2, page 3. Then we can rewrite $\\\\nabla \\\\mathcal{L}(\\\\mathbf{\\\\theta})^\\\\top \\\\nabla \\\\mathcal{L}(\\\\mathbf{\\\\theta})$ as the second term in equation (9). As we reviewed in sec 2.2 (the paragraph below equation (3)), we can decompose the NTK $\\\\Theta$ as $\\\\sum_{i=1}^n\\\\lambda_i\\\\mathbf{u}_i\\\\mathbf{u_i}^\\\\top$, and plug it in the equation (9) can show the equality.\\n\\n(3) Table 3 & 4: Why are the pruning ratios different for each model?\\n\\nThe choice of pruning ratios depends on the specific dataset and base network. For ImageNet, we cannot prune as extreme as on Tiny-ImageNet, otherwise the performance of the pruned network will degrade too much and making the comparisons not meaningful. For ResNet32, it is already much more compact than VGG19, so we need to use smaller pruning ratios for ensuring the comparisons are meaningful. \\n\\n(4) Table 3: Why are values missing for the baseline for 80% and 90%?\\n\\nIt\\u2019s not missing. The baseline is the unpruned network (pruning ratio = 0%), rather the sub-network corresponding to pruning ratios of 60%, 80% or 90%.\\n\\n(5) Section 5.2: \\\"We observed that, the main bottleneck or pruned... when deriving the pruning criteria\\\": it's not clear where this conclusion is coming from.\\n\\nThe observation comes from Figure 2. We can see that the training error of SNIP-pruned network is far away from 0, which means it cannot fit the training data well, i.e. underfitting.\\n\\n\\nWe hope our response can address your concerns well. If you have any further questions or concerns, please let us know!\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your detailed comments! It\\u2019s really encouraging that you think the research direction we\\u2019re working on is important.\\n\\nIn terms of your concerns, we address them one by one below.\\n\\n(1) We agree that scaling up the logits weights leads to the same effect on the NTK. Nevertheless, we would like to argue that pruning itself might not have the flexibility to change the scale since it only involves the operation of removing weights. From this perspective, we believe that our algorithm is to align labels with NTK eigenspectrum rather than changing the scale. Indeed, we provide the training loss curve for both SNIP and GraSP in Figure 2, and we can observe that models pruned by GraSP converge much faster than SNIP and also achieve lower training error. To further verify if the difference is caused by using too small a learning rate for SNIP, we conducted experiments with the same setting as in Figure 2, but increased the learning rates for SNIP. We tried learning rates of 0.3, 1.0 and 2.0. The final test accuracies are: \\n+------------------------------------------------------------------------+\\n| LR | 0.3 | 1.0 | 2.0 |\\n+------------------------------------------------------------------------+\\n| Acc | 55.5(+/- 1.2) | 48.7(+/- 1.6) | 10.95(+/- 6.9) |\\n+------------------------------------------------------------------------+\\nAll experiments are averaged over three runs. These results show that further performance gain cannot be obtained by simply using larger learning rates. The corresponding training loss curve can be viewed in https://drive.google.com/file/d/1KUcsGhgj9p1X7rPa7v_0_D5JEWTtVjOR/view . Overall, increasing the learning rate for SNIP does NOT result in better final accuracy or accelerated optimization. \\n\\n(Minor: Precisely, NTK has the same eigenspectrum as the empirical Fisher matrix rather than the Hessian matrix, though in some cases they are equivalent.)\\n\\n(2) We have observed that, for large pruning ratios, underfitting is indeed the major problem for pruning algorithms such as SNIP (indicated by the fact that final training error is far away from 0, see Figure 2), because the capacity of the pruned network will be largely affected by the resulting structure, and SNIP will in general result in a bottleneck (prune too many weights) in intermediate layers, whereas it is less severe for GraSP. Besides, we would argue that the bad performance of SNIP for high pruning ratios is not due to a lower effective learning rate based on our results reported in (1) (see above). We didn't observe clear performance improvement by tuning the learning rates for SNIP. Therefore, it\\u2019s much more likely that the bad performance of SNIP is due to the pruning strategy induced by the SNIP objective. \\n\\n(3) No, the batch size is only for the computation of the Hessian vector product in the GraSP. The training procedure is the same as stated in sec 5.1. \\n\\t\\nWe hope our response resolves your concerns well, and if you have any further questions or concerns, please let us know!\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes a novel one-shot-pruning algorithm which improves the training of sparse networks by maximizing the norm of the gradient at initialization. The utility of training sparse neural networks and shortcomings of dense-to-sparse algorithms like Pruning, LotteryTicket are nicely motivated at introduction. The pruning criterion is motivated by the first order approximation of the change in the gradient norm when a single connection is removed, though the results show that removing many connections together with GraSP increases the total gradient norm therefore allowing the loss to decrease faster. Experiments suggest employing such pruning algorithm improves final performance over two baselines: random and SNIP.\\n\\nThough I find the proposed method intriguing and well motivated, experiments section of the paper misses some important sparse training baselines and needs some improvement. I am willing to increase my score given my concerns/questions below are addressed. \\n\\n(1) The paper doesn't mention some important prior work on the topic. Since the paper focuses on end-to-end sparse training, the following sparse training methods needs to be considered and compared with:\\n- Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science [Mocanu, 2018]\\n- Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization [Mostafa, 2019]\\n- Deep Rewiring: Training very sparse deep networks [Bellec, 2017] \\n- There is also few recent work submitted to ICLR2020: https://openreview.net/forum?id=SJlbGJrtDB, https://openreview.net/forum?id=ryg7vA4tPB, https://openreview.net/forum?id=ByeSYa4KPS\\n\\n(2) Pruning baselines can be improved. I am not convinced that they represent the best achievable sparse training results. I would recommend method proposed by `Prune or Not to Prune` as a strong baseline. You can also check `State of Sparsity` paper to obtain some competitive pruning results.\\n\\n(3) It's great that the authors are aware of the importance of having experiments on larger datasets. Though, I found the results reported on Imagenet to be limited. Is there a reason why Imagenet-2012 results are missing pruning baselines? I think having other reported pruning results here along with performance of other sparse training methods (SET, DSR) would be useful. Most of these numbers should be readily available in the papers mentioned above, but I guess it is always better to run them using the same settings. \\n\\n(4) To demonstrate the usefulness of the pruning criteria proposed, it would be nice to do some simple ablations. Some suggestions: (1) Remove weights that would *decrease* the gradient norm most (2) Do random pruning while preserving exact per layer sparsity fractions. (3) sweep over batch size used to calculate the importance scores and evaluate final accuracies or the initial gradient norm. The second experiment would help identifying whether the gains are due to better allocation of sparsities across layers or due to increased gradient norm. 
Looking at Figure 4 and seeing that the per-layer sparsities are different, it is not clear to me which one is the underlying reason for the improved performance.\\n\\n(5) (Page 8 / Table 5) Do you aggregate all accuracies in Table 5 using different batch sizes and initialization methods? If so, I am not sure what the intended message here is, since it is difficult to infer how these hyper-parameters affect the result. Do you sweep different batch sizes for estimating the importance of units, too? It would be nice to see whether the two batch sizes interact with each other and/or how increased batch size affects the quality of pruned networks.\", \"some_minor_comments\": \"(a) (Page 1) I found the motivation very intriguing. Though the statement `Recently, F&C (2019) shed light on this...` seems a bit off, given that LT can't find solutions as well as the pruning solution in most practical (larger datasets and architectures) settings. Therefore it would be better to pose this as an `open problem`. \\n\\n(b) (end of page-1) `However, connection sensitivity is sub-optimal as a criterion because the gradient of each weight might change dramatically after pruning due to complicated interactions between weights`. I think this is still the case for GraSP, since the criterion it uses assumes independence (i.e. what if we remove a single weight?). It would be nice to see some ablations on this. Does `K=number of weights removed` affect the norm of the sparsified networks?\\n\\n(c) (Figure 1) I find the comparative illustration between SNIP and GraSP very useful. Though, the architecture presented seems a bit artificial (i.e. I am not aware of any architecture with a single hidden layer and a single output unit). I think the same motivation can be made by removing the top unit (therefore having 6-4-1 units) and removing all incoming connections for the output unit until a single connection remains. Then SNIP would remove that single connection whereas GraSP would remove one of the connections in the previous layer.\\n\\n(d) (Section 2.1) `In contrast, Hessian based algorithms...` Though it is a structured pruning algorithm, it might be nice to include the following work, https://arxiv.org/abs/1611.06440. \\n\\n(e) (Section 2.1) Previous work needs the following citations: [Bellec, 2017], [Mocanu, 2018] and [Mostafa, 2019] \\n\\n(f) (Section 2.2) Why do the initial dynamics affect the final performance? One explanation given in the paper is through recent work on NTK and this is great. Though the training settings used in `Lee et al (2019a)` and in the paper are a bit different. Usage of MSE, small datasets, etc\\u2026 So it might be nice to point out the differences. \\n\\n(g) (Section 3) At $D = {(x_i, y_i)}_{i=1}^n$, `n`->`N` \\n\\n(h) (Page 4) `Preserving the loss value motivated several\\u2026` -> `motivated by several\\u2026`\\nI think it is better to use existing terminology whenever available. I think using `One-shot pruning` instead of `Foresight pruning` would be a better choice and would prevent confusion. \\n\\n(j) (Page 5) `However, it has been observed that different weights are highly coupled \\u2026` This has been observed much earlier, too: like in Hassibi, 1993. \\n\\n(k) (Page 7) Last sentence `and thus hopefully..`: needs to be fixed.\\n\\n(l) (Page 8) The whole page needs some proof-reading. Some examples: (a) `SNIP and GraSP. 
We present...` probably connect with comma (b) `aims for preserving` -> `aims to preserve` (c) `In contrast, SNIP are more` `are`->`is` (d) `for ablation study` -> `as an ablation study`... \\n\\n(m) Is there a specific reason why VGG networks are preferred for the experiments? I don't think they are relevant to any practical application anymore and they are massively over-parameterized for the tasks at hand, specifically for Cifar-10. I think focusing on more recent networks and larger datasets would increase the impact of the work.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a method to prune networks at initialization in a way that (mostly) preserves the gradient flow through the resulting pruned network. This is a direct improvement over previous methods (e.g. SNIP) which have no guarantees that pruned connections will break the gradient flow and thereby harm learning.\\n\\nI quite like this paper, the motivation and results are convincing and it is well presented. The writing is excellent for most of the paper. From section 5 onwards the writing does need quite a bit of editing, as its quality is significantly reduced from what came before.\", \"some_detailed_comments\": [\"Figure 1 is very nice and really clarifies the idea!\", \"In paragraph below Equation (8): what does \\\"can be computed by backward twice\\\" mean?\", \"Please specify where the equalities in equation (9) are coming from.\", \"Table 3 & 4: Why are the pruning ratios different for each model?\", \"Table 3: Why are values missing for the baseline for 80% and 90%?\", \"Section 5.2: \\\"We observed that, the main bottleneck or pruned... when deriving the pruning criteria\\\": it's not clear where this conclusion is coming from.\", \"Table 5 has no batch size results, even though you're referencing them in the text.\"], \"and_some_minor_comments_to_help_with_the_writing\": [\"Intro: \\\"As shown in Dey et al. (2019) that with pre-specified sparsity, they can achieve\\\" would read better as \\\"As shown by Dey et al. (2019), with pre-specified sparsity one can achieve\\\"\", \"Equation (3): Clarify that this is a function of $t$\", \"Sentence below Equation (6): \\\"of the pruned network, and thus our goal\\\" remove the \\\"and thus\\\"\", \"Table 1: Specify that you're reporting accuracy.\", \"Section 4.1: \\\"e.g. wide ResNet (Zagaruyko & Komodakis, 2016), and thus we can regard\\\" remove the \\\"and thus\\\"\", \"Sentence below equation (9): \\\"encouraging the eigenspace of \\\\Theta align\\\" add a \\\"to\\\" before \\\"align\\\"\", \"Sentence before section 5: \\\"it will encourage the eigenspace of the NTK distributing large eigenvalues in the direction of Y, which will in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits to the optimization in A\\\" would read better as \\\"it will encourage the eigenspace of the NTK to distribute large eigenvalues in the direction of Y, which in turn accelerates the decrease of the loss (Arora et al., 2019) and benefits the optimization in A\\\"\", \"Throughout section 5, write it in present tense rather than past tense. e.g. 
\\\"In this section, we conduct various experiments\\\" instead of \\\"In this section, we conducted various experiments\\\"\", \"Sentence below table 2: you have \\\"the the\\\"\", \"Second paragraph of section 5.1: \\\"We can observe GraSP outperform random pruning clearly\\\" would read better as \\\"We can observe GraSP clearly outperforms random pruning\\\"\", \"Second paragraph of section 5.1: \\\"In the next, we further compared\\\" remove \\\"In the next\\\"\", \"Second paragraph of section 5.1: \\\"Besides, we further experimented with the late resetting\\\" remove \\\"Besides\\\"\", \"Paragraph above section 5.2: \\\"GraSP surpassing SNIP\\\" use \\\"surpasses\\\" instead\", \"Paragraph above section 5.2: \\\"investigate the reasons behind in Section 5.2 for promoting better understanding\\\" would read better as \\\"investigate the reasons behind this in Section 5.2 for obtaining a better understanding\\\"\", \"Section 5.2: \\\"We observed that, the main bottleneck\\\" -> \\\"We observe that the main bottleneck\\\"\", \"Section 5.2: \\\"Besides, we also plotted the the gradient norm of the pruned\\\", remove \\\"Besides\\\" and the extra \\\"the\\\"\", \"Section 5.2: \\\"the average of the gradients of the entire dataset\\\" use \\\"over the entire dataset\\\"\", \"Section 5.2: \\\"hopefully more training progress can make as evidenced\\\" would read better as \\\"hopefully more training progress can be made as evidenced\\\"\", \"Section 5.3 title would be better using \\\"Visualizing\\\" instead of \\\"Visualize\\\"\", \"Section 5.3: Join the first two sentences with a comma into a single sentence.\", \"Section 5.3: \\\"In contrast, SNIP are more likely\\\" -> In contrast, SNIP is more likely\\\"\", \"Section 5.4: \\\"for ablation study\\\" would read better as \\\"via ablations\\\"\", \"Section 5.4: \\\"we tested GraSP with three different initialization methods;\\\" use a \\\":\\\" instead of \\\";\\\"\", \"Section 6: \\\"Besides, readers may notice that\\\", remove the \\\"Besides\\\"\", \"Section 6: \\\"traditional pruning algorithms while still enjoy the cheaper training cost. As an evidence,\\\" would read better as \\\"traditional pruning algorithms while still enjoying cheaper training costs. As evidence,\\\"\", \"Your citation for Evci et al. (2019) is missing the publication venue/arxiv ID.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a new prunning criterion that performs better than Single-shot Network Pruning (SNIP) in prunning a network at the initalization. This is an important and potentially very impactful research direction, The key idea is to optimize the mask for the loss decrease after an infinimitesal step, rather than for the preservation of loss after prunning. While with the benefit of hindsights it might seem simple, it is a clever innovation. However, I am not convinced by the theoretical explanation and some of the experimental results (see detailed comment below). Based on this I am leaning at the moment towards rejecting the paper. I will be happy to revisit my score if these concerns are addressed.\", \"detailed_comments\": \"1. I am not sure that NTK based analysis helps explain the efficacy of the method. An increase of the (matrix) norm of the NTK kernel can be achieved by simply scaling up by a constant scalar the logits weights (see for instance https://arxiv.org/abs/1901.08244). Or equivalently (comparing the resulting learning dynamics in NTK, as also can be read from (3)), by just increasing the learning rate. In other words, I could just prune weights randomly, and then scale up logits' weights, and end up with the same effect on the NTK kernel. I think that for this argument to work, NTK kernel should change in a scale-invariant manner. This would correspond to a better conditioning of the loss surface (because Hessian has the same eigenspectrum as the NTK kernel under the NTK assumption), which is a scale invariant property.\\n\\n2. From the Figure 2 it seems SNIP-prunned network underfits data severly. Could you add training accuracy to the Tables (maybe in the Supplement)? If in all cases when GraSP wins, it is due to underfitting, this should be commented on. Is it common for prunning algorithms to result in underfitting, or is achieving generalization a larger challenge? Could the bad performance at high prunning ratios of SNIP be due to a conflation of two effects: (1) \\\"good\\\" prunning, but (2) lowering the effective learning rate (given the gradient norm is low)? Would, for high prunning ratios, a tuned learning rate improve SNIP performance/reduce underfitting? \\n\\n3. In Table 5 is the batch-size used for training of the network, or only for the computation of the Hessian-vector product in the GraSP procedure? If for training, then the relatively small spread of results is a bit surprising given results by Keskar (https://arxiv.org/abs/1609.04836)\\n\\nEdit\\n\\nThank you for the rebuttal. Raise my score. I agree with Reviewer #4 that increasing gradient norm at initialization is a promising direction on its own, which warrants acceptance.\"}"
]
} |
Byg9AR4YDB | Exploring Cellular Protein Localization Through Semantic Image Synthesis | [
"Daniel Li",
"Qiang Ma",
"Andrew Liu",
"Justin Cheung",
"Dana Pe’er",
"Itsik Pe’er"
] | Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses. As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies. A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight (MIBI-TOF), allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images. To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention. By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels. Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods. Our model, cell-cell interaction GAN (CCIGAN), outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics. To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under simulated conditions through image synthesis. | [
"Computational biology",
"image synthesis",
"GANs",
"exploring multiplex images",
"attention",
"interpretability"
] | Reject | https://openreview.net/pdf?id=Byg9AR4YDB | https://openreview.net/forum?id=Byg9AR4YDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"50oUezAM1n",
"r1x5KUi_jB",
"S1gVULo_or",
"SkxiZLoOor",
"HkgtWHoOor",
"S1gvgVs_sB",
"r1xZRXjOjr",
"SyO0QgHqr",
"ryxXuMHAtH",
"HygLks1AFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723388,
1573594753601,
1573594699692,
1573594626630,
1573594368755,
1573594095453,
1573594057096,
1572303823517,
1571865195046,
1571842782348
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1442/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1442/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1442/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a dedicated deep models for analysis of multiplexed ion beam imaging by time-of-flight (MIBI-TOF).\\n\\nThe reviewers appreciated the contributions of the paper but not quite enough to make the cut.\\n\\nRejection is recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2 Part 1\", \"comment\": \"We thank the reviewer for their time and comments and would like to clarify and include the following in response to the listed major concerns:\\n\\nWe\\u2019re glad you liked our manuscript! All of the major concerns (and additional experiments) are also reflected in the updated copy of our manuscript.\\n\\n1. \\u201cWet experiments are mature\\u201d\\nThough wet lab experiments have significantly progressed, there are still major challenges in acquiring the data and interpreting it. Obtaining data is a time and labor intensive process and for patient data, we usually do not necessarily have nice controls while trying to interpret complicated interactions. Furthermore there\\u2019s a trade off on resolution and quality for the data obtained; this is where we believe CCIGAN is useful. We have included an additional paragraph at the end of the introduction which clarifies the value of CCIGAN as an extension of MIBI-TOF data collection and its benefits for high throughput assessment of biological scenarios that would not be possible with wet experiments alone. \\n\\n\\u201cHow can the computational synthesized image help the biologists then? Are there any cases that only the CCIGAN can do while the real experiment can not do?\\u201d\\nIt seems that a possible source of confusion is the lack of clarity in the introduction. We have updated it appropriately. We have added a Figure 1(B) (https://imgur.com/a/pKzch1J ) to show the purpose of the model is that it allows us to pose counterfactual scenarios such as \\u201cwhat effect does adding cell type X next to cell type Y have on these two cells\\u201d. This allows us to define a hypothesis testing environment for biologists without necessarily needing to scan multiple tissue slices to search for the exact scenarios. Theoretically given unlimited data, we can search for each cell-cell interaction scenario we wish to test, however, given the heterogeneous nature of cell interactions within tumor environments this is not a feasible approach. CCIGAN circumvents this hurdle posed by the true biological data in that it allows for us to construct any number of exacting cellular scenarios and output biologically consistent predictions. Because of this, the situations we have tested and quantified (e.g. varying number of cells, cell type, surface area through direct manipulation) cannot be easily (or feasibly) achieved by examining the real data. Recent work by Wu et al [1] have demonstrated the value of hypothesis testing in deep learning models in contributing to understanding of complex biological interactions. We have included this citation in our updated introduction as a demonstration of the additional value that deep learning models bring to the investigation of basic biological relationships. \\n\\n\\u201call the results in Section 6.2 can be easily achieved with the real data, right?\\u201d\\nNo, this is not the case. Similarly to above, to find the situations we have tested (if it exists at all in the limited data), we would not have enough permutations and perturbations of the experiment to make a claim or to functionally quantify the experiment. \\n\\n2. \\u201cwould it be accepted by the biologists in terms of performance? 
Would they believe in the results?\\u201d\\nWhile it is already difficult for anyone to trust a GAN without proper evaluation, by recapitulating previously accepted and clinically proven biological phenomena under various control and test studies, we can demonstrate that CCIGAN learns biologically consistent insights, which suggests that the counterfactual scenarios we pose are, in turn, accurate. In addition to the current evaluation recapitulating and newly quantifying previously established biological phenomena (PD1/PD-L1 [2], CD8/pan-keratin [3], Sections 5.2, 5.3), we have added another experiment to demonstrate that CCIGAN is able to capture and further quantify the same effects reported in MIBI-TOF\\u2019s triple negative breast cancer (TNBC) study [4] regarding tumor-infiltrated and non-infiltrated environments (Section 5.4). This shows CCIGAN is able to recapitulate previously discovered biology from non-GAN methods and take the first steps towards quantifying it.\\n\\n\\u201cin the last column of Figure 2, we can see clear artifacts for the results generated by CCIGAN.\\u201d\\nGood eye! While this may seem like an artifact, it\\u2019s actually a biologically consistent scenario of tumor expression of PD-L1. Any of the tumor cells (red) could potentially express PD-L1, as shown in the real data. While other models may learn this association, they do not learn it in a biologically consistent manner, as demonstrated in Section 5.3\\u2019s CD8/pan-keratin experiment.\"}",
"{\"title\": \"Response to Reviewer #2 Part 2\", \"comment\": \"\", \"minor_concern\": \"1. What's the resolution of the MIBI-TOF and CCIGAN?\\nMIBI TOF is 800 $mm^2$ at 2048x2048 pixels, doing some rearranging we see that 64x64 $\\\\rightarrow$ 64/2048 *800 = $25 mm^2$\\n\\n2. The introduction of the background could be further refined. The many-to-many mapping between different cell types and different protein markers is not emphasized explicitly. Readers can get lost easily. \\nFixed! We also included Figures 1(B) (https://imgur.com/a/pKzch1J ), Figure 5 (https://imgur.com/a/1gckfA3 ) showing the purpose of the model of being able to pose arbitrary cell orientation scenarios. Additionally we have revised our evaluation portion to make it easier to understand what biological phenomenon we are corroborating and which metrics we used to do so.\\n\\n3. The data part is not clear. How many M*2048*2048 images do the authors have for training as well for testing? \\n1 image with a 90-10 split on the image space to ensure no cell overlap. Now we have trained and included additional patients (each patient has one (M,2048,2048) image).\\n\\n4. Why do the authors choose 64*64 as the image path size?\\nThis is a common output size, we can scale our model larger but computationally for other datasets. This provides on average 8 cells, with a max of 15 and this captures enough interactions. Later on as we explore more datasets, we can scale this resolution up.\\n\\n5. Can the model be generalized well for data collected in different experiments (i.e., different tissues) but from the same machine? Can the model be generalized well across different machines with the same imaging experimental setting?\\nYes the model is agnostic to the technology (for example CODEX, Vectra) as long as semantic segmentations are given. We are evaluating more patients to show cross patient tumor (infiltrated/non infiltrated) micro-environments that is of value to biologists. This will allow different hypothesis testing environments for understanding interactions between patients.\\n\\n6. Is the model sensitive to the preprocessing of the data, like normalization? As for as I know, the baseline expression level of tissue can vary significantly at different time points within one day. If the model is sensitive to that, it will affect the usage of the model. \\nNormalization matters in these experiments (current protocols stain together to limit batch effects between FOV, but this is experimental wet lab side). Yes normalization can affect samples and in turn the model, but this is all on the experimental side. We are assuming these steps are done properly before. Additionally, we are measuring relatively within the patients. There are experimental techniques to try to mitigate these effects, such as patient samples being made into a TMA (tissue microarray) in one slide and stained at the same time to try to minimize the batch effects. Autofluorescence will add some base noise, but MIBI does not do this, so signals are much less affected in terms of expression and non expression.\\n\\n\\nCitations\\n[1] Yichen Wu, Yair Rivenson, Hongda Wang, Yilin Luo, Eyal Ben-David, Laurent A Bentolila, Chris-tian Pritz, and Aydogan Ozcan. Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning.Nature Methods, 2019.\\n[2] Yoshiko Iwai, Masayoshi Ishida, Yoshimasa Tanaka, Taku Okazaki, Tasuku Honjo, and Nagahiro Minato. 
Involvement of pd-l1 on tumor cells in the escape from host immune system and tumor immunotherapy by pd-l1 blockade. PNAS, 19:12293\\u201312297, 2002.\\n[3] RG Oshima. Apoptosis and keratin intermediate filaments. Cell Death and Differentiation, 9:486\\u2013 492, 2002.\\n[4] Keren, Leeat, et al. \\\"A structured tumor-immune microenvironment in triple negative breast cancer revealed by multiplexed ion beam imaging.\\\" Cell 174.6 (2018): 1373-1387.\"}",
"{\"title\": \"Response to Reviewer #1 Part 1\", \"comment\": \"We thank the reviewer for their time and thorough comments!\\n\\n(1) Not yet finished w.r.t. conveying sufficient motivation for the use of image synthesis for distilling biological insights:\\n\\nSimilar to the later specific criticisms and suggestions 1), this stems from our previous introduction not being clear. We have revised our introduction to showcase the model\\u2019s ability to \\u201c[engage] with counterfactual scenarios like \\u2018what effect does it have to add cell type X next to cell type Y\\u2019 \\u201d (Figure 1(B), https://imgur.com/a/pKzch1J ). The motivation for image synthesis is that it allows us to distill more information (in a many to many mapping) from beyond just the cell type (segmentation map). As MIBI measures much more information in the protein channels (ie various localizations at a subcellular resolution), we want to see how predictive the neighborhoods and cell types are of a cell\\u2019s phenotype. Additionally through SPADE and our attention mechanism as an image synthesis technique, we are able to condition on surrounding cell neighbors and capture cell-cell interactions in a local receptive field. The role of CCIGAN as a step forward from MIBI-TOF data collection is more clearly stated earlier in the paper. Furthermore, CCIGAN\\u2019s value as a tool towards expediting high throughput assessments of cell-cell interactions is clarified, particularly with regard to the rationale of our research aim. This is reflected in our new introduction.\\n\\n(2) Not yet finished w.r.t. evaluating the success or failure of the technique primarily in terms of generating useful biological insights + there are enough red flags with regard to knowledge of the data generation and underlying biology that it's unclear if the authors could correctly sanity check any insights they extract from their model\\n\\nIn addition to the current evaluation of recapitulating and newly quantifying previous established biological phenomenon (PD1/PD-L1 [1], CD8/pan keratin [2]), we have added another experiment to demonstrate that CCIGAN is able to capture and quantify the same effects reported in a study of triple negative breast cancer (TNBC) patients imaged by MIBI-TOF [3] publication regarding tumor infiltrated and non infiltrated environments (more on this later). This illustrates CCIGAN is able to recapitulate prior discovered biology from non-GAN methods and the first steps to quantify them.\\n\\nFor data generation, the MIBI machine is limited because of the long time it takes to generate data. This is where CCIGAN is useful in creating counterfactual scenarios as opposed to searching for these instances (if any) in the real data. Additionally for CCIGAN, it is helpful in signal representation as it is mass-spec as opposed to fluorescence based where the latter has signals that are amplified and saturated. This leads to non quantitative measurements. Whereas in MIBI, each count is an individual protein. Because of the counts CCIGAN is able to learn a more biologically consistent representation.\\n\\nSanity checking through quantifying and evaluating images is a biologically difficult problem at a cellular level, and a more difficult problem at a subcellular level. 
In addition to our previous evaluation process of biologically motivated metrics, where we spatially quantify direction and orientation through weighted centroid vector analysis and Earth Mover\\u2019s Distance, we use a simpler expression ratio sum to demonstrate that we capture the same biological phenomena as [3] in the newly added (non-)infiltration experiment.\", \"for_specific_criticisms_and_suggestions\": \"1. Thanks for pointing this out! We have revised our introduction and related work, and added Figure 1(B) exhibiting specific counterfactual scenarios that can be posed to CCIGAN. \\n\\n2. We have moved the bulk of the evaluation section to the appendix and have restructured Sections 5 and 6 to present each biological phenomenon together with a high-level overview of the metric chosen to evaluate it. We hope that this makes it easier to follow. We wholeheartedly agree that trusting a GAN without proper evaluation is a bad idea. In addition to the previous PD1/PDL1 [1] and CD8/pan-keratin [2] experiments, we have updated the results section (point (2)) to incorporate a MIBI-TOF TNBC study [3]. \\n\\nUnfortunately, a global bank illustrating simple cell-cell interactions does not exist, and even then, simple cell-cell interactions within the tumor microenvironment are quite complicated in that they have multiple higher-order causal factors [4-6]. Furthermore, many of the markers in the original MIBI cohort were used for cell identification, not for studying protein localization. The best a model can do to demonstrate biological consistency and establish trust (within the scope of ICLR) is to recapitulate known, vetted, and accepted biological phenomena. Through our now 3 independent experiments, we believe we have demonstrated biological consistency and laid the foundation for interpreting cell-cell interactions through GAN methods.\"}",
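As a concrete illustration of the weighted-centroid idea mentioned in the evaluation discussion above, the sketch below computes the offset between a cell's geometric centroid and its expression-weighted centroid for one protein channel; the direction of this vector indicates where expression concentrates. This is a hypothetical simplification, not the paper's exact metric; `channel` (an H x W protein image) and `cell_mask` (a boolean H x W mask for one cell) are assumed inputs.

```python
import numpy as np

def centroid_offset(channel, cell_mask):
    """Vector from a cell's geometric centroid to its expression-weighted
    centroid for one protein channel."""
    ys, xs = np.nonzero(cell_mask)
    geometric = np.array([ys.mean(), xs.mean()])
    w = channel[cell_mask]  # expression values inside the cell (row-major)
    weighted = np.array([(w * ys).sum(), (w * xs).sum()]) / (w.sum() + 1e-8)
    return weighted - geometric
```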
"{\"title\": \"Response to Reviewer #1 Part 2\", \"comment\": \"3. Thanks for catching this! It was a miswriting error. We see how it can be construed as MIBI-TOF bombards ((a tissue sample )(with elemental metals tethered to respective antibodies)) as opposed to (a tissue sample, with elemental metals tethered to respective antibodies). We have updated it to: Given a tissue sample that is first stained with antibodies tethered with elemental metals, MIBI-TOF bombards the sample with simple ions causing the release of metal ions. \\n\\n4. We have updated this to be less vague to \\u201csuch as pan-keratin and the overexpression of beta-catenin\\u201d. In the MIBI-TOF TNBC paper, beta-catenin was used to classify tumor cells (Figure 1A, [3]). We have also included the full list of markers to be more clear.\\n\\nMinor comments/Small nits:\\n\\n1. We have updated some of the writing from the repeated use of \\u201cantibodies\\u201d\\n\\n2. Thanks for pointing out our unclear writing. All the markers were used for cell typing in the original paper (where we obtained our classifications from). For our specific purposes, some of them are not interesting to us (i.e. CD45 is a status indicator and is too broad to study its localizations). They were useful for cell typing but not for studying cellular protein localization. We have included a list of markers in the appendix and provide a brief explanation on the unused markers (usually they would be empty or indicators for status). Even still, some of the markers we had selected, still ended up being blank/empty.\\n\\nUsed (24): Pan-Keratin, EGFR, Beta catenin, dsDNA, Ki67, CD3, CD8, CD4, FoxP3, MPO, HLA-DR, HLA_Class_1, CD209, CD11b, CD11c, CD68, CD63, Lag3, PD1, PD-L1, IDO, Vimentin, SMA, CD31\\nNot used (12): CD16, B7H3, CD45, CD45RO, Keratin17, CD20, CD163, CD56, Keratin6, CSF-1R, p53, CD138\\n\\n3. Thanks for pointing this out! We see on the biological perspective, the typical TNBC patient has a single digit number of amplified segments which does not alter the overall amount of dsDNA than a few percent [7].\\n\\nFor this portion we were trying to explain the learned vectors from a technical attention perspective. While this is biologically the case, we can empirically observe the average values to be very similar across HLA Class 1 (MHC-I) and dsDNA and similarly expressed across all cells (average MHC-I/HLA Class 1: 0.049, average dsDNA: 0.045). The purpose of the control was to show that switching the learned vectors in the attention (which are fitted to all cells) did not yield a difference as their attention was learned to be very similar as MHC-I, dsDNA are similarly expressed across all cells (contrasted with a switch between CD 8 and pan keratin vectors).\\n\\n\\nCitations\\n[1] Yoshiko Iwai, Masayoshi Ishida, Yoshimasa Tanaka, Taku Okazaki, Tasuku Honjo, and Nagahiro Minato. Involvement of pd-l1 on tumor cells in the escape from host immune system and tumor immunotherapy by pd-l1 blockade. PNAS, 19:12293\\u201312297, 2002.\\n[2] RG Oshima. Apoptosis and keratin intermediate filaments. Cell Death and Differentiation, 9:486\\u2013 492, 2002.\\n[3] Keren, Leeat, et al. \\\"A structured tumor-immune microenvironment in triple negative breast cancer revealed by multiplexed ion beam imaging.\\\" Cell 174.6 (2018): 1373-1387.\\n[4] Weiping Zou. \\u201cImmunosuppressive networks in the tumour microenvironment and their therapeutic relevance.\\u201d Nature Reviews Cancer, 5, 263\\u2013274, 2005.\\n[5] M Egelblad, E Nakasone, Z Werb. 
\\u201cTumors as Organs: Complex Tissues that Interface with the Entire Organism.\\u201d Developmental Cell, 18(6), 884-901, 2010.\\n[6] Hendrik Ungefroren, Susanne Sebens, Daniel Seidl, Hendrik Lehnert & Ralf Hass. \\u201cInteraction of tumor cells with the microenvironment.\\u201d Cell Communication and Signalling, 9(18), 2011. \\n[7] Cancer Genome Atlas Network. \\\"Comprehensive molecular portraits of human breast tumours.\\\" Nature 490.7418 (2012): 61.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for their time and comments and would like to include the following:\\n\\n1. \\u201cIt would be great to extend the evaluation to other interactions and tissue types.\\u201d\\n\\nBecause MIBI only has 1 dataset available (as of November 2019), other tissues would be very difficult or impossible to compare/get. However, we can do further studies related to the different tumor scenarios across the patient dataset. We have added additional experiments in showing that we can recapitulate further established biological phenomena information beyond PD-1/PD-L1 [1], Pan-Keratin/CD8 [2] interactions (Section 5.2, 5.3). \\n\\nIn particular we added an experiment to demonstrate we can further recapitulate a study done on TNBC data captured by MIBI-TOF [3] (Section 5.4). We added macrophage, tumor, and various T-cell interactions in tumor infiltrated environments and tumor compartmentalized environments for different patients (Figure 1 (B)). We find that CCIGAN is able to not only recapitulate the results in tumor infiltrated and non infiltrated environments discovered in the TNBC data [3], but also quantify them at a subcellular level.\\n\\n2. \\u201cOverall the paper is well written, the application and especially the focus on cell-cell interactions is novel. The model is properly justified and evaluated, and there is a high demand for this framework in the multiplexed imaging field.\\u201d\\n\\nThanks for the positive comments! We have further refined and restructured the introduction to make our contributions more apparent, such as CCIGAN\\u2019s purpose of offering a model for quick in silico hypothesis testing of counterfactual cell-cell interaction scenarios. We have also moved the bulk of the evaluation criteria (section 5) into the appendix and combined Section 5 and 6 for easier reading to showcase more cell-cell interactions and their corresponding evaluation methods.\\n\\nCitations\\n[1] Yoshiko Iwai, Masayoshi Ishida, Yoshimasa Tanaka, Taku Okazaki, Tasuku Honjo, and Nagahiro Minato. Involvement of pd-l1 on tumor cells in the escape from host immune system and tumor immunotherapy by pd-l1 blockade. PNAS, 19:12293\\u201312297, 2002.\\n[2] RG Oshima. Apoptosis and keratin intermediate filaments. Cell Death and Differentiation, 9:486\\u2013 492, 2002.\\n[3] Keren, Leeat, et al. \\\"A structured tumor-immune microenvironment in triple negative breast cancer revealed by multiplexed ion beam imaging.\\\" Cell 174.6 (2018): 1373-1387.\"}",
"{\"title\": \"Overview of changes\", \"comment\": \"We would like to thank all the reviewers for their time and comments/suggestions. As a high level overview, we added an experiment from a clinical MIBI-TOF study [1] and demonstrate that CCIGAN is able to further recapitulate and quantify the discovered biology (Section 5.4 in addition to Sections 5.2, 5.3). We also addressed the biological concerns and clarity with our manuscript. Additionally, we have rewritten the introduction and related works to further clarify our contributions and motivations for image synthesis. Lastly, we combined sections 5 (evaluation metrics) and 6 (biological significance) into a single section (section 5) for easier reading.\\n\\n[1] Keren, Leeat, et al. \\\"A structured tumor-immune microenvironment in triple negative breast cancer revealed by multiplexed ion beam imaging.\\\" Cell 174.6 (2018): 1373-1387.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors present a GAN for multiplexed imaging (MIBI-TOF) data called CCIGAN. They propose an interesting architecture design with protein-specific attention to find association between cell types and neighboring pattens and cell-cell interactions. They also propose new and biologically interpretable metrics including a reconstruction metric, projected EMD and regressing of expression on neighbors.\\n\\nThey present improved reconstruction of interactions compared to other models in the context of PD-1 and PD-L1 interactions. It would be great to extend the evaluation to other interactions and tissue types.\\n\\nOverall the paper is well written, the application and especially the focus on cell-cell interactions is novel. The model is properly justified and evaluated, and there is a high demand for this framework in the multiplexed imaging field.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper applies the SPADE semantic image synthesis technique (with a custom attention mechanism) to MIBI-TOF data to examine hypotheses about cell-to-cell interactions in the context of an immune infiltrated tumor sample. I think that this is potentially an interesting application of GANs -- to generalize beyond specific gathered data and instead start engaging with counterfactual scenarios like \\\"what effect does it have to add cell type X next to cell type Y\\\".\\n\\nUnfortunately, I think this paper is not yet finished with regard to both (1) conveying sufficient motivation for the use of image synthesis for distilling biological insights and (2) evaluating the success or failure of the technique primarily in terms of generating useful biological insights. Simultaneously, there are enough red flags with regard to knowledge of the data generation and underlying biology that it's unclear if the authors could correctly sanity check any insights they extract from their model. \\n\\nMore specific criticisms & suggestions\\n\\n1) It would help to provide much clearer motivation for applying image synthesis to MIBI-TOF data. Section 1.2 skips straight from describing the MIBI-TOF instrument to \\u201cwe made a new kind of GAN\\u201d. Again in the beginning of the Related Work the paper states: \\\"We are interested in the task of generating biologically consistent expression patterns of cellular proteins given a segmentation map of cell neighborhoods\\u201d. But, why are synthetic images interesting given that we can actually look at real MIBI-TOF data. I start getting a sense of why this model might be interesting or useful only in Section 5 \\u2014 more rationale is required earlier in the paper. \\n\\n2) The evaluation section is hard to follow. I think it would be helpful to more clearly describe a larger set of biological phenomena that a practitioner would expect to see in the data, choose a single metric for each case, and show that that these phenomena are recapitulated. No one is going to trust a GAN to give them scientific insights unless they're very confident that all known / simple cell-to-cell interactions have a clear signal. \\n\\n3) \\\"MIBI-TOF bombards a tissue sample with elemental metals tethered to respective antibodies for dozens of distinct cellular proteins and detects each to obtain image channels\\u201d \\u2014 this is not an accurate description of MIBI-TOF, at least not the instrument I'm familiar with. Typically the tissue is first stained with antibodies tagged with heavy metals and the instrument then bombards the tissue sample with simple ions (like O2+), causing the release of metal ions. It seems very unlikely that bombarding anything with antibody/metal conjugates could be informative. 
\\n\\n4) \\\"Tumor cells could be identified by markers such as pan-keratin and beta-catenin\\u201d \\u2014 I guess in the context of a tumor sample beta-catenin could be over-expressed but it's present in pretty much every cell type, including lymphocytes (https://www.proteinatlas.org/ENSG00000168036-CTNNB1)\", \"small_nits\": [\"Repeated use of \\u201cantibodies\\u201d in \\\"Engineered antibodies for PD-1/PD-L1 antibodies\\u201d \\u2014 maybe better to write \\u201cAntibodies which block the interaction of PD-1/PD-L1\\\"\", \"\\\"While MIBI-TOF is capable of 36 different markers, we discarded uninformative and irrelevant markers1 resulting in M = 24.\\u201d \\u2014 I\\u2019m really surprised that someone put 12/36 uninformative markers in a MIBI-TOF panel. Aren\\u2019t these tagged antibodies expensive? Can we at least get a list of what got discarded?\", \"The model interpretability control seems weak in that lymphocytes are more likely to express MHC-I than tumor cells (which have a potential survival advantage from not expressing it) and tumor cells may actually have more dsDNA than lymphocytes due to changes in ploidy (or original differences in ploidy from e.g. liver cells).\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The manuscript proposed a new method to model the data generated by multiplexed ion beam imaging by time-of-flight (MIBI-TOF). Essentially, the model leans the many-to-many mapping between the cell types and different protein markers' expression levels. Compared with the other mainstream GAN methods, the authors show the proposed method, CCIGAN, can outperform them in terms of generating the expression map of different protein markers given the segmentation of cell types. The manuscript also has an in-depth discussion of the biological meaning of the learned model as well as the learned vectors.\\n\\nI personally like this manuscript a lot, considering the novelty and the thoroughness of the manuscript. However, I have the following concerns: \\n\\nMajor concern (The score will be significantly improved if the authors can handle these two concerns during revision): \\n1. My largest concern is how useful the proposed method is. It seems the wet experiments are mature. How can the computational synthesized image help the biologists then? Are there any cases that only the CCIGAN can do while the real experiment can not do? I am not an expert in MIBI-TOF, but I guess all the results in Section 6.2 can be easily achieved with the real data, right? \\n2. CCIGAN is indeed better than the other methods, but would it be accepted by the biologists in terms of performance? Would they believe in the results? In fact, in the last column of Figure 2, we can see clear artifacts for the results generated by CCIGAN.\", \"minor_concern\": \"1. What's the resolution of the MIBI-TOF and CCIGAN?\\n2. The introduction of the background could be further refined. The many-to-many mapping between different cell types and different protein markers is not emphasized explicitly. Readers can get lost easily. \\n3. The data part is not clear. How many M*2048*2048 images do the authors have for training as well for testing? \\n4. Why do the authors choose 64*64 as the image path size?\\n5. Can the model be generalized well for data collected in different experiments (i.e., different tissues) but from the same machine? Can the model be generalized well across different machines with the same imaging experimental setting?\\n6. Is the model sensitive to the preprocessing of the data, like normalization? As for as I know, the baseline expression level of tissue can vary significantly at different time points within one day. If the model is sensitive to that, it will affect the usage of the model.\"}"
]
} |
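The reviews in the CCIGAN record above refer to a "projected EMD" evaluation metric for comparing real and generated marker-expression maps. A minimal Python sketch of how such a metric could be computed is given below; this is an illustration only, not the authors' code, and the choice of projecting 2-D marker maps onto each axis before taking a 1-D earth mover's distance is an assumption.

```python
# A hedged sketch of a "projected EMD"-style metric, as referenced in the
# CCIGAN reviews above: compare a real and a generated expression map for one
# protein marker by projecting the 2-D maps onto each axis and taking the 1-D
# earth mover's distance between the resulting intensity profiles.
import numpy as np
from scipy.stats import wasserstein_distance

def projected_emd(real: np.ndarray, fake: np.ndarray) -> float:
    """`real` and `fake` are (H, W) non-negative expression maps for one
    marker; the projection axes are an assumption for illustration."""
    dists = []
    for axis in (0, 1):
        p, q = real.sum(axis=axis), fake.sum(axis=axis)
        # Normalize to probability mass so EMD compares shapes, not totals.
        p = p / (p.sum() + 1e-8)
        q = q / (q.sum() + 1e-8)
        positions = np.arange(p.shape[0])
        dists.append(wasserstein_distance(positions, positions, p, q))
    return float(np.mean(dists))
```

A smaller projected EMD would then indicate that the generated marker distribution is spatially closer to the real one, which is the behavior the reviewers ask to see verified on more interactions and tissue types.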
Byx5R0NKPr | Learning Calibratable Policies using Programmatic Style-Consistency | [
"Eric Zhan",
"Albert Tseng",
"Yisong Yue",
"Adith Swaminathan",
"Matthew Hausknecht"
] | We study the important and challenging problem of controllable generation of long-term sequential behaviors. Solutions to this problem would impact many applications, such as calibrating behaviors of AI agents in games or predicting player trajectories in sports. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are significant challenges that are unique to or exacerbated by generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated temporal behavior faithfully demonstrates diverse styles? In this paper, we leverage large amounts of raw behavioral data to learn policies that can be calibrated to generate a diverse range of behavior styles (e.g., aggressive versus passive play in sports). Inspired by recent work on leveraging programmatic labeling functions, we present a novel framework that combines imitation learning with data programming to learn style-calibratable policies. Our primary technical contribution is a formal notion of style-consistency as a learning objective, and its integration with conventional imitation learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that our learned policies can be accurately calibrated to generate interesting behavior styles in both domains. | [
"imitation learning",
"conditional generation",
"data programming"
] | Reject | https://openreview.net/pdf?id=Byx5R0NKPr | https://openreview.net/forum?id=Byx5R0NKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"r7uQN3HAQg",
"Bkeq0sQ5ir",
"H1gKdoQ9jH",
"BklnbjQ9sr",
"SyxOjqQcoS",
"SJgzQP-2FB",
"SJekg7fLKB",
"Skl6ZZOfKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723358,
1573694418487,
1573694320981,
1573694212056,
1573694111868,
1571718938467,
1571328742765,
1571090693237
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1441/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1441/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1441/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1441/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers generally reached a consensus that the work is not quite ready for acceptance in its current form. The central concerns were about the potentially limited novelty of the method, and the fact that it was not quite clear how good the annotations needed to be (or how robust the method would be to imperfect annotations). This, combined with an evaluation scenario that is non-standard and requires some guesswork to understand its difficulty, leaves one with the impression that it is not quite clear from the experiments whether the method really works well. I would recommend for the authors to improve the evaluation in the next submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Blind Review #3\", \"comment\": \"> \\u201c... credit assignment seems like the wrong word.\\u201d\\nYes, a significant benefit of learning a dynamics model is that it allows us to differentiate through the environment dynamics. While this is not exactly credit assignment in the RL sense (e.g. we do not learn the value of each action with a Q-network), the problem being solved is similar in nature, in that the policy receives an informative learning signal for each action taken.\\n\\n> \\u201cexact definition of style-consistency\\u201d\\nStyle-consistency is defined in Equation (4) and your understanding is correct: trajectories sampled from policies calibrated to a style should be consistent with the style. We will make this clear in the text.\\n\\n> \\u201cwhy style-consistency classification is better than mutual information baseline.\\u201d\\nOne of the main insights in this work is that style-consistency is the true objective we wish to optimize for, and we introduce an algorithm to optimize for it directly. On the other hand, mutual information maximization optimizes style-consistency indirectly in Equation (13). There is no guarantee that the learned discriminative network r_psi in Equation (13) matches the true labeling function lambda, and this is reflected in our experiments (in fact, mutual information maximization is marginally better than the other baselines relative to our method). \\n\\n> \\u201cdomains \\u2026 are fairly simple\\u201d\\nWhile one can, of course, work on much more challenging domains, we do think that our domains are already quite challenging. Generating style-consistent trajectories over 200 time-steps is highly non-trivial. We spent considerable effort in developing effective algorithmic approaches to address this \\u201ccredit assignment\\u201d problem (there were many non-obvious design choices), and will release the software implementation upon publication.\\n\\n> \\u201cassumes the dynamics of the environment are learnable\\u201d\\nYes, we agree that this is a strong assumption for the algorithm we presented in the paper. However, possible extensions for future work is to consider model-free approaches, such as learning a style-conditioned value network to guide the policy during training. Another possible direction is to only enforce style-consistency at higher-level representations, similar to ideas presented in [1, 2].\\n\\n> \\u201cassumes \\u2026 we can define labeling functions that cover the space of styles we want\\u201d\\nWe believe that this is a relatively weak assumption, as labeling functions are already prevalent in many real world applications. For example, sports analysts can define player roles based on positional heat maps [3]; aggressiveness in video games can be characterized by a player\\u2019s tendency to attack enemies [1]; and safety while driving can be quantified as the distance to the center of the lane [4]. There are many domains equipped with user-defined labeling functions that are naturally compatible with our framework. Furthermore, our framework paves the way for semi-supervised approaches that can derive some of the labeling functions automatically.\\n\\n[1] Customizing Scripted Bots: Sample Efficient Imitation Learning for Human-like Behavior in Minecraft, Broll et. al.\\n[2] Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings, Co-Reyes et. al.\\n[3] Fine-Grained Retrieval of Sports Plays using Tree-Based Alignment of Trajectories, Sha et. 
al.\\n[4] Batch Policy Learning under Constraints, Le et. al.\"}",
"{\"title\": \"Response to Blind Review #1\", \"comment\": \"> \\u201cInconsistencies: \\u2026 NLL results do not match the trajectories shown in Figures\\u201d\\nThe values we reported in Tables 4 and 12 are the average log-densities rather than the log-likelihoods. We apologize for the confusion and have clarified this in the text. See the discussion in the global comments for more details.\\n\\n> \\u201cInconsistencies: \\u2026 some equations do not reflect what is being optimized\\u201d\\nEquation (8) indeed had a typo: we optimize with C_psi and L^label instead of L^style. This has been corrected in the text.\\n\\n> \\u201cWeaknesses: \\u2026 quality of models\\u201d\\nWe\\u2019ve added Tables 14 and 15 in the appendix that report the test errors of our label approximators C_psi and dynamics models M_varphi respectively. The approximations are generally good enough for maximizing style-consistency (we hypothesize that better approximations will lead to better style-consistency, e.g. CURVATURE for basketball was harder to approximate and thus, CTVAE-style did not perform better than baselines). Note that label approximators C_psi are only used during training and not evaluation; we use the original labeling functions when computing style-consistency in our quantitative results.\\n\\n> \\u201cWeaknesses: \\u2026 diversity of policies\\u201d\\nWe have included histograms of labeling functions for basketball and Cheetah in Figures 5 and 6 in the appendix to visualize the diversity of styles in training demonstrations. In our first set of experiments (Section 6.1) we threshold the labeling functions such that the labels are uniformly distributed. In our second set of experiments (Section 6.2) we apply thresholds at fixed intervals, which can lead to highly peaked and non-uniform distributions for some labeling functions.\\n\\n> \\u201cWeaknesses: \\u2026 source of datasets should be clarified\\u201d\\nThe basketball dataset was collected from real NBA games, whereas we collected the Cheetah dataset ourselves by pre-training diverse policies (i.e. with slightly varying reward functions). The basketball dataset does not come with a simulator, but we assume a known dynamics function (next position = current position + velocity), which allows us to generate trajectories. See discussion about source of datasets in the global comments.\\n\\n> \\u201cWhat happens .. where users may specify hundreds of different labels?\\u201d\\nThis is a very challenging setting! We believe our framework lays the groundwork for studying such settings, and would love to aspire towards that in the future. For instance, a possible next step is to investigate how we can quickly calibrate to new styles without having to train a new policy from scratch; this can potentially have many connections with the original data programming paradigm, as well as multi-task learning.\\n\\n> \\u201c... resembles DIAYN [1] but with some grounding.\\u201d\\nYes, DIAYN [1] with grounding is similar to our CTVAE-mi baseline. DIAYN itself is a fully unsupervised algorithm in a reinforcement learning setting, but with no guarantee that a style a user cares about is represented among the skills that DIAYN learns. Our work focuses on the imitation learning setting, where we aim to calibrate to styles present in a collection of demonstration trajectories. 
We can make this clearer in the text.\\n\\n> \\u201cWhy report (only) the median over 5 seeds?\\u201d\\nWe expect that the practical use case of our framework is to run our algorithm over a few random seeds and then select the best one. From that perspective, we believe that the median best captures the reliability of our method in learning style-consistent policies. We also reported the min and max style-consistencies in Tables 8 and 9 of the appendix, which shows that our algorithm can sometimes have failure cases (in basketball). Understanding the cause of these failure cases and improving the stability of our algorithm are possible directions for future work. As per your request, we have included the mean and standard deviation style-consistencies as well in Tables 10 and 11 of the appendix. Note that low mean style-consistency (and high standard deviation) for CTVAE-style is exacerbated by the aforementioned failure cases. \\n\\n> \\u201cDo you have a train/valid/test split to choose hyperparameters?\\u201d\\nWe chose hyperparameters that appeared to work well for all baselines and kept them consistent for fair comparison. In a real application, we would perform a hyperparameter search to find the best training configuration.\\n\\n> \\u201cTable 1 \\u2026 report \\u2026 accuracy?\\u201d\\nYes, we report 1 - L^style, which can be interpreted as an accuracy. We mention this in the discussion of labeling functions used in our experiments.\\n\\n[1] Diversity is All You Need: Learning Skills without a Reward Function, Eysenbach et. al.\"}",
"{\"title\": \"Response to Blind Review #2\", \"comment\": \"> \\u201cCTVAE-style does not depend on any latent variable?\\u201d\\nWe apologize for the confusing notation. All policies in our experiments are CTVAEs conditioned on both latent variables and style labels. We\\u2019ve updated the paper to make this clear in the experiments section.\\n\\n> \\u201cCTVAE baselines are overkill ...\\u201d\\nLatent variable models generally improve imitation quality in sequential domains [1, 2, 3]. In particular, TVAE models have also been recently used in similar work for capturing diversity in behaviors [4, 5]. However, as also noted in our global comments, we emphasize that the choice of underlying policy model is orthogonal to our contributions, and we demonstrate this with an additional experiment where we train an RNN policy model instead. Table 13 in the appendix of the revised submission shows that even with a simpler RNN model, our algorithm still improves style-consistency.\\n\\n> \\u201cTable 4, ... results do not seem to match?\\u201d\\nThe values in Table 4 are correct. Note that NLD is the negative log-density instead of negative log-likelihood (see discussion in global comments). The NLD of CTVAE-style in Cheetah is also correct and indicates a tradeoff between imitation quality and style-consistency. We verify this in Table 12 when we show that more training iterations can improve imitation quality, but can sometimes degrade style-consistency.\\n\\n> \\u201c... continuous labels?\\u201d\\nYes, continuous labels also fall under our framework (e.g. by using mean-squared error for L^style). An interesting direction is to consider labeling functions that are already differentiable so we would not have to learn a differentiable approximator C_psi. We leave this for future work.\\n\\n[1] A Recurrent Latent Variable Model for Sequential Data, Chung et. al.\\n[2] Sequential Neural Models with Stochastic Layers, Fraccaro et. al.\\n[3] Z-Forcing: Training Stochastic Recurrent Neural Networks, Goyal et. al.\\n[4] Robust Imitation of Diverse Behaviors, Wang et. al.\\n[5] Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings, Co-Reyes et. al.\"}",
"{\"title\": \"Global comments to all reviewers\", \"comment\": \"We thank all reviewers for their insightful comments -- based on their feedback, we have revised the writing and added new experimental results that improve the paper in important ways. Our revised version has been uploaded. Here, we address common concerns (e.g. datasets, log-likelihood metrics), and we also respond to each review individually.\\n\\n> Core Contributions\\n\\nWe first clarify our core contributions. \\n\\n1) The first contribution is strategic: what systematic form of domain knowledge can we leverage to quickly and cleanly extract style information from raw behavioral data? Our contribution is to leverage programmatic labeling functions crafted by domain experts. Such labeling functions are readily available in many domains, and from that perspective, we view our work as opening new directions of research by studying how to formally and systematically leverage such information. In that way, this contribution bears affinity to research that studies multimodal embeddings (e.g., how to systematically integrate image and audio). \\n\\n2) The second contribution is formulaic: how can we formalize the learning objective to encourage learning style-calibratable policies? Our contribution is to formulate a new metric called style-consistency that captures and optimizes for the task objective: that trajectories sampled from a policy calibrated to a style should be consistent with that style. Previous approaches use indirect methods for encouraging such notions of consistency, and we show that directly enforcing it can be very beneficial. Furthermore, the underlying choice of the imitation learning algorithm is orthogonal to style-calibration, and we have included an additional experiment with a different underlying imitation learner (Appendix:Table 13) to verify this.\\n\\n3) The third contribution is algorithmic: how do we design practical learning approaches that reliably optimize the learning objective? Our contribution is an effective model-based approach for optimizing style-consistency over trajectories. While the specific ingredients of our approach are each well-known, many of the design choices were non-obvious a priori, and we will release our implementation upon publication.\\n\\n> Datasets\\n\\nWe chose 2 domains that have different problem characteristics -- Basketball does not have a simulator but plausibly admits a simple dynamics function (next position = current position + velocity); MuJoCo has an unknown dynamics function but lets us simulate trajectories. Furthermore, the Basketball dataset exhibits genuine real-world diversity in player behavior, while MuJoCo trajectories have less diverse agent behaviors (outlined below).\\n\\nThe basketball dataset was collected by STATS, a company that tracks real players from NBA games (~40 games in our case). Different players have different play-styles; for instance, a center or power forward may spend more time around the basket, while a faster player may cover more distance on the court. We describe these styles with DESTINATION and SPEED labeling functions respectively. We have added the distributions of styles returned by the labeling functions (Appendix:Figure 5) to visualize the diversity in the dataset.\\n\\nFor the MuJoCo Cheetah domain, we follow a similar procedure as [1] to collect diverse behaviors -- we pre-train Cheetah policies to walk at various speeds (the distribution of style labels is visualized in Appendix:Figure 6). 
\\n\\n[1] Robust Imitation of Diverse Behaviors, Wang et. al.\\n\\n> Log-likelihood\\n\\nThe values in Tables 4 and 12 are more accurately described as log-densities; we followed terminology from previous work in sequential generative models [2, 3, 4] that reports this as \\u201clog-likelihoods\\u201d. Log-density of actions at each timestep is computed with respect to the parameters of a Gaussian distribution output by our policy. Optimizing the CTVAE imitation learning objective will push the policy to assign higher density to actions observed in the training set. \\n\\nAs log-densities, the values in Tables 4 and 12 can be correctly interpreted. For instance, the average log-density per timestep in basketball is -7.9, which means the density is e^(-7.9) = 3.7e-4. The average variance output by the policy on test trajectories is 2e-5 per dimension. Thus, the average mean-squared error is roughly 2.16e-5 per dimension, which indicates that the imitation learning objective is indeed respected. \\n\\nWe apologize for the confusion. Tables 4 and 12 now scale log-densities per timestep, and we will update the text to clarify this.\\n\\n[2] A Recurrent Latent Variable Model for Sequential Data, Chung et. al.\\n[3] Sequential Neural Models with Stochastic Layers, Fraccaro et. al.\\n[4] Z-Forcing: Training Stochastic Recurrent Neural Networks, Goyal et. al.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a weak supervision method to obtain labels from functions that are easily programmable, and propose to use this for learning policies that can be \\\"calibrated\\\" for specific style. The paper demonstrates some experiments on a basketball environment and a halfcheetah environment, showing that the agent will perform according to corresponding styles.\", \"my_main_concern_here_is_the_technical_novelty_of_the_proposed_method\": \"it seems that once we have the labels (which are limited to programmable functions), all we need to do is to learn a policy that conditions on the labels. In this case, we are not concerned with the latent variables whatsoever, therefore it seems that the CTVAE baselines are overkill for the task (learning latent variables that are not actually needed). Maybe more interesting baselines is to see how the two terms in (8) affect self-consistency performance, and not consider any methods that use unsupervised latent variables?\", \"minor_questions\": [\"The method's name, CTVAE-style is a bit confusing, since the policy does not depend on any latent variable z? At least from how the policy is described pi(\\\\cdot |y) does not depend on unsupervised latent variables z.\", \"Table 4, KL and NLL results do not seem to match? I wonder if the basketball kl should be multiplied by 10 and the cheetah ctval-style NLL is a typo?\", \"Is it possible to extend this to continuous labels? This seems technically viable but unclear empirically.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method to train style-conditional policies. The method has three components, a dynamics model, a labeling procedure and its approximation, and a policy. The model is trained on real trajectories, while the policy is trained on both real and simulated data coming from the model. The policy is trained to both imitate and be sensitive to the style labelling of states.\\nSuch a method yields policies that can be executed with styles chosen externally, e.g. by a user.\\n\\nThis paper proposes a novel method that is an interesting take on imitation learning, but it is hard to judge how relevant this method is, as the paper has several inconsistencies and weaknesses that need to be resolved before it is accepted.\", \"inconsistencies\": \"the reported NLL results do not match the trajectories shown in Figures. Some equations do not seem to reflect what is being optimized.\", \"weaknesses\": \"Many quantities that should be reported are not: quality of models, diversity of policies, etc. The source of datasets should be clarified. (see detailed comments)\\n\\nIt's interesting that this method can leverage these very sparse or poor quality annotations, but it would be helpful to get a sense of how good the annotations provided here are. What's the threshold were annotation's quality is too low to be helpful? In the general case I suspect this threshold to be higher than what is hinted at in this paper, especially given Table 2. What happens in more realistic settings where e.g. users may specify hundreds of different labels?\", \"detailed_comments\": [\"how dependent on having a variety of policies is this method? It's not clear how diverse the set of policies coming from the basketball dataset nor the Cheetah policies is, nor how this diversity affects learning. An experiment with explicitly different levels of diversity would strengthen understanding of this method. For Cheetah, it would be easy to report p(y) as a function of the target forward speed, that would give readers a sense of diversity for each label.\", \"For something like cheetah, with the labels that you propose being a very simple function of the state space, this somewhat resembles DIAYN [1] but with some grounding. Would it make sense to compare to a similar baseline?\", \"(Table 1) Why report (only) the median over 5 seeds? Papers usually report means. Plus, an indication of variance would be nice.\", \"It's not clear what the basketball dataset is. Where does it come from? Does it come with a simulator? If not, how do you evaluate style consistency or run step 11-12 of Alg. 2? Do you have a train/valid/test split to choose hyperparameters? (the appendix only suggests a train/test split)\", \"Why not cite MuJoCo? [2]\", \"For Figure 2 & 3, I suggest lowering the transparency/alpha value of the trajectories, as there is a lot of overlap.\", \"Table 1 is somewhat confusing. In (4) and the paragraph thereafter, you define \\\\mathcal{L}^{style} as an error rate, i.e. when \\\\lambda(\\\\tau) \\\\neq y, but in Table 1 you seem to report instead accuracy? (i.e. 
when \\\\lambda(\\\\tau) = y) If so, then you are reporting percentages, so \\\\times 10^2 rather than 10^-2.\", \"You never report how well C_\\\\psi is doing, and it's not clear to me why C_\\\\psi is needed at all. When optimizing (8) are you directly treating (8)'s inner terms as negative rewards which is differentiated wrt \\\\pi's parameters? If so, you are doing a form of DDPG, but there is a problem: after (4) you mention that L^style is not differentiable, meaning C_\\\\psi doesn't provide gradient information to \\\\pi. I assume that you acutally optimize (8) with L^label? Whether that's what you're doing or not, it should be clarified. If you are truly using L^style, then it's not clear why C_\\\\psi is needed, as there is no differentiability anyways, and simply using \\\\lambda directly will provide more signal.\", \"I'm somewhat perplexed by the values of Table 4. A log-likelihood of -190 represents a probability of 10^-83 (a _negative_ LL of -190 is, on the other hand, impossible by virtue of logs and probabilities, but I assume it is a \\\"typo\\\"). What is this the probability of? Entire trajectories? If so it would make more sense to report the _average_ log-likelihood, i.e. per timestep, because at this point in the paper, readers have no sense of how long trajectories are, and thus what these likelihoods represent. (for example, a NLL of 190 could be an average likelihood of 0.5 for 275 steps, or of 0.1 for 83 steps, or of .8 for 850 steps, which are all very different results! According to the appendix Table 5, the basketball trajectories are 25 steps long, which would mean that the imitation objective is not respected at all, exp(-190/25)=.0005, which would mean that pi(a|s) is .0005 on average.)\", \"Again Table 4, if the \\\"-\\\" is indeed a typo, then CTVAE-style is actually performing better than the baselines, especially for Cheetah (normally one wants negative log-likelihood to be as close to 0 as possible). Same for Table 10, if the NLL column is truly actually log-likelihood, then the \\\"style+\\\" objective really degrades imitation quality rather than improves it.\", \"[1] Diversity is All You Need: Learning Skills without a Reward Function, Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine\", \"[2] MuJoCo: A physics engine for model-based control, Emanuel Todorov, Tom Erez Yuval Tassa\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose learning generative models for long-term sequences that can take a style argument and generate trajectories in that style. The motivating example used is reconstructing expert trajectories from basketball games - trajectories can be sampled based on whether we want fast movement (SPEED), or whether they end close to the basket (DESTINATION).\\n\\nIt follows a data programming paradigm. We do not directly have style labels, so to get around this, we define labeling functions, which take a trajectory and output some boolean value. (Real valued labels are allowed but are not considered in this work). We assume the desired style is defined by some combination of labels, and that we know this combination (i.e. a fast trajectory to the basket should have the \\\"speed above threshold c\\\" label and \\\"final location close to basket\\\" label, which we have labeling functions for.)\\n\\nOnce we have this labeling function, we learn a trajectory VAE with a few loss functions. Standard behavioral cloning loss, and a style consistency loss that encourages labels of the generated trajectory to match labels of the target style. To make the optimization fully differentiable, we approximate non-differentiable labels with a learned labeling function (i.e. learn a classifier and then use classifier probabilities as the label), and learn a model of the environment to allow backprops through the rolled-out dynamics model for the entire trajectory. It's argued that learning a model helps with credit assignment, from an RL perspective credit assignment seems like the wrong word. Nothing about learning the model makes it easier to assign credit, the main gain it gives is making the problem differentiable.\\n\\nOn the basketball dataset, and a dataset of episode collected from the HalfCheetah MuJoCo environment, they demonstrate better style consistency. I had trouble finding an exact definition of style-consistency here - I assume it's defined as \\\"given style c, how often does a trajectory sampled from pi(a|s,c) satisfy style c\\\". It would be good to define this.\\n\\nI would appreciate discussion on why style consistency classification is better than the mutual information baseline where MI between style labels and trajectory is maximized, it feels like they should be equivalent.\\n\\nOverall I think this is a reasonable paper. The domains considered are fairly simple, but the idea seems sounds and the results seem good. I am concerned at all the requirements though - the method assumes the dynamics of the environment are learnable, and that we can define labeling functions that cover the space of styles we want, both of which seem like strong requirements.\"}"
]
} |
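The record above repeatedly refers to the style-consistency objective (Equation 4 of the paper): trajectories sampled from a policy calibrated to a style label y should be mapped back to y by the labeling function. Below is a hedged PyTorch sketch of how such a loss could be optimized through a learned differentiable dynamics model and a learned label approximator C_psi, as the author responses describe; `policy`, `dynamics`, and `label_approx` are assumed interfaces, not the authors' implementation.

```python
# A sketch of a style-consistency loss: roll out a style-conditioned policy
# through a learned differentiable dynamics model, then score the trajectory
# with a learned approximation C_psi of the (non-differentiable) labeling
# function lambda. Cross-entropy is a surrogate for the 0/1 loss L^style.
import torch
import torch.nn.functional as F

def style_consistency_loss(policy, dynamics, label_approx, s0, y, horizon=25):
    # y is assumed to be a (batch,) tensor of class indices that the
    # policy embeds internally; this interface is an assumption.
    states, s = [s0], s0
    for _ in range(horizon):
        a = policy(s, y)      # style-conditioned action
        s = dynamics(s, a)    # differentiable model keeps gradients flowing
        states.append(s)
    traj = torch.stack(states, dim=1)   # (batch, horizon + 1, state_dim)
    logits = label_approx(traj)         # C_psi: trajectory -> label logits
    return F.cross_entropy(logits, y)
```

In training, this term would be added to the imitation objective; at evaluation time, the responses note that the original labeling function lambda, not C_psi, is used to measure style-consistency.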
H1x9004YPr | Contextual Temperature for Language Modeling | [
"Pei-Hsin Wang",
"Sheng-Iou Hsieh",
"Shieh-Chieh Chang",
"Jia-Yu Pan",
"Yu-Ting Chen",
"Wei Wei",
"Da-Cheng Juan"
] | Temperature scaling has been widely used to improve performance for NLP tasks that utilize Softmax decision layer. Current practices in using temperature either assume a fixed value or a dynamically changing temperature but with a fixed schedule. Little has been known on an optimal trajectory of temperature that can change with the context. In this paper, we propose contextual temperature, a mechanism that allows temperatures to change over the context for each vocabulary, and to co-adopt with model parameters during training. Experimental results illustrated that contextual temperature improves over state-of-the-art language models significantly. Our model CT-MoS achieved a perplexity of 55.31 in the test set of Penn Treebank and a perplexity of 62.89 in the test set of WikiText-2. The in-depth analysis showed that the behavior of temperature schedule varies dramatically by vocabulary. The optimal temperature trajectory drops as the context becomes longer to suppress uncertainties in language modeling. These evidence further justified the need for contextual temperature and explained its performance advantage over fixed temperature or scheduling. | [
"natural language processing",
"language modeling",
"sequence modeling",
"temperature scaling"
] | Reject | https://openreview.net/pdf?id=H1x9004YPr | https://openreview.net/forum?id=H1x9004YPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BiZnkni1lo",
"Byxh9hH2or",
"Bkx_JJQtor",
"H1xvj9zFjS",
"SyeltUfKiH",
"Sye-x7zKsH",
"Ske6l8RJjS",
"H1em6VUatr",
"rJlIaog6YS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723330,
1573833875987,
1573625568067,
1573624479334,
1573623415859,
1573622504993,
1573017076893,
1571804346940,
1571781565557
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1440/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1440/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1440/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"With an average post author response score of 4 - two weak rejects and one weak accept, it is just not possible for the AC to recommend acceptance. The author response was not able to shift the scores and general opinions of the reviewers and the reviewers have outlined their reasoning why their final scores remain unchanged during the discussion period.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper Update\", \"comment\": \"We appreciate the constructive feedback of every reviewer. We have thoroughly refined the paper: grammar errors are corrected, sections including abstract, introduction and experiments are retouched, and appendix is added to provide more clear explanations.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"First of all, we thank the reviewer for the feedback.\\n\\nTemperature scaling as a technique to control the smoothness of the softmax output is widely used in NLP, as can be seen from the large body of literature that we have surveyed in our paper. We have examined each of them but we found it generally difficult to justify the method from a theoretical perspective: either on its convergence or the property of the loss function that it leads to. However, even if theoretical analysis is difficult, its empirical performance of temperature has been verified by a large body of work in NLP. \\n\\nSimilar to the situation of the temperature in general, our model, which builds on a highly nonlinear transformation of inputs, is difficult to generate theoretical guarantees. However, given the large amount of empirical evidence, it is unlikely that the effectiveness of this approach is a coincidence. We believe the significance of the proposed contextual temperature is that it provides a more general view of the temperature mechanism. And its effectiveness can be demonstrated in a wide range of NLP tasks. If the reviewer has any further suggestions on the theoretical analysis, we would love to know.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"First of all, we thank the reviewer for the feedback.\\n\\n(Q1) The idea of dynamic temperature scaling has been tried in other works and tasks (e.g., attended temperature scaling). The paper parameterizes this mechanism with DNNs for the language model. Though the idea looks interesting, it fails to explain why the scaling is better than other dynamic temperature scaling frameworks. \\n\\n(A1) We appreciate the feedback from the reviewer. To the best of our knowledge, we are the first work to learn a different temperature for each token based on the context. We have done a comprehensive survey on related works, but we haven\\u2019t find any similar work. To answer the reviewer's question, the proposed method distinguishes from the other attended temperature scaling since attended temperature scaling paper learns a temperature that is universal for all the classes (tokens). Contextual temperature, on the other hand, learns a different temperature for each token and is thus a more general approach. To make it more clear to readers on the differences between our method and other related ones, we plan to modify the paper to emphasize the differences.\\n\\n(Q2) The experiments are not solid. The baseline only includes Mos, which is not very strong. To validate whether this approach works with other LM of high-order attention or self-attention, a better baseline model is required (e.g., transformer, GPT). \\n\\n(A2) We would like to point out that MoS is the state-of-the-art model on language modeling on the Penn Treebank dataset and WikiText-2 dataset. The Transformer-XL model, which is based on the transformer architecture, actually performs worse than the MoS model in these two datasets. To make the comparison clear, below we've put a summary of the comparisons between these baselines and our approach. Although the Transformer-XL model performs worse than MoS in the paper, we do agree that it should be added as a comparison to the paper. We will revise it in an updated version. \\n\\nModel | validation ppl | test ppl\\nCT-MoS (ours) | 55.31 | 53.20\\nMoS \\t\\t\\t| 56.54 | 54.44\\nTransformer-XL | 56.72 | 54.52\\nGPT-2 (w/ extra training data and significantly larger model params) |-| 35.76\\n\\nFinally, the GPT model works on a very different setting than the ones found in the mainstream language model research. In the GPT paper, it utilizes a large dataset that is collected outside the domain of the language modeling. We argue that a comparison between GPT and the other baselines would be unfair as the standard setting of language modeling do not use additional datasets. As the proposed contextual temperature method aims at improving language model in the standard setting without the use of additional dataset, we believe that it would be more appropriate to compare against baselines under the same setting.\\n\\n(Q3) I would like to see this technique can help either NLU or NLG tasks, instead of just pure modeling.\\n\\n(A3) We would like to point out that the goal of our paper is to study language modeling other than its performance on the downstreaming NLU or NLG tasks. It is true that many language models such as BERT and XLNet are designed specifically for boosting the performance of downstream NLU and NLG tasks, models that study language modeling such as MoS focus purely on the performance of the language model itself. 
To this end, we feel it is important to separate these two groups of research as they fundamentally serve as different goals. We will clarify the differences on an updated version of the paper.\\n\\n(Q4) The case analysis section needs more examples instead of just cherry-picking few.\\n\\n(A4) We have provided more examples in Appendix A. Hopefully that can provide more insights on the methods. We will opensource the codebase upon the acceptance of this paper to allow the examinations of more examples.\"}",
"{\"title\": \"Response to AnonReviewer1 [2/2]\", \"comment\": \"(Q3) Also, I don't see the thermodynamics connection and find calling the proposed method `temperature` a bit misleading.\\n\\n(A3) We follow the naming convention in related previous works we\\u2019ve known [1, 2, 3] and call our method \\u201ctemperature\\u201d. As mentioned in [2], the connection between temperature scaling (in deep learning domain) and thermodynamics can be found in statistical mechanics [4]. We are willing to hear if there are any advice about the naming. \\n\\n[1] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. 2015.\\n[2] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. 2017.\\n[3] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. Toward controlled generation of text. 2018.\\n[4] Jaynes, Edwin T. Information theory and statistical mechanics. 1957.\\n\\n(Q4) Adding onto above. [1] discusses the low-rank bottleneck of using a single softmax. Since elementwise matrix product can blow up the rank, how do the authors think the proposed method can serve as a more efficient way to deal with the softmax bottleneck?\\n\\n(A4) We thank the reviewer for the great feedback. According to rank inequality( R(A\\u0966B)\\u2264R(A)R(B) ), element-wise matrix product indeed will potentially increase the rank. This is a new theoretical direction for the proposed contextual temperature, and we will study more in-depth in this direction and update the manuscript when having concrete conclusion and/or findings. Due to the limited time of ICLR rebuttal, we are not able to finish the analysis before the deadline. However, we will keep working on finding the evidence of this conjecture as the reviewer suggested. \\n\\n(Q5) Last but not least, the paper can be improved a lot if the authors can thoroughly polish the writing.\\n\\n(A5) Thank you for the advice. We have identified several spots in the paper that we can further polishing. Additionally, we have also improved the introduction section as well as the analysis section. We will keep looking for potential issues in writing. Here we list a few of changes we have made:\", \"in_abstract\": \"co-adopt => co-adapt\", \"in_introduction\": \"exiting work => existing methods\\n\\\"explored the vocabulary differences when adjusting temperature\\\" => \\\"explored the differences among vocabulary tokens when adjusting temperature\\\"\\n\\\"tends to be heating up\\\" => \\\"tends to heat up\\\"\\n\\\"This suggests that temperature mechanism helps to promote stochasticity early in\\nthe sentence while suppressing uncertainties when the context gets longer\\\" => \\\"This suggests that the temperature mechanism helps promote stochasticity early in the sentence, and suppress uncertainties when the context gets longer.\\\"\\ndealing with these phenomenons => dealing with these phenomena.\"}",
"{\"title\": \"Response to AnonReviewer1 [1/2]\", \"comment\": \"First of all, we thank the reviewer for the constructive feedback.\\n\\n(Q1) Eq. 5. The temperature scalar for each token competes with each other, since they are calculated with a softmax (and then rescaled). Another way is to use, e.g., a sigmoid function. Can the authors explain the motivation behind the use of softmax?\\n\\n(A1) We agree with the reviewer that using a softmax to control the temperature of each token will make these temperatures compete with each other (since they have to sum up to 1). As the reviewer suggested, we conduct more experiments to compare softmax with tanh and sigmoid. Experiment results show that using softmax achieves the lower perplexity compared to the other two functions\\u2014softmax: 54.69, sigmoid: 57.74, and tanh: 58.89 (on the test set of the PTB dataset). We conjecture that the relationship among different tokens represents a certain kind of competitiveness as only a few tokens share similar semantics as the ground-truth token should be generated in a sentence. We appreciate the suggestion from the reviewer. \\n\\n(Q2) Another view of the proposed method is that it learns a context-dependent weighting of the tokens in the vocabulary, such that \\\"important\\\" tokens (those with smaller \\\\tau) receive more gradient updates. Can the authors comment on this? \\n\\n(A2) We appreciate the reviewer's insights on the relationship between the importance of tokens and the magnitude of the gradients from the contextual temperature. As each token is represented by an embedding that has multiple dimensions, we believe \\u2018more gradient updates\\u2019 mentioned by the reviewer actually refers to larger gradient norms. If that's the case, we generally agree with the reviewer's insights. \\n\\nIn the paper, we observe that common tokens ('<eos>\\u2019, \\u2018of\\u2019, \\u2018the\\u2019, ...) receive the temperature that increases dramatically during the course of training (please refer to Figure 1a in the paper), which effectively scales down the corresponding logits. We refer these tokens as \\u2018unimportant\\u2019 tokens to contrast the rest of tokens (referred to as \\u2018important\\u2019 tokens). We then calculate the average gradient norm of \\u2018important tokens\\u2019 and repeat the same procedure for \\u2018unimportant\\u2019 tokens. The results are provided below. \\n\\nTo confirm the reviewer's conjecture, we calculated the average norm of gradients with respect to the embedding parameters. We calculated the results separately for the case when `important tokens' are the ground truth and the case when `unimportant tokens` are ground truth. Results are shown in the next table. \\n\\nWe note that, when the ground truth belongs to \\u2018important tokens\\u2019, the average gradient norm of \\u2018important tokens\\u2019 is larger. Same story for unimportant tokens: When the ground truth belongs to unimportant tokens, the average gradient norm of unimportant tokens is larger. In other words, depending on the ground truth belonging to important or unimportant tokens, applying contextual temperature seems to make the corresponding type of tokens receive a larger gradient norm. \\n\\n(1) If the ground truth belongs to \\u201cimportant tokens\\u201d, then\\ngradient norm of \\\"important\\\" tokens\\t \\tavg. 
\\\\tau of \\u201cimportant\\u201d tokens\\t\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n 0.260164 (a)\\t\\t\\t 2.0001514\\n\\ngradient norm of \\\"unimportant\\\" tokens\\t avg. \\\\tau of \\u201cunimportant\\u201d tokens\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n 0.045674 (b) 2.0696018 \\n\\n(2) If the ground truth belongs to \\u201cunimportant tokens\\u201d, then\\ngradient norm of \\\"important\\\" tokens\\t\\tavg. \\\\tau of \\u201cimportant\\u201d tokens\\t\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n 0.280090 (c)\\t\\t\\t 2.0001671\\n\\ngradient norm of \\\"unimportant\\\" tokens\\t avg. \\\\tau of \\u201cunimportant\\u201d tokens\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n 0.491596 (d) 2.047023\\n\\nHere, each of the entries is calculated as the following, where y represents samples, L2(*) represents the norm function, \\u2207(y,\\u0398) represents the gradient vector of sample y and parameter \\u0398, \\u0398 represents a parameter and E is the expectation function. \\n \\na = E_{y in important} E_{\\u0398 in important} L2(\\u2207(y,\\u0398))\\nb = E_{y in important} E_{\\u0398 in unimportant} L2(\\u2207(y,\\u0398))\\nc = E_{y in unimportant} E_{\\u0398 in important} L2(\\u2207(y,\\u0398))\\nd = E_{y in unimportant} E_{\\u0398 in unimportant} L2(\\u2207(y,\\u0398))\"}",
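A sketch of how the a/b/c/d gradient-norm statistics in the table above could be computed: average the L2 norms of the loss gradient over embedding rows belonging to the "important" versus "unimportant" token sets, separately for batches whose ground-truth tokens fall in each set. The `model.embedding` layout and the callable signatures below are assumptions for illustration, not the authors' code.

```python
# Hedged PyTorch sketch of the per-group gradient-norm diagnostic: compute
# the loss gradient w.r.t. the embedding matrix, take the L2 norm of each
# embedding row, and average the norms within each token partition.
import torch

def avg_grad_norms(model, loss_fn, batch, important_ids, unimportant_ids):
    model.zero_grad()
    loss_fn(model, batch).backward()
    grad = model.embedding.weight.grad   # (vocab, dim); layout is assumed
    row_norms = grad.norm(dim=1)         # L2 norm per embedding row
    return (row_norms[important_ids].mean().item(),
            row_norms[unimportant_ids].mean().item())
```

Running this once on batches whose targets are important tokens and once on batches whose targets are unimportant tokens would reproduce the 2x2 structure of quantities (a), (b), (c), and (d) above.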
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a strategy to automatically adjust the temperature scaling based on the context of words in a sentence for NLP. Experiments demonstrate that this approach can significantly improve perplexity scores on several datasets popular for NLP.\\n\\nNLP is not an area of research I'm very familiar with so this review is limited to my understanding of temperature scaling as a general technique to improve learning. As described in the paper, temperature scaling is a type of hyper-parameter estimation that adjusts the sensitivity of the softmax function as training evolves. The paper proposes to learn a function that given context, adjust the temperature automatically. This can be seen as a meta-learning method. \\n\\nI believe this can be a useful technique but before considering such an approach as a general strategy, more theoretical insights should be provided. The authors report on ablation studies that demonstrate some empirical benefits. However, until I see more theoretical analysis on how the method improves convergence or lead to better losses by smoothing out the output of the objective function, I remain skeptical of the usefulness of this as a general training method.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed a contextual temperature scaling to improve language modeling. The temperature model is parameterized using a deep neural network. Experiments on the language modeling datasets show some effects of the method.\\n\\nThe idea of dynamic temperature scaling has been tried in other works and tasks (e.g., attended temperature scaling). The paper parameterizes this mechanism with DNNs for the language model. Though the idea looks interesting, it fails to explain why the scaling is better than other dynamic temperature scaling frameworks. \\n\\nThe experiments are not solid. The baseline only includes Mos, which is not very strong. To validate whether this approach works with other LM of high-order attention or self-attention, a better baseline model is required (e.g., transformer, GPT). I would like to see this technique can help either NLU or NLG tasks, instead of just pure modeling. The case analysis section needs more examples instead of just cherry-picking few.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This work proposes a learned and context dependent way to calculate the temperatures for the softmaxes. More specifically, a low-rank affine-transformation, taking the hidden state at the current step as input, is used to calculate scalar weighting for every token in the vocabulary. The method is very general, and can be used in combination with other techniques in tasks such as language modeling and text generation. Experiments on language modeling with Penn TreeBank and WikiText-2 show that the proposed method yields strong performance.\\n\\nOverall I found the paper well-motivated and easy to follow. The empirical results are solid and strong. The analysis is also interesting. I vote for an acceptance, if the authors can polish the writing.\", \"details\": [\"Eq. 5. The temperature scalar for each token competes with each other, since they are calculated with a softmax (and then rescaled). Another way is to use, e.g., a sigmoid function. Can the authors explain the motivation behind the use of softmax?\", \"Another view of the proposed method is that it learns a context-dependent weighting of the tokens in the vocabulary, such that \\\"important\\\" tokens (those with smaller \\\\tau) receive more gradient updates. Can the authors comment on this? Also, I don't see the thermodynamics connection and find calling the proposed method `temperature` a bit misleading.\", \"Adding onto above. [1] discusses the low-rank bottleneck of using a single softmax. Since elementwise matrix product can blow up the rank, how do the authors think the proposed method can serve as a more efficient way to deal with the softmax bottleneck?\", \"Last but not least, the paper can be improved a lot if the authors can thoroughly polish the writing.\", \"[1] Breaking the Softmax Bottleneck: A High-Rank RNN Language Model. https://arxiv.org/abs/1711.03953\"]}"
]
} |
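From the description in the record above (notably Review #1 and answer A1), contextual temperature computes a per-token temperature from the current hidden state via a low-rank affine map followed by a softmax, and applies it elementwise to the logits. A hedged PyTorch sketch follows; the rescaling of the softmax weights into a usable temperature range is an assumption, since the exact rescaling used in the paper is not specified here.

```python
# A sketch of a contextual-temperature layer: low-rank map of the hidden
# state -> softmax over the vocabulary -> rescaled per-token temperatures
# that divide the logits elementwise before the final softmax.
import torch
import torch.nn as nn

class ContextualTemperature(nn.Module):
    def __init__(self, hidden_dim, vocab_size, bottleneck=64,
                 tau_min=0.5, tau_max=2.0):
        super().__init__()
        # Low-rank factorization keeps the parameter count manageable.
        self.down = nn.Linear(hidden_dim, bottleneck)
        self.up = nn.Linear(bottleneck, vocab_size)
        self.tau_min, self.tau_max = tau_min, tau_max

    def forward(self, hidden, logits):
        # Softmax makes the per-token temperatures compete (see Q1/A1 above).
        w = torch.softmax(self.up(self.down(hidden)), dim=-1)
        # Rescale simplex weights so the mean temperature is ~1, then clamp
        # into an assumed range; the paper's exact rescaling may differ.
        tau = (w * w.size(-1)).clamp(self.tau_min, self.tau_max)
        return torch.softmax(logits / tau, dim=-1)
```

Replacing the softmax over `self.up(self.down(hidden))` with a sigmoid or tanh would give the alternative gatings the authors compare against in A1 above.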
H1eY00VFDB | Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks | [
"Ayush Chopra",
"Surgan Jandial",
"Mausoom Sarkar",
"Balaji Krishnamurthy",
"Vineeth Balasubramanian"
] | Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce retrospection loss to improve the performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types and network architectures. | [
"Deep Neural Networks",
"Supervised Learning",
"Classification",
"Training Strategy",
"Generative Adversarial Networks",
"Convolutional Neural Networks"
] | Reject | https://openreview.net/pdf?id=H1eY00VFDB | https://openreview.net/forum?id=H1eY00VFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"dSb6fWeC0N",
"SJgKOpYNsr",
"HkeQS6tEor",
"SyejA3FNjB",
"SyeRYntEiB",
"rkldcPYViB",
"H1xvLwKEoH",
"rygRVjNAFr",
"rJePCn43YB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798723300,
1573326193001,
1573326138673,
1573326035200,
1573325958075,
1573324688199,
1573324622627,
1571863350443,
1571732687305
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1439/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1439/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces a further regularizer, retrospection loss, for training neural networks, which leverages past parameter states. The authors added several ablation studies and extra experiments during the rebuttal, which are helpful to show that their method is useful. However, this is still one of those papers that essentially proposes an additional heuristic to train deep news, which is helpful but not clearly motivated from a theoretical point of view (despite the intuitions). Yes, it provides improvements across tasks but these are all relatively small, and the method is more involved. Therefore, I am recommending rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response for Reviewer #1 (Part 4/4)\", \"comment\": \"\", \"q\": \"\\u201c..no warm-up period used for the GAN experiments?..\\u201d\", \"r\": \"We believe that since GANs are inherently unstable and do not train to a fixed target, the warm-up period is unlikely to have an impact. Hence, we reported experiments in our original submission with a warm-up period of 0 epochs which resulted in performance improvement (max IS value) by faster convergence (Fig 3, Fig 4). Now, we have included an ablation study on the impact warm-up period on GAN training in appendix D, which corroborates the hypothesis.\\n\\n=====\\nThank you again for your comments, and we will be happy to discuss further any clarifications/questions.\", \"ablation_study\": \"DIFFERENT MOMENTUM FOR SGD (Lower the Better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-----\\n Momentum=0.5 || Momentum = 0.9 || Momentum = 0.7 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--\\n Original Retrospective || Original Retrospective || Original Retrospective\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\u2014-\\n 10.8 9.4 || 10.05 9.06 || 9.51 8.94\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014---\\n\\n=================================\"}",
"{\"title\": \"Response for Reviewer #1 (Part 3/4)\", \"comment\": \"**CONTINUED FROM PART 2/4****\\n\\nTEXT CLASSIFICATION TEST ERROR (H = Higher the better, L = lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014---\\n Method || IECOMAP || AVEC \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--\\n || F1-Score (H) || Accuracy (H) || MSE (L) || r (Pear Score) (H)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nRetrospective || 64.40 +- 0.4 || 64.97 +- 0.5 || 0.1772 +- 0.0006 || 0.332 +- 0.008\\nOriginal || 62.60 +- 0.9 || 62.70 +- 0.7 || 0.1798 +- 0.0005 || 0.317 +- 0.007\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--\\n\\nSPEECH CLASSIFICATION ERROR RATES (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nNetwork || Validation Set || Testing Set \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n Original Retrospective || Original Retrospective \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nLeNet || 9.77 +- 0.05 9.60 +- 0.03 || 10.26 +- 0.05 9.86 +- 0.04 \\nVGG-11 || 5.15 +- 0.08 4.37 +- 0.04 || 5.03 +- 0.06 4.16 +- 0.05 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nAs an extension, we also did experiments on the task of node classification using Graph Neural Networks, where the performance is ideally reported by averaging over several runs. Here, we report performances by averaging over 30 runs each, where each run was trained for 100 epochs. We carried out experiments on CORA and CITESEER datasets using two widely used/state-of-the-art networks: ARMA (Bianchi et al., CoRR 2019), and GCN (Kipf & Welling, ICLR 2017). Using retrospection loss improves both accuracy and std. deviation in almost all cases. 
\\nThese experiments with details have been added to Appendix A of the revised paper.\\n\\nGRAPH NODE CLASSIFICATION TEST ACCURACY (Higher the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nDataset || GCN || ARMA\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014 \\n || Original Retrospective || Original Retrospective\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nCORA || 80.85 +- 0.53 81.23 +- 0.27 || 78.53 +- 1.5 79.45 +- 1.15 \\nCITESEER || 70.65 +- 0.93 71.25 +- 0.75 || 63.63 +- 1.3 64.22 +- 1.2\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\n=============================================\", \"q\": \"\\u201c... the effect of warm-up period ...in ablation study\\u2026\\u201d\", \"r\": \". As in the ablation studies, we trained LeNet on the F-MNIST dataset (60k images) for 70k iterations with batch_size = 32 (we use momentum=0.9). Hence, 1 epoch lasts for around 2k iterations. The error rates with different warm-ups are mentioned in the table below. We observed that on simpler datasets (like FMNIST), since networks start training at a reasonable accuracy, retrospection is effective even when we introduce it with a very low warm-up period (Iw = 0 steps).\", \"ablation_study\": \"DIFFERENT WARM-UP PERIOD (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n Original || Retrospective \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n Iw = Infinity || Iw= 0 || Iw=10k || Iw = 15k || Iw = 20k \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n 10.05 || 9.06 || 9.3 || 9.33 || 9.06 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\nFurther, we observed that for tasks on more complex datasets, it is best to introduce the retrospection loss after training the network for some epochs when the network has started to converge to some extent, empirically around 50-75% of the training epochs. While introducing retrospection early also improves over the baseline, later introduction of the retrospection loss further improves performance. For instance, we trained ResNet-56 on the task of image classification using CIFAR-10 dataset for 200 epochs. 
Here, when the network is trained without retrospection (the original config as in the ResNet paper), we got an error rate of 6.86 (6.97 is reported in ResNet paper). However, on using retrospection, performance improved to 6.78 when the warm-up period (Iw) of 50 epochs was used and it further improved to 6.52 with a warm-up period of 150 epochs.\\n\\nIMAGE CLASSIFICATION ERROR RATE FOR RESNET-56 (CIFAR-10) (Lower the Better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n Original || Retrospective \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n Iw = Infinity || Iw = 0 Iw= 50 Iw = 100 Iw= 150 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n 6.86 (6.97) || 6.81 6.78 6.61 6.52 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\n=============================================\\n\\n***RESPONSE CONTINUED IN PART 4/4***\"}",
"{\"title\": \"Response for Reviewer #1 (Part 2/4)\", \"comment\": \"**CONTINUED FROM PART 1/4****\", \"q\": \"\\u201c...results \\u2026 include the mean and std...\\u201d\", \"r\": \"We did run multiple trials during our studies, and our results in the paper were consistent across these trials. We, however, ran the experiments again, and are reporting our results below for Image Classification, Speech Recognition and Text Classification tasks averaged over 10 runs. (Note that for few-shot learning, we already included this information in the original submission). We also note that all the results in the submitted paper are in the same range as the mean +- std in the results below, although these were separately performed - showing the consistency.\\n\\nIMAGE CLASSIFICATION TEST ERROR (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nDataset || Network || original || Retrospective\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nSVHN || VGG-11 || 5.51 +- 0.08 || 4.75 +- 0.09 \\nSVHN || ResNet-18 || 4.38 +- 0.09 || 4.01 +- 0.07\\nF-MNIST || LeNet ||10.74 +- 0.19 || 9.37 +- 0.14\\nF-MNIST || ResNet-20 || 7.63 +- 0.04 || 6.85 +- 0.06 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n***RESPONSE CONTINUED IN PART 3/4***\"}",
"{\"title\": \"Response for Reviewer #1 (Part 1/4)\", \"comment\": \"We thank the reviewer for the detailed review and helpful comments. The comments were insightful in recommending further experiments, which we have described below and also added to the appendix of the revised draft. We have also updated the paper by incorporating the recommended corrections.\\n\\nBefore we address the specific questions, we summarize our key contributions below for clarity:\\n1. We propose a new \\u201cretrospective loss\\u201d that is based on looking back at the trajectory of gradient descent and providing an earlier parameter state as guidance for further learning.\\n2. The key benefits of the proposed loss are that - it is simple and easy to implement (with any existing loss). Its simplicity allows us to easily generalize its use across tasks and application domains.\\n3. Our exhaustive experiments on a wide range of tasks including image classification (+ few-shot learning), GANs, speech recognition, text classification, and graph classification (included in the reply here) beat state-of-the-art methods on benchmark datasets with the addition of this loss term. \\n4. To the best of our knowledge, this is the first such effort; our empirical studies showed a consistent improvement in performance across the tasks in our multiple trials, demonstrating the potential of this method to have a strong impact on practical use in real-world applications across domains.\\n\\nWe have also added these at the end of Section 1 in our updated paper.\\n\\nBelow is our response to the individual comments (Q = question; R = our response):\", \"q\": \"\\u201c...F in Section 3 after Equation 2 is not properly defined\\u2026\\u201d\", \"r\": \"Thanks for pointing this out. F is the retrospective update frequency, we have added this to the revised submission.\\n\\n=============================================\\n***RESPONSE CONTINUED IN PART 2/4***\"}",
"{\"title\": \"Response for Reviewer #2 (Part 2/2)\", \"comment\": \"***CONTINUED FROM PART 1/2***\\n\\nGRAPH NODE CLASSIFICATION TEST ACCURACY (Higher the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nDataset || GCN || ARMA\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014 \\n || Original Retrospective || Original Retrospective\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nCORA || 80.85 +- 0.53 81.23 +- 0.27 || 78.53 +- 1.5 79.45 +- 1.15 \\nCITESEER || 70.65 +- 0.93 71.25 +- 0.75 || 63.63 +- 1.3 64.22 +- 1.2\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\", \"q\": \"I'm curious whether the use of the L1 norm is critical or not in the retrospective loss.\", \"r\": \"Our framework is independent of the norm, and we in fact experimented with other norms in our experiments. We have included the results with L1 and L2 norms below. L1 norm provided the best results overall, and hence was presented in the paper. Considering the choice of L1 norm is an implementation detail, we have moved the sentence regarding L1-norm in methodology to the experiments section.\\n\\nIMAGE CLASSIFICATION TEST ERROR ON F-MNIST (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nNetwork || Original || L1-norm || L2-norm\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nLeNet || 10.8 || 9.4 || 9.7\\nResNet-20 || 7.6 || 6.8 || 7.3\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\nIMAGE CLASSIFICATION TEST ERROR ON SVHN (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nNetwork || Original || L1-norm || L2-norm\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nVGG-11 || 5.54 || 4.70 || 5.15 \\nResNet-18 || 4.42 || 4.06 || 4.27 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\n\\nWe, in fact, even tried a KL-divergence based formulation of the retrospection loss. Consider an input (x_i, y_i) and network $G_{\\\\theta}$ parameterized by $\\\\theta$. Here $G_{\\\\theta}(x_i)$ are the activations of the softmax layer and y_i is the ground-truth class embedding. 
For the loss, we define: output_curr = $G_{\\\\theta^T}(x_i)$ ; output_prev = $G_{\\\\theta^T_p}(x_i)$ ; target = y_i. For KL_div, we used the following formulation of the retrospective loss at a training step T: Loss(KL) = -1*KLDiv(output_curr, output_prev) + CrossEntropy(output_curr, target). In the above experiment on SVHN, we obtained 5.45 and 4.31 as error rates for VGG-11 and ResNet-18 respectively. \\nWe have added these results to the Appendix E.\\n\\nWhile all our variants, L1-norm, L2-norm and KL_div, improved upon baselines, L1-norm resulted in better performance across tasks, except in unconditional GANs, where L2-norm is used to apply the retrospective loss on the adversarial predictions of the generator (Sec 4.2). One hypothesis is that when the L1-norm is used, the gradient is simply a dimension-wise sign (+ve vs -ve), which provides a clearer direction to gradient updates, especially when training to a fixed target embedding in predictive tasks.\\n\\nThank you again for your comments, and we will be happy to discuss further for any clarifications/questions.\"}",
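The KL-divergence variant quoted in this rebuttal is concrete enough to transcribe. A sketch, under the assumptions that "outputs" means softmax probabilities and that the KL term's argument order matches the notation above:

```python
import torch.nn.functional as F

def retrospective_kl_loss(logits_curr, logits_prev, target):
    # Loss(KL) = -1 * KLDiv(output_curr, output_prev)
    #            + CrossEntropy(output_curr, target), per the rebuttal above.
    # The reading of the KL argument order is our assumption.
    log_p_curr = F.log_softmax(logits_curr, dim=-1)
    p_prev = F.softmax(logits_prev, dim=-1).detach()
    kl = F.kl_div(log_p_curr, p_prev, reduction='batchmean')
    return -kl + F.cross_entropy(logits_curr, target)  # negative KL repels the snapshot
```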
"{\"title\": \"Response for Reviewer #2 (Part 1/2)\", \"comment\": \"We thank the reviewer for the review and helpful comments. We value every feedback and have updated the draft to address these concerns too. We hope that the editorial suggestions (which have now been addressed) will not be held against the technical merit of the paper.\", \"some_specific_changes_to_the_paper_are\": \"a. Updates\\n 1. The introduction is updated to highlight our contributions more explicitly.\\n 2. Algorithm 1 and Figure 6 replaced with high-res variants.\\nb. Additions (in appendix)\\n 1. Additional experiments on graph-structured data (with mean and std)\\n 2. Mean and std deviation of current experiments\\n 3. Ablation Study: Momentum for SGD\\n 4. Ablation Study: Warm-up period\\n 5. Ablation Study: Choice of Norm\\n\\nBefore we address the specific questions, we summarize our key contributions below for clarity:\\n1. We propose a new \\u201cretrospective loss\\u201d that is based on looking back at the trajectory of gradient descent and providing an earlier parameter state as guidance for further learning.\\n2. The key benefits of the proposed loss are that - it is simple and easy to implement (with any existing loss). Its simplicity allows us to easily generalize its use across tasks and application domains.\\n3. Our exhaustive experiments on a wide range of tasks including image classification (+ few-shot learning), GANs, speech recognition, text classification, and graph classification (included here) beat state-of-the-art methods on benchmark datasets with the addition of this loss term. \\n4. To the best of our knowledge, this is the first such effort; our empirical studies showed a consistent improvement in performance across the tasks in our multiple trials, demonstrating the potential of this method to have a strong impact on practical use in real-world applications across domains.\\n\\nWe have also added these at the end of Section 1 in our updated paper.\\n\\nBelow is our response to the individual comments (Q = question; R = our response):\", \"q\": \"...standard deviations for results\\u2026?\", \"r\": \"We did run multiple trials during our studies, and our results in the paper were consistent across these trials. We, however, ran the experiments again, and are reporting our results below for Image Classification, Speech Recognition and Text Classification tasks averaged over 10 runs. (Note that for few-shot learning, we already included this information in the original submission). 
We also note that all the results in the submitted paper are in the same range as the mean +- std in the results below, although these were separately performed - showing the consistency.\\n\\nIMAGE CLASSIFICATION TEST ERROR (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nDataset || Network || original || Retrospective\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nSVHN || VGG-11 || 5.51 +- 0.08 || 4.75 +- 0.09 \\nSVHN || ResNet-18 || 4.38 +- 0.09 || 4.01 +- 0.07\\nF-MNIST || LeNet ||10.74 +- 0.19 || 9.37 +- 0.14\\nF-MNIST || ResNet-20 || 7.63 +- 0.04 || 6.85 +- 0.06 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\nTEXT CLASSIFICATION TEST ERROR (H = Higher the better, L = lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014---\\n Method || IECOMAP || AVEC \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n || F1-Score (H) || Accuracy (H) || MSE (L) || r (Pear Score) (H)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nRetrospective || 64.40 +- 0.4 || 64.97 +- 0.5 || 0.1772 +- 0.0006 || 0.332 +- 0.008\\nOriginal || 62.60 +- 0.9 || 62.70 +- 0.7 || 0.1798 +- 0.0005 || 0.317 +- 0.007\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014--\\n\\nSPEECH CLASSIFICATION ERROR RATES (Lower the better)\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nNetwork || Validation Set || Testing Set \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n Original Retrospective || Original Retrospective \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\nLeNet || 9.77 +- 0.05 9.60 +- 0.03 || 10.26 +- 0.05 9.86 +- 0.04 \\nVGG-11 || 5.15 +- 0.08 4.37 +- 0.04 
|| 5.03 +- 0.06 4.16 +- 0.05 \\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\nAs an extension, we also did experiments on the task of node classification using Graph Neural Networks, where the performance is ideally reported by averaging over several runs. Here, we report performances by averaging over 30 runs each, where each run was trained for 100 epochs. We carried out experiments on CORA and CITESEER datasets using two widely used/state-of-the-art networks: ARMA (Bianchi et al., CoRR 2019), and GCN (Kipf & Welling, ICLR 2017). Using retrospection loss improves both accuracy and std. deviation in almost all cases. \\nThese experiments with details have been added to Appendix A of the revised paper.\\n\\nDue to space constraints, the response is continued in PART 2/2\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents the retrospective loss to optimize neural network training. The idea behind the retrospective loss is to add a penalization term between the current model to the model from a few iterations before. Extensive experimental results on a wide range of datasets are provided to show the effectiveness of the retrospective loss.\\n\\nThe retrospective loss is additionally controlled by two hyperparameters, the strength parameter K and the update frequency T_p. This loss, measured in L-1 norm, is added to the training objective. The geometric intuition of the added loss term is that this pushes the model away from the model at iteration T_p. The paper argues that this shrinks the parameter space of the loss function.\\n\\nOne of the concern regards the writing of the paper.\\n- Algorithm 1 and Figure 6 look very blurry, which I think are both below the publication standard.\\n- The introduction could be written to be more helpful, such as providing more context on why the obtained experimental results are important (e.g. getting state-of-the-art results on the datasets studied in the experiments)\\n- The Related Work contrasts with previous work which is not clear because the precise contribution has not been stated at the point.\", \"more_detailed_questions\": [\"What are the standard deviations for the experimental results (as you reported in Table 4 but not in other experiments)?\", \"I'm curious whether the use of L-1 norm is critical or not in the retrospective loss.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a new loss function which adds to the training objective another term that pulls the current parameters of a neural network further away from the parameters at a previous time step.\\nIntuitively, this aims to push the current parameters further to the local optimum.\\nOn a variety of benchmarks, optimizing the proposed loss function achieves better results than just optimizing the training loss.\\n\\nThe paper is well written and easy to follow. However, I am not entirely convinced about the intuition of the proposed method and I think further investigation are necessary.\\nWhile the method is simple and general, it also seems to be rather heuristic and requires carefully chosen hyperparameters.\\nHaving said that, the empirical evidence shows that the proposed loss function consistently improves performance.\", \"the_following_details_should_be_addressed_further\": [\"I am a bit confused by the definition of the loss function. In Equation 1 it seems that the term on the left represents the training objective. If that is correct than Equation 2 second case contains the training objective twice?\", \"F in Section 3 after Equation 2 is not properly defined\", \"Could it happen that the proposed loss function leads to divergence, for example if the parameter from a previous time step theta^Tp is close to the optimum theta_star?\", \"What is the motivation to use the L1 norm? How does this choice affect convergence compared to let's L2 norm?\", \"Section 4.1 typo in first paragraph: K instead of \\\\kappa\", \"Section 4.1 the results would be more convincing if all networks were trained multiple times with a different random initialization and Table 1 would include the mean and std.\", \"Why is no warm-up period used for the GAN experiments?\", \"Section 4.3: why is \\\\kappa increase by 1% for the speech recognition experiments where as by 2% for all other experiments?\", \"I suggest to increase the line width of all figures since they are somewhat hard to identify on a print version.\", \"Why is the momentum set to 0.5 for SGD in the ablation study? Most frameworks use a default value of 0.9.\", \"I would like to see the affect of the warm-up period to the performance in the ablation study.\", \"How does the choice of learning rate schedule, such as for example cosine annealing, affect the loss function?\", \"post rebuttal\", \"------------------\", \"I thank the authors for clarifying my questions and providing additional experiments. I think that especially the additional ablation studies and reporting the mean and std of multiple trials make the contribution of the paper more convincing. Hence, I increased my score.\"]}"
]
} |
rkgt0REKwS | Curriculum Loss: Robust Learning and Generalization against Label Corruption | [
"Yueming Lyu",
"Ivor W. Tsang"
] | Deep neural networks (DNNs) have great expressive power, which can even memorize samples with wrong labels. It is vitally important to revisit robustness and generalization in DNNs against label corruption. To this end, this paper studies the 0-1 loss, which has a monotonic relationship with the empirical adversarial (reweighted) risk (Hu et al. 2018). Although the 0-1 loss is robust to outliers, it is also difficult to optimize. To efficiently optimize the 0-1 loss while keeping its robust properties, we propose a very simple and efficient loss, i.e., the curriculum loss (CL). Our CL is a tighter upper bound of the 0-1 loss compared with conventional summation-based surrogate losses. Moreover, CL can adaptively select samples for stagewise training. As a result, our loss can be seen as a novel curriculum-style sample selection strategy, which builds a connection between curriculum learning and robust learning. Experimental results on noisy MNIST, CIFAR10 and CIFAR100 datasets validate the robustness of the proposed loss. | [
"Curriculum Learning",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rkgt0REKwS | https://openreview.net/forum?id=rkgt0REKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jTC0ChX6YM",
"Skg2CS-Q3S",
"rJezxPnojH",
"BJlutLhioB",
"BJe6rL3jsS",
"HJghRHVAKr",
"SkxvXnoatH",
"S1efYqIiKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723272,
1574274516014,
1573795561694,
1573795455812,
1573795396746,
1571861971721,
1571826719332,
1571674746149
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1438/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1438/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1438/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1438/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper studies learning with noisy labels by integrating the idea of curriculum learning.\\n\\nAll reviewers and AC are happy with novelty, clear write-up and experimental results.\\n\\nI recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"On the motivation\", \"comment\": \"Thank you for the clarification! I would like to clarify that in my previous review about motivation, I did not misunderstand the motivation but I want to emphasize that the message of Theorem 1 basically says that\\n\\nthe minimizer of the clean distribution is identical to that of the worst-case distribution around that clean distribution (which refers to the f-divergence ball).\\n\\nAnd I acknowledged that the authors wanted to interpret as \\n\\nthe minimizer of the corrupted distribution is identical to that of the worst-case \\\"clean\\\" distribution around that corrupted distribution.\\n\\nMy concern is that if what the author suggested is true, although it is free from noise assumption, it sounds like the minimizer of the corrupted risk w.r.t. 0-1 loss is identical to the worst-case clean distribution with arbitrary delta (which determines the size of the f-divergence ball). This sounds highly pessimistic if I did not misunderstand this part. Could you please clarify this part?\", \"regarding_the_key_finding_of_authors_in_the_rebuttal\": \"\\\"Our key finding is that minimizing the classification risk under a corrupted distribution can minimize the classification risk of the worst-case clean distribution (in the f-divergence ball).\\\"\\n\\nI think this is the claim from the experimental results. I am not sure if the success of the proposed method is really because of that finding the author suggested, but the loss itself has some mechanics that make it robust to noise, e.g., adaptive sample selection.\\n\\nAlthough I am still not fully convinced with the motivation of the paper and still doubting whether NPCL works well because of the given motivation, I still believe that the proposed NPCL should improve the performance and also give a new perspective to deal with noisy labels. I like the idea of the paper. Thus, I increased my score.\"}",
"{\"title\": \"Clarify the motivation.\", \"comment\": \"Thanks for your comments. We acknowledge your concern. Here, we want to clarify some misunderstandings about our motivation. \\n\\nOur motivation is not \\\"If we have clean labeled data, minimizing the \\\"adversarial\\\" ERM risk using \\\"clean\\\" labeled data yields the same minimizer as minimizing the \\\"standard\\\" ERM risk using \\\"clean\\\" labeled data.\\\" The reviewer\\u2019s concern is training with the clean distribution. However, our motivation is to train on a corrupted training distribution p(x,y), not a clean distribution, and we want our model to perform well on the worst-case clean distribution (in the f-divergence ball). \\n\\nCompared with Hu et al., our motivation is not training with \\\"adversarial\\\" ERM to improve training with classification risk under the clean distribution. Our key finding is that minimizing the classification risk under a corrupted distribution can minimize the classification risk of the worst-case clean distribution(in the f-divergence ball). There is no contradiction to Hu et al. Note that the worst-case classification risk is an upper bound of the classification risk of the true clean distribution, minimizing the worst-case classification risk can usually decrease the true classification risk.\\n\\nSpecifically, suppose we have an observable training distribution p(x,y). The observable distribution p(x,y) may be corrupted from an underlying clean distribution q(x,y). We train a model based on the training distribution p(x,y), but we want our model to perform well on the clean distribution q(x,y). Since we do not know the clean distribution q(x,y), we want our model to perform well even for the worst-case clean distribution q, with the assumption that the f-divergence between the corrupted distribution p and the clean distribution q is bounded by delta. Because of Theorem 1, we do not need to optimize the worst-case risk directly; we can optimize the classification risk (on the corrupted training distribution) instead.\\n\\nWe provide a more detailed explanation in Appendix A and update the paper to make the motivation clear.\\n\\n\\nThanks for the suggestion of another line of analysis of the robustness of 0-1 loss.\\n\\nActually, the \\\"symmetric property\\\" of robust loss is derived under additional assumptions of noise type. For example, In Ghosh AAAI2017, they make assumptions of uniform noise, simple non-uniform noise, and class conditional noise. \\n\\nIf we assume the noise is uniform, minimizing the (empirical) risk of 0-1 loss using noisy data leads to a same minimizer as minimizing the (empirical) risk for the clean distribution. (Ghosh AAAI2017)\\n\\nIf we do not assume the noise type, minimizing the risk of 0-1 loss using noisy data leads to a same minimizer as minimizing the risk of the worst-case clean distribution (in the f-divergence ball), which is the case of this work.\\n\\nThe robustness of 0-1 is interesting; we will further analyze this robustness in further work.\"}",
"{\"title\": \"More experiments for evaluation\", \"comment\": \"Thanks for your comments. \\n\\nWe provide the suggested experiments in Appendix B.\\nWe use the open-sourced code of Lee et al., ICML 2019 on GitHub. We only change the loss by our CL and NPCL. The experimental results show the effectiveness of our CL/NPCL. Note that CL/NPCL is a single loss for network training; one can combine them with the ensemble method (Lee et al. 2019) to boost the performance.\"}",
"{\"title\": \"Explanation and more experiments.\", \"comment\": \"Thanks for your comments.\\n\\n1. Explanation of Eq.(9)\\n \\nThe difficulty of optimizing the 0-1 loss is that the 0-1 loss has zero gradients in almost everywhere (except at the breaking point). This issue prevents us from using first-order methods to optimize the 0-1 loss. Eq.(9) provides a surrogate of the 0-1 loss with non-zero subgradient for optimization, while preserving robust properties of the 0-1 loss. Note that our goal is to construct a tight upper bound of the 0-1 loss while preserving informative (sub)gradients. Eq.(9) balances the 0-1 loss and conventional surrogate by selecting (the trust) samples (index) for training progressively.\\n\\n2. We provide experiments on Tiny-ImageNet in Appendix B. We use ResNet18 as the testbed. Both symmetric 20% and symmetric 50% noise cases are evaluated. The experimental results show that NPCL can improve performance on the more difficult dataset Tiny-ImageNet. \\n\\n3. In Table 2 and 3, the NPCL, Co-teaching, and Co-teaching+ use the true noise rate. We further evaluate CL on CIFAR10, CIFAR100, and Tiny-ImageNet. The experimental results are provided in Appendix B. It shows that CL can obtain comparable results with NPCL. Moreover, CL obtains better performance on CIFAR10 and CIFAR100 with uniform noise, and competitive performance on cases with semantic noise, compared to the ensemble methods (RoG) of Lee et al., ICML 2019. Note that CL does not have parameters. It is much more convenient to use.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper tackles the problem of learning with noisy labels and proposes a novel cost function that integrates the idea of curriculum learning with the robustness of 0/1 loss. The resulting cost function is theoretically justified and experimentally validated.\", \"pros\": \"(1) The proposed cost function is novel in its design, especially the aspect of curriculum learning with a computationally efficient implementation, as in Algorithm 1.\\n(2) The new cost function could be treated as a simple-to-implement add-on to make learning more robust to noisy labels. \\n(3) The introduction is well formulated and organized with focused motivation.\", \"cons\": \"(1) Equ. 9 requires more explanation of the intuition of using a combination of conventional surrogate loss and 0/1 loss, and furthermore the role of the index indicator in balancing the above two parts.\\n(2) Curriculum learning focuses on easy example followed by hard ones. Yet noisy examples are mixed with difficult ones in your formulation of sample selection mechanism (index indicator). The pruned examples are therefore more likely to have a high proportion of hard examples, which is undesirable. To illustrates the effectiveness of the proposed algorithm against such scenarios , one would like to see experiments on more difficult datasets such as Tiny-ImageNet. \\n(3) It is not clear if the quantitative results in Table 2 and 3 are produced with the pre-defined \\\\epsilon beforehand or with grid search as done in Table 4. Knowing \\\\epsilon would render comparison unfair for baselines.\", \"other_remarks\": \"(1) E(u) threshold parameter changes from \\u201cn\\u201d in equation 11 to \\u201cC\\u201d in equation 13 (probably considering equation 9). In Equ 13, C is given as \\\"n+0/1 loss\\\", its transition to the other alternative forms in Equ 18 is not fully explained. \\n(2) The purpose of proposition 1 is unclear and may be at least shortened.\\n(3) Should have used some uncertainty metric instead.\\n(4) Incremental improvement over SOTA. SOTA was actually better in some cases.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"After rebuttal,\\n\\nI think the authors made a valid argument to address my concerns on evaluation. So, I'd like to increase my score as weak accept! \\n\\n=====\", \"summary\": \"To handle noisy labels, this paper proposed a curriculum loss that corresponds to the upper bound of 0-1 loss. Using synthetic noisy labels on MNIST and CIFAR, the authors verified that the proposed method can significantly improve the robustness against noisy labels.\", \"detailed_comments\": \"Overall, the paper is well-written and the ideas are novel. However, experiments are a little weak due to weak baselines and experimental setups (see suggestions for more details). I will consider raising my score according to the rebuttal.\", \"suggestions\": \"1. Could the authors consider more baselines like D2L [Ma' 18] and Reweight [Ren' 18] \\n\\n2. Similar to [Lee' 19], could the authors evaluate the performance of the proposed methods on more realistic noisy labels such as semantic noisy labels and open-set noisy labels? \\n\\n[Lee' 19] robust inference via generative classifiers for handling noisy labels, In ICML, 2019.\\n\\n[Ma' 18] Dimensionality-Driven Learning with Noisy Labels, In ICML, 2018.\\n\\n[Ren' 18] Learning to Reweight Examples for Robust Deep Learning, In ICML, 2018.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This paper proposes a new loss function: curriculum loss, which is a meta-loss function that we can still specify an existing surrogate loss to use this loss function. This meta-loss function guarantees to be tighter than using a traditional pointwise-sum loss function as used in the empirical risk minimization framework. Intuitively, the proposed CL loss embed the sample selection process in the objective function. The authors suggest that it is robust against label corruption because it is tighter and provided promising experimental results.\\n\\n========================================================\", \"clarity\": \"The paper is well-written and easy to follow. \\n\\n========================================================\", \"significance\": \"The proposed paradigm is interesting and I am convinced that it can be useful under label noise. The experiments look promising. Future work about the analysis of NPCL/CL is also interesting to consider (e.g., which surrogate loss to use, rigorous theoretical guarantee, etc.). I think the proposed method is impactful. \\n\\n========================================================\", \"comments\": \"The proposed method is interesting and can give a tighter bound for any surrogate loss by using this method (CL). Moreover, the author suggested a simple extension of CL for label corruption (NPCL) and the performance is impressive. I would like to vote accept for this paper but the following point highly concerns me and I am not sure about the correctness (see the concern below). It is about the motivation not the proposed method.\", \"concerns_about_motivation\": \"I disagree with the original motivation of this paper. The authors used the result of Hu et al. 2018 to motivate the use of CL. To my knowledge, the main point raised by Hu et al. is as follows:\\n\\nIn classification, minimizing the adversarial risk yields the same solution as using the standard empirical risk. This suggests that minimizing the adversarial risk may not enhance the robustness of a classifier. Yet, it may still be useful when we consider regression (other settings but not classification). As a result, in classification, we should try other methods to make a robust classifier. Then, Hu et al. considered to utilize some kind of structural assumption to make a robust classifier. From their title: \\\"Does Distributionally Robust Supervised Learning Give Robust Classifiers?\\\", I think they suggested \\\"No\\\" as an answer and the discussion about 0-1 loss in the curriculum loss paper will be contradicted to them from the motivation perspective. \\n\\nFurthermore, regarding the adversarial risk, it is not focusing on the label noise but rather the noise of the feature-label pair, i.e, perturb (x,y) adversarially within an f-divergence ball. However, in my opinion, if we randomly flip the label of the data regardless of x (as the authors and existing work did in experiments when considering label corruption: symmetric, partial, etc.), we cannot be confident to state that the f-divergence between test distribution and corrupted training distribution is small under label noise. 
\\n\\nAnother point to motivate the use of 0-1 loss that the author mentioned is when we have outliers (Masnadi-Shirazi & Vasconcelos, 2009). This makes sense and this is a famous argument to discourage the use of too steep loss functions, e.g., exponential loss. I think this motivation is fine but it is not directly related to label corruption because we do not add out-of-distribution data but rather the label noise. Furthermore, the authors did not inject any outliers in the experiments in my understanding. I think this is totally no problem because we are focusing on label noise here, but this makes the motivation about outliers less important when we are talking about label noise.\\n\\nI think the most important direction both in theory and experiments about the robustness to label noise of the 0-1 loss is that 0-1 loss satisfies a \\\"symmetric property\\\", i.e., \\\\ell(z)+\\\\ell(-z) = Constant for a margin-based loss function in binary classification. Under symmetric label noise, \\\"the minimizer of the expected symmetric noise risk (a risk that the label is corrupted by coin flipping noise) is identical to the minimizer of the clean risk (normal risk)\\\". Although it is not empirically but the expected version, it gives a good insight about the advantage of directly minimizing 0-1 loss under label noise. This is first pointed out by \\n\\n[1] Manwani et al.: Noise tolerance under risk minimization, IEEE Transactions on Cybernetics 43 (2013) \\n[2] Ghosh et al.: Making risk minimization tolerant to label noise Neurocomputing 160 (2015): 93-107. \\n\\n([1] focused on the 0-1 loss while [2] extended it to symmetric losses.)\\n\\nThen, it was extended to the multiclass loss by the following paper:\\n\\n[3] Ghosh et al.: Robust loss functions under label noise for deep neural networks. AAAI2017. \\n\\nThe advantage of symmetric losses is also discussed in this paper that the authors already cited in the symmetric noise experiment section. \\n\\n[4] van Rooyen et al.: Learning with symmetric label noise: The importance of being unhinged, NeurIPS2015\", \"the_advantage_of_the_symmetric_condition_and_0_1_loss_is_also_discussed_in_a_more_general_noise_scenario_and_more_evaluation_metrics\": \"[5] van Rooyen et. al: An average classification algorithm. arXiv:1506.01520, 2015\\n[6] Charoenphakdee et al.: On symmetric losses for learning from corrupted labels, ICML2019\", \"and_the_following_paper_that_was_also_cited_in_the_submitted_work_and_compared\": \"[7] Zhang and Sabuncu: Generalized cross-entropy loss for training deep neural networks with noisy labels, NeurIPS2018\\n\\nis also inspired by the robustness of the symmetric losses (including 0-1 loss). They argued that although the symmetric loss (MAE) for multiclass proposed by Ghosh AAAI2017 is robust, it is hard to train for challenging datasets, and they try to relevate this condition while making it easier to train. This paper outperformed [7] and I think it is clearer and better to build a story along this line.\\n\\nIn short, here is the key message why I think the current motivation does not feel right. 
When we have noisy labeled data, instead of motivating the use of 0-1 loss by suggesting that \\n\\n\\\"If we have clean labeled data, minimizing the \\\"adversarial\\\" ERM risk using \\\"clean\\\" labeled data yields the same minimizer as minimizing the \\\"standard\\\" ERM risk using \\\"clean\\\" labeled data\\\",\\n\\nI believe the story to motivate the robustness of 0-1 loss under label noise should be \\n\\n\\\"If we have noisy labeled data, minimizing the \\\"standard\\\" or \\\"modified\\\" risk using \\\"noisy\\\" labeled data yields the same minimizer as minimizing the \\\"standard\\\" ERM risk using \\\"clean\\\" label data\\\"\\n\\nThe latter statement corresponds to the literature I suggested. \\n\\nApart from the motivation raised by the authors, as we can see from this curriculum loss paper, NPCL nicely outperformed generalized cross entropy loss in [7], which is impressive.\\n\\n========================================================\\nDecision.\\nI strongly feel that motivating the noise robustness of 0-1 loss by discussing about the adversarial risk (Hu et al.) is misleading. Nevertheless, I feel the proposed method itself makes a lot of sense and I am impressed by the results. If the author can convince me that using the current motivation of the paper is suitable, I am happy to improve the score. Another way is to agree to modify the motivation part. Given the experiments were done, it is not to difficult to change the motivation of the paper. At this point, I have decided to give a weak reject. \\n\\n========================================================\", \"questions\": \"1. Is it straightforward to combine NPCL with Co-teaching/Mentornet/Co-teaching+?\\n2. Does the traditional theory about classification-calibration (Zhang, 2004, Bartlett+, 2006) can guarantee the Bayes-optimal solution if we use NPCL?\\n\\n========================================================\", \"minor_comments\": \"1. Page 9: Both our NPCL and Generalized Cross Entropy(GCE) << space missing between Entropy and (\", \"update\": \"I have read the rebuttal. Although I am still not fully convinced with the motivation of the paper and still doubting whether NPCL works well because of the given motivation, I still believe that the proposed NPCL should give a new perspective to deal with noisy labels. I like the idea of the paper. Thus, I change the score to Weak Accept.\"}"
]
} |
H1gdAC4KDB | Adversarially Robust Generalization Just Requires More Unlabeled Data | [
"Runtian Zhai",
"Tianle Cai",
"Di He",
"Chen Dan",
"Kun He",
"John E. Hopcroft",
"Liwei Wang"
] | Neural network robustness has recently been highlighted by the existence of adversarial examples. Many previous works show that the learned networks do not perform well on perturbed test data, and that significantly more labeled data is required to achieve adversarially robust generalization. In this paper, we theoretically and empirically show that with just more unlabeled data, we can learn a model with better adversarially robust generalization. The key insight of our results is based on a risk decomposition theorem, in which the expected robust risk is separated into two parts: the stability part, which measures the prediction stability in the presence of perturbations, and the accuracy part, which evaluates the standard classification accuracy. As the stability part does not depend on any label information, we can optimize this part using unlabeled data. We further prove that for a specific Gaussian mixture problem, adversarially robust generalization can be almost as easy as standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided. Inspired by the theoretical findings, we further show that a practical adversarial training algorithm that leverages unlabeled data can improve adversarially robust generalization on MNIST and CIFAR-10. | [
"Adversarial Robustness",
"Semi-supervised Learning"
] | Reject | https://openreview.net/pdf?id=H1gdAC4KDB | https://openreview.net/forum?id=H1gdAC4KDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2VAO0AhxO",
"B1ljO9uCYH",
"H1xM7hwjFH",
"rkguDO1iFS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723243,
1571879538599,
1571679257592,
1571645536115
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1436/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1436/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1436/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This work starts with a decomposition of the adversarial risk into two terms: the first is the usual risk, while the second is a stability term, that captures the possible effect of an adversarial perturbation. The insight of this work is that this second term can be dealt with using unlabelled data, which is often in plentiful supply. Unfortunately, the same ideas was developed concurrently and independently by several groups of authors.\\n\\nThe reviewer all agreed that this particular version was not ready for publication. In two cases, the authors compared the work unfavorably with concurrent independent work. I will note that the main bound somewhat ignores the issue of overfitting that the second term deals with via the Rademacher bound. Unless one assumes one has unlimited unlabeled data, could one not get an arbitrarily biased view of robustness from the sample. Seems like a gap to fill.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper considers the problem of adversarial robustness. The paper shows that (Theorem 1) robust generalization error can be bounded in terms of the standard generalization error and a stability term, that does not depend on the labels. The paper also shows that for a simple classification problem involving learning the separator for a symmetric 2 gaussian mixture data, we can solve this problem robustly without additional labeled examples. The paper suggests that we can use unlabeled data to improve the robust generalization. Towards this the paper regularizer on the unlabeled data, that promotes stability in the model prediction. The paper evaluates this on Mnist and Cifar showing the better performance of the proposed regularization over PGD adversarial training.\\n\\nThe Theorem 1 in this paper is a triangle inequality on the loss ,and the observation about splitting the robust generalization into standard generalization error and stability, is not particularly new. The earlier work Zhang et al., 2019b show a similar result in their paper. They even propose and experiment with a similar regularizer (see eqs 3 and 5 in Zhang et al., 2019b). The exact implementation while can be different between these two, the paper does not currently compare with this and there is no evidence to prefer this regularizer over the existing one.\\n\\nThe Gaussian setting considered in this paper is quite simple and the techniques developed there are particular to the symmetric 2 Gaussian mixture problem. Given the other parallel works studying the same setting, it is good to also include a comparison of the exact results (such as sample complexity) for this setup.\\n\\nOverall I find the contributions of this paper to be not sufficient and cannot recommend acceptance at this stage.\", \"minor\": \"The last line above theorem 4 and second line after eq 9 are written poorly.\\n\\n Zhang et al., 2019b https://arxiv.org/abs/1901.08573\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Paper summary: This paper seeks to improve robust generalization performance with the help of unlabeled data. The authors first consider the toy model presented in Schmidt et al. and show how the labeled sample complexity in the robust setting can be lowered to match the standard setting if sufficient unlabeled data is available. They then propose a practical algorithm to improve robust test accuracy and evaluate it on the MNIST and CIFAR datasets.\", \"comments\": \"The problem the paper seeks to address (bridging the generalization gap in the adversarial setting) is an important one, and the paper is clear and well-written.\\n\\nAs the authors discuss, there have been three (other) independent papers that tackle the same problem (which were accepted at NeurIPS). Even though I tried to evaluate this paper keeping in mind that it was written concurrently, I think it falls short in a couple of important aspects which make it hard to recommend acceptance. In particular:\\n\\n1. Unlike the other papers, the algorithm discussed in the theoretical section (which is able to reduce sample complexity by leveraging unlabeled data) is entirely different from the one used in practice on MNIST/CIFAR. It would make for a more compelling case if the algorithm used experimentally could also work on the toy model or vice versa (which is the case for Carmon et al. and Uesato et al.).\\n\\n2 . The empirical evaluation is not detailed enough and there is some inconsistency in the baselines. \\n\\n- In particular, the authors report that VAT attains poor robustness (<2.5% for both 5k and 10k labeled). However, Uesato et al. also benchmark against VAT in a very similar setting of 4k labelled CIFAR data points (with the same eps=8/255) and get ~32% accuracy (cf. Figure 1 from their paper). I could not find any difference between the two baselines except for the fact that Uesato et al. implement VAT with a KL divergence penalty (as suggested in the VAT paper) instead of cross entropy (as is used in this paper). This is somewhat concerning because based on the baselines reported in Uesato et al., the improvement of the approach proposed in this paper (which comes from doing 7 steps instead of 1 to find the adversarial example) are marginal. (Additionally, in this setting the approach of Uesato et al. gets robustness of about ~45% which is significantly better than ~33% reported in this paper.) \\n\\n- Moreover, this paper evaluates on much fewer benchmarks (only MNIST/CIFAR with few labeled examples) compared to the other papers (which also study for example SVHN and the impact of using unlabeled ImageNet on CIFAR robustness). \\n\\nThe overlap with concurrent work is unfortunate, and it makes it hard to evaluate this paper. However, my two main concerns are (1) inconsistency in baselines which cast some doubt on the improvements offered by the proposed approach, and (2) the fact that both the algorithm and the experimental evaluation seem to be a subset of that in concurrent work (especially Uesato et al.). Thus, I have to recommend rejection.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors study the sample complexity of adversarially robust learning with access to unlabeled samples. Theoretically, they consider the setting of Schmidt et al. 2018 (separating two class-conditional Gaussians) and present an algorithm which can learn a robust classifier with only a few labeled samples and a large number of unlabeled samples (circumventing the sample complexity separation of the original work). Then, the authors propose a modification of the VAT algorithm (Miyato et al. 2018) to train deep networks utilizing unlabeled samples. They find that, empirically, their algorithm achieves better performance compared to standard adversarial training on the labeled samples.\\n\\nOverall, the paper addresses an interesting problem, studying both a simple theoretical setting and a real-world empirical setting in which the authors achieve an improvement over prior work.\\n\\nUnfortunately, the paper is concurrent with two other works (which the authors acknowledge: Carmon et al. 2019, Uesato et al. 2019) which have already been accepted for publication at NeurIPS 2019. All of these works are very similar in spirit, proposing an algorithm for the theoretical setting of Schmidt et al. 2018 and an empirical algorithm for real-world settings. Moreover, these works improve over the current manuscript in a number of ways:\\n-- The algorithm proposed for the theoretical setting is more general and is essentially the same as the algorithm used for real-world dataset.\\n-- The experimental evaluation is significantly more extensive, performing additional ablations, and exploring the methods in more detail. The work of Uesato et al. 2019 is virtually a superset of the results in this manuscript.\\n-- Both works collect additional images from an unlabeled and uncurated dataset (Tiny Images) and show that they can utilize them using their proposed approach to improve the state-of-the-art robust accuracy on CIFAR10.\\n\\nTherefore, given that: a) the results in the current manuscript are essentially a subset of the results appearing in Carmon et al. 2019 and Uesato et al. 2019 and b) these works will have already been published at NeurIPS 2019, 4 months before ICLR 2020, I am afraid I need to recommend rejection.\"}"
]
} |
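The abstract above decomposes the expected robust risk into an accuracy part (which needs labels) and a stability part (which does not), and the reviews note that the practical algorithm differs from VAT mainly by using a multi-step perturbation search (7 steps instead of 1). Below is a hedged sketch of such a label-free stability regularizer on unlabeled data; the KL objective, step sizes, and the omission of image-range clamping are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def pgd_stability_loss(model, x_unlabeled, eps=8/255, alpha=2/255, steps=7):
    # Label-free stability term: penalize disagreement between the model's
    # prediction on a clean unlabeled input and on a PGD-perturbed copy.
    with torch.no_grad():
        target = F.softmax(model(x_unlabeled), dim=1)  # pseudo-target distribution
    delta = torch.zeros_like(x_unlabeled).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.kl_div(F.log_softmax(model(x_unlabeled + delta), dim=1),
                        target, reduction="batchmean")
        grad, = torch.autograd.grad(loss, delta)
        # Signed-gradient ascent step, projected back into the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Final stability loss; delta is treated as a constant for the model update.
    return F.kl_div(F.log_softmax(model(x_unlabeled + delta.detach()), dim=1),
                    target, reduction="batchmean")
```

In training, this term would be added to the usual supervised loss on the labeled batch, which is what makes the unlabeled data useful for the stability part of the decomposition.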
HJlvCR4KDS | Why Does the VQA Model Answer No?: Improving Reasoning through Visual and Linguistic Inference | [
"Seungjun Jung",
"Junyoung Byun",
"Kyujin Shim",
"Changick Kim"
] | In order to make Visual Question Answering (VQA) explainable, previous studies not only visualize the attended region of a VQA model but also generate textual explanations for its answers. However, when the model’s answer is ‘no,’ existing methods have difficulty in revealing detailed arguments that lead to that answer. In addition, previous methods are insufficient to provide logical bases when the question requires common sense to answer. In this paper, we propose a novel textual explanation method to overcome the aforementioned limitations. First, we extract keywords that are essential to infer an answer from a question. Second, for a pre-trained explanation generator, we utilize a novel Variable-Constrained Beam Search (VCBS) algorithm to generate phrases that best describe the relationship between keywords in images. Then, we complete an explanation by feeding the phrase to the generator. Furthermore, if the answer to the question is “yes” or “no,” we apply Natural Language Inference (NLI) to identify whether the contents of the question can be inferred from the explanation using common sense. Our user study, conducted on Amazon Mechanical Turk (MTurk), shows that our proposed method generates more reliable explanations than previous methods. Moreover, by modifying the VQA model’s answer through the output of the NLI model, we show that VQA performance increases by 1.1% over the original model. | [
"Image Captioning",
"Visual Question Answering",
"Explainable A.I",
"Beam Search",
"Constrained Beam Search"
] | Reject | https://openreview.net/pdf?id=HJlvCR4KDS | https://openreview.net/forum?id=HJlvCR4KDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pJMaaKC6sx",
"SyxclpUcsS",
"H1lLnhUqiH",
"rJgxc285jS",
"rJlJyunRKB",
"H1x8vBV6tS",
"S1gXpfnhFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723214,
1573706994305,
1573706925806,
1573706887687,
1571895255201,
1571796317939,
1571762875410
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1435/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1435/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1435/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1435/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper is good, with relatively positive support from the reviewers. However, there were also several legitimate issues raised, for example regarding the semantics of a negative answer and associated explanations. Though this paper cannot be accepted at this time, we hope the feedback here can help improve a future version, as all reviewers agree this is a valuable line of work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We are obliged to get your suggestion and advice, and it is an honor to have your assistance.\\n\\n1) I have a question about the NLI system. Since the NLI is a three-way classifier where the answers would be \\\"Entail\\\", \\\"Contradictory\\\", and \\\"Neutral\\\". What would the system do when the relationship is \\\"Neutral\\\"? For now, I think that it just give an answer of \\\"no\\\" but I am not sure whether it is correct. For example, in Fig. 3, it shows an explanation (premise) of \\\"something (not the vegetable) on a plate\\\" and the hypo is \\\"there are vegetables on the plate\\\". Since the hypo is not necessary to contradict the premise, the relationship should be neutral. It does not directly provide evidence of the answer.\\n\\n\\nFollowing the reviewer's thoughtful concern, we conducted additional experiments to confirm this. In this experiment, we set the VQA model\\u2019s answer as \\\"yes\\\" if the class probability of the NLI model for \\\"Entail\\\" is greater than \\\"Contradictory,\\\" even though the NLI result is \\\"Neutral.\\\"\\n\\nAs the reviewer pointed out, we found that the answer is often \\\"yes,\\\" even when there is little correlation between premise and hypothesis. For this reason, we use the answer as \\\"yes\\\" only if the NLI result is \\\"Entail,\\u201d as we first suggested in the paper.\\n\\np.s-\\nIn Fig 3, the premise is \\\"a slice of pizza is sitting on a plate.\\\" \\\"something (not the vegetable) on a plate\\\" is a sentence added to visualize which keyword is absent in the image.\\n\\n2) Since more data are involved in training the explanation system, the proposed methods are not fairly compared with the BAN method. It would be better to mention this detail in the paper. \\n\\nThank you for your advice. We mentioned the reviewer's comment in Table 3.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We are hugely thankful for your invaluable and thoughtful comments and support.\\n\\n1) However, the proposed approach also has noticeable weaknesses. It relies on external models or tools for natural language inference, and such inference does not take into account the visual context of the image. Also, the explanations generated from the proposed model only justify the answer but are not introspective, and they do not reflect the decision process of the target VQA model.\\n\\nWe agree with the issue the reviewer has pointed out. It's best to do inference using both visual and linguistic information at the same time, but unfortunately, we have not solved this problem yet. Therefore, we proposed a method of inference by using Visual and Linguistic information step by step. We also agree that explanations created in the proposed way are not introspective. For this reason, we have revised the content of this paper as follows: (the proposed method not only produces more introspective explanations, ... -> the proposed method not only produces more appropriate explanations, ...)\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We greatly appreciate your kind and detail review. We have tried our best to meet the comments. We will respond to your concerns below in the same order they were made.\\n\\n1) It actually generates relevant captions with respect to the question and answers instead of real explanations. It's true that correlated caption can sometimes serve as the explanation when they cover the same concept coincidently but, we can not use these captions to explain the reasoning process of VQA:\\n\\nThank you for your kind and detailed comment.\\nWe stated on page 2 of the manuscript that the proposed method produces introspective explanations. However, we agree that, as the reviewer pointed out, explanations created in the proposed way are not introspective. For this reason, we have revised the content of this paper as follows: (the proposed method not only produces more introspective explanations, ... -> the proposed method not only produces more appropriate explanations, ...).\\n\\n2) The technique novelty of the proposed paper is also limited; the major novelty is the VCBS, which seems very similar to CBS. The only difference is VCBS adds relaxed parameters, which seems no technique novelty.\\n\\nVCBS differs from CBS in that it uses loosening parameters and the number of satisfied constraints ( | y and C | ). Therefore VCBS helps to identify the keywords that cause the VQA's answer to be \\\"no.\\\" It is the first attempt to determine the cause of \\\"no,\\\" which is extremely hard for existing methods. \\n\\nWe also leveraged the NLI model and ConceptNet to create explanations that require common sense. For these aforementioned reasons, we think our proposed scheme is novel enough. \\n\\n3) Most annotations in Algorithm 1 is also not explained, making the readers hard to follow the actual content.\\n\\nThank you for your thoughtful comment. We have revised the paper to make it easier for readers to understand the meaning of each notation in Algorithm 1.\\n\\n4) The proposed model, although tied the VQA words with the explanation words, suffers the same problems as the PJ-X model, which didn't consider the VQA attention at all.\\n\\nAlthough the BUTD is used as an explanation generator, the method presented in this paper (i.e., VCBS and NLI) can also utilize the PJ-X or the Multi-VQAE as an explanation generator. To illustrate this, we conducted an additional experiment using Multi-VQAE as an explanation generator and included sample results in the Appendix.\\n As a result of replacing the explanation generator of our method to Multi-VQAE, we show that we can improve the explanations generated by the model (Multi-VQAE) while considering VQA attentions.\\n\\n\\n5) The experiment is also weak, considering the results is conduct on 100 samples, there might be significant variance.\\n\\nThank you for your thoughtful consideration. We experimented with 100 additional samples and added the results to the Appendix. We have confirmed that we get results similar to the previous experiments.\\n\\n6) It's interesting the compared approach is learned based on a different dataset, which makes the results harder to compare. The NLI model results are interesting, but for a more fair comparison, I would expect the proposed method compare with a model trained with VQA and coco caption dataset, such as VQA-E.\\n\\nThank you again for your sincere advice. We understand what the reviewer is worried about. 
We post additional results using the Multi-VQAE as an explanation generator in the Appendix. As a result of applying our method to Multi-VQAE, we can see that more appropriate explanations are generated\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Interesting results and analysis but lack of novelty and details.\\n\\nThis paper proposed a novel text explanation method which extracts keywords that are essential to infer an answer from question. The authors proposed VCBS based on CBS and use Natural language inference to identify the entailments when the answer is yes or no. Experiment results show better mean opinion score 100 random samples results compared with the previous method and better VQA binary classifications when flipping based on the NLI results. \\n\\nDifferent from [Park et. al. 2018], who collected the explanations, this paper directly use the coco captions dataset as the source for the explanations. One of my major concern about this paper is it actually generate relevant captions with respect to the question and answer instead of real explanations. It's true that correlated caption can sometimes serve as the explanation when they cover the same concept coincidently but, we can not use these captions to explain the reasoning process of VQA. \\n\\nThe technique novelty of the proposed paper is also limited, the major novelty is the VCBS, which seems very similar to CBS. The only difference is VCBS adds relaxed parameters, which seems no technique novelty. Most annotations in Algorithms 1 is also not explained, making the readers hard to follow the actual content. The proposed model, although tied the vqa words with the explanation words, it suffers the same problems as PJ-X model, which didn't consider the VQA attention at all. \\n\\nThe experiment is also weak, considering the results is conduct on 100 samples, there might be significant variance. It's interesting the compared approach is learned based on a different dataset, which makes the results harder to compare. The NLI model results are interesting but for a more fair comparison, I would expect the proposed method compare with a model trained with VQA and coco caption dataset, such as VQA-E.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"I thank the authors for their response. I would keep my score unchanged (i.e., 6 Weak Accept). \\n\\n-----------------------------------------------\", \"strengths\": [\"The paper enhances the beam search approach to generate explanations for answers to visual questions. The explanations are further used for verifying the yes/no answers.\", \"The paper constructs a VCBS algorithm with novelties in allowing soft constrains of the generated beams. Since I have not worked on the constrained beam search before, thus it is hard for me to measure the novelty of this method.\", \"The results of VQA 2.0 is pretty good. The accuracy of the Yes/No questions almost achieves the SotA systems.\"], \"weakness\": [\"I have a question about the NLI system. Since the NLI is a three-way classifier where the answers would be \\\"Entail\\\", \\\"Contradictory\\\", and \\\"Neutral\\\". What would the system do when the relationship is \\\"Neutral\\\"? For now, I think that it just give an answer of \\\"no\\\" but I am not sure whether it is correct. For example, in Fig. 3, it shows an explanation (premise) of \\\"something (not the vegetable) on a plate\\\" and the hypo is \\\"there are vegetables on the plate\\\". Since the hypo is not necessary to contradict the premise, the relationship should be neutral. It does not directly provide evidence of the answer.\"], \"comments\": [\"Since more data are involved in training the explanation system, the proposed methods are not fairly compared with the BAN method. It would be better to mention this detail in the paper.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a novel method for explaining VQA systems. Different from most previous work, the proposed approach generates textual explanations based on three steps. First, it extracts keywords from the question. Then an explanation sentence is decoded (based on an RNN image captioner) through the proposed Variable-Constrained Beam Search (VCBS) algorithm to satisfy the keyword constraints. Finally, 3) checking through linguistic inference whether the explanation sentence can be used as a premise to infer the question and answer.\\n\\nI would recommend for acceptance. The paper proposes an alternative approach to VQA explanations, together with a few supporting algorithms such as VCBS. It is potentially helpful to future work on textual explanations and explainable AI in general.\\n\\nAt a high level, it is ambiguous to decide what is a reasonable explanation for many \\u201cno\\u201d answers. For example, one usually cannot provide stronger justification than \\u201cthere is indeed no one\\u201d or \\u201cI don\\u2019t see anyone\\u201d to the question \\u201cIs there anyone in the room\\u201d with an answer \\u201cno.\\u201d The paper frames this explanation generation task as a linguistic inference task and checks entailment between the explanation and the question-answer pair. While it is debatable whether this is optimal, the proposed approach provides valuable insights on what constitutes a good explanation.\\n\\nHowever, the proposed approach also has noticeable weaknesses. \\n\\nIt relies on external models or tools for natural language inference, and such inference does not take into account the visual context of the image. Also, the explanations generated from the proposed model only justify the answer but are not introspective, and they do not reflect the decision process of the target VQA model.\"}"
]
} |
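The rebuttal in this record settles on a strict decision rule: after an extra experiment, the yes/no VQA answer is set to "yes" only when the NLI model's top class is entailment, because accepting "neutral" whenever p(entail) > p(contradict) produced spurious "yes" answers. A minimal sketch of that rule follows; `nli_probs` stands in for the output of any off-the-shelf three-way NLI classifier run on the generated explanation (premise) and the question turned into a statement (hypothesis).

```python
NLI_LABELS = ("entailment", "neutral", "contradiction")

def verify_yes_no_answer(nli_probs: dict) -> str:
    # Answer "yes" only on strict entailment; "neutral" is treated as "no",
    # since p(entail) > p(contradict) alone proved unreliable in the rebuttal's
    # follow-up experiment.
    top = max(NLI_LABELS, key=lambda lbl: nli_probs[lbl])
    return "yes" if top == "entailment" else "no"

# e.g. verify_yes_no_answer({"entailment": 0.2, "neutral": 0.6, "contradiction": 0.2}) -> "no"
```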
SyeD0RVtvS | DeepSFM: Structure From Motion Via Deep Bundle Adjustment | [
"Xingkui Wei",
"Yinda Zhang",
"Zhuwen Li",
"Yanwei Fu",
"Xiangyang Xue"
] | Structure from motion (SfM) is an essential computer vision problem which has not been well handled by deep learning. One of the promising trends is to apply explicit structural constraints, e.g., a 3D cost volume, in the network. In this work, we design a physics-driven architecture, namely DeepSFM, inspired by traditional Bundle Adjustment (BA), which consists of two cost-volume-based architectures for depth and pose estimation respectively, iteratively running to improve both. In each cost volume, we encode not only photometric consistency across multiple input images, but also geometric consistency to ensure that depths from multiple views agree with each other. The explicit constraints on both depth (structure) and pose (motion), when combined with the learning components, bring the merits of both traditional BA and emerging deep learning technology. Extensive experiments on various datasets show that our model achieves state-of-the-art performance on both depth and pose estimation, with superior robustness against fewer inputs and noise in initialization. | [
"Computer Vision",
"Bundle Ajustment",
"Structure from Motion"
] | Reject | https://openreview.net/pdf?id=SyeD0RVtvS | https://openreview.net/forum?id=SyeD0RVtvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"PJ-uOnwj5",
"HkgvEdjDoB",
"SyewKHsvsr",
"BJgsPzsPiB",
"HJg96xsvsS",
"S1e7O15zjr",
"Syx181Q7cB",
"rJgQN2PcFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723183,
1573529646832,
1573528958914,
1573528163038,
1573527745776,
1573195627285,
1572183878527,
1571613739050
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1434/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1434/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1434/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1434/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Main content: Physical driven architecture of DeepSFM to infer the structures from motion\", \"discussion\": \"\", \"reviewer_1\": \"well-motivated model with good solid experimental results. not clear about the LM optimization in BA-Net is memory inefficient\", \"reviewer_2\": \"main issue is the experiments could be improved.\", \"reviewer_3\": \"well written but again experimental section is lacking\", \"recommendation\": \"Good paper and results, but all 3 reviewers agree experiments could be improved. Rejection is recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"For Reviewer #3\", \"comment\": \"We thank the reviewer for the comments and appreciation. We have revised the paper according to the suggestions and would like to clarify as follows:\", \"q1\": \"In Sec. 3 the Authors write \\\"We then sample the solution space for depth and pose respectively around their initialization\\\". However in Sec 3.2 they write \\\"we uniformly sample a set of L virtual planes {dl} Ll=1 in the inverse-depth space\\\". In what way are the planes \\\"around their initialization\\\"? If the initial depth map spans over multiple orders of magnitude, will the planes be uniformly sampled between the minimum and maximum disparity of the initial map? If yes, it seems that the initial depth map is not really needed, just its minimum and maximum value is needed, but then how come the method can be applied iteratively with respect to depth?\\n\\nA1. Thank you for pointing this out. \\\"We then sample the solution space for depth and pose respectively around their initialization\\\" is a writing mistake and we have corrected it in our new version. Only the solution space for pose is sampled around initialization. We uniformly sample planes in the inverse-depth(disparity) space between a fixed minimum and maximum range. The initial depth is used for maintaining geometric consistency. \\n\\nThe depth, under such a situation, could still be improved through iterations. Since the pose is improved over the iteration, the depth cost-volume would be updated accordingly, and better depth can be inferred from the more accurate cost-volume.\", \"q2\": \"The Authors mention that depth maps are warped onto the virtual planes using differentiable bilinear interpolation. Is there a mechanism to protect from interpolating across discontinuities? If no, were bleeding edge artifacts observed?\\n\\nA2. We thank the reviewer for pointing out the potential problem of our warping method on the depth maps. Since depth maps often have discontinuities, we agree with Review #3 that differentiable bilinear interpolation may do damage to the geometry consistency and smooth the edges. We also updated our experiment results with nearest neighbor instead of bilinear interpolation for depth warping, and revised the corresponding results (Tab. 1-3) and figures in the paper. Notably, our results can get slightly improved by the updated nearest neighbour method inspired by the question asked by Reviewer#3.\\n\\nTo verify this, we added an experiment in Appendix C, which runs nearest neighbor sampling instead of bilinear interpolation. With nearest neighbor warping method, the performance of our model on DeMoN MVS dataset gains a slight boost with retraining. Here are the comparisons:\\nMVS dataset\\t\\t\\t L1-inv sc-inv L1-rel Rot Trans\\nOurs (bilinear) 0.023 0.134 0.079 2.867 9.910\\nNearest neighbor(retained)\\t 0.021\\t 0.129 0.076 2.824 9.881\\n\\nThis shows that nearest neighbor sampling is indeed more geometrically meaningful for depth. We updated the method to use nearest sampling and update the result accordingly. We also discussed the strengths and weaknesses briefly of each interpolation method in Appendix C.\", \"q3\": \"In the introduction, the Authors point that prior methods have trouble dealing with textureless, reflective or transparent approaches, but it's not clear form the paper where it addresses these cases, and if yes, what is the mechanism for that.\\n\\nA3. 
Empirically, learning based method may outperforms traditional feature matching methods on these situations since it relies on image priors. In addition, our method has geometry consistency between multiview depth maps as the input, which encourages local smoothness and consistency to some extent. In some textureless, reflective or transparent cases that feature matching methods does not work, our method gains extra information from the initial depth maps of other views by the depth consistency part of the cost volume. In Appendix D, Figure 8, some qualitative comparisons with COLMAP[1] are provided as an argument. We have updated our paper and show more visual examples in Appendix D, Figure 9.\", \"q4\": \"the implementation details section is a bit too high-level and does not contain enough details to reimplement the Author's technique.\\n\\nA4. Thanks for your suggestions, we will release code upon the acceptance. Furthermore, we have put more details about model architecture as in Appendix A Figure 4 and Figure 6.\\n\\n[1] Johannes L Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4104\\u20134113, 2016.\"}",
"{\"title\": \"For Reviewer#2\", \"comment\": \"Thank you very much for your comments, which is very helpful for clarifying our contribution and improving the presentation of the paper. Please see the inline responses.\", \"q1\": \"The paper is easy to follow but the authors are expected to clarify the rationality in integration of the loss function. How the parameter of \\\\lambda_r, \\\\lambda_t, and \\\\lambda_r influence the performance. It would be better if the authors could present some analysis.\", \"a1\": \"There are in general two rules to follow when choosing the lambda for optimization: 1) the loss term provides gradient in similar numerical range, such that no single loss should dominate the training since accuracies in depth and camera pose are both important to reach a good consensus. 2) we found in practice the camera rotation has higher impact on the accuracy of the depth probably but not the opposite. This is presumably because that pose cost volume accumulate depth differences of all the pixels such that is more tolerant to the depth error. To encourage better performance of pose, we set a relatively large \\\\lambda_r. Note that all the losses are necessary to achieve good performance. On the validation data, some preliminary experiments by grid search values of each lambda, show that the performance of our model is not very sensitive to various values of lambda. Therefore we provide a combination of lambda that produces the model for our experiment, and presumably there could be other settings that may potentially further improve the performance. We have added some insight to Section 3.5 about the loss function.\", \"q2\": \"The authors are expected to make more comprehensive analysis with the state-of-the-art methods, and also analyze why some alternative methods outperforms the proposed methods in table I and table II.\", \"a2\": \"We thank the reviewer for the suggestion. We add more analysis with the state of the art in Section 4.2, especially about the case that other methods outperforms our method.\", \"q3\": \"The experiments in section 4.3 are also expected to be improved. It is difficult to draw a conclusion that the method is better than other ones based on such limited experiments.\", \"a3\": \"Thanks for this point. However, the main experiments and the conclusions are in Sec. 4.2; and thus Section 4.2 included much more insights and discussion of our model Vs. the other baselines in the revised version. In contrast, Section 4.3 lists the ablation study, where the purpose of experiments is to verify the necessity and sufficiency of some system design options of our model and demonstrate the behavior under controlled experiments, instead of comparing with other methods. Specifically, we show the performance of our method with different number of iterations, with and without pose cost volume, and different numbers of the input view. At the same time, we found that our method also outperforms other methods in some aspects. In Figure 2, the curves are going down, which means that our method can effectively reduce depth and pose error from DeMon. The solid curves are consistently lower than dashed curves, which means our pose cost volume outperforms Steinbr\\u00fccker et al. (2011) in pose estimation and further benefits depth estimation. In Figure 3, the blue curve is significant lower than orange curve, which means that our method is more robust in the situation with fewer views than COLMAP. 
Even though, the main purpose is not to compare to others but provide some analysis on important model components.\"}",
"{\"title\": \"For Reviewer #1\", \"comment\": \"We thank the reviewer for the comments and appreciation, and would like to answer the reviewer\\u2019s questions as follows:\", \"q1\": \"The authors claim that the LM optimization in BA-Net is memory inefficient and may lead to non-optimal solutions. It\\u2019s not clear to me that the proposed method can guarantee optimality any better. It\\u2019s also unclear if the proposed method is more memory efficient, since the authors only unroll 4 iterations of it.\", \"a1\": \"Thanks for pointing this out and sorry for the confusion! Here we don\\u2019t mean that our method can fix the optimality problem in any way. We wish to provide some of our analysis of the limitation of BA-Net, and hope our method could provide complementary perspectives to rethink the problem and mitigate the non-optimal issue in terms of performance with more ML component. In terms of number of iterations, our method does not have a restriction, since our iteration happens outside the neural network and acts as an incremental improvement. In contrast, BA_Net\\u2019s iteration is part of the LM optimization and it is inside the network. Thus if it unrolls more iteration steps, the memory cost will increase linearly. We have updated the paper for this.\", \"q2\": \"Show the test time behavior of the network when it is run with more iterations than it is trained with (say 10 or 20)\", \"a2\": \"Thanks for the suggestion! We added Table 4 in Appendix C that shows performance of the network with more iterations(from 2 to 20).\", \"q3\": \"It\\u2019s not made entirely clear whether the training back propagates through the update/construction of the pose and depth cost volumes.\", \"a3\": \"Gradients can back-propagate through cost volumes, and cost-volume construction does not affect any trainable parameters. We updated this point in the revised version.\", \"q4\": \"In equation 5, \\u201cx\\u201d should be \\u201ci\\u201d.\", \"a4\": \"Thanks for pointing out that! We have fixed the typo.\"}",
"{\"title\": \"Summarization of changes in our new version\", \"comment\": \"We thank all the reviewers for their insightful and constructive comments. We have revised the paper as suggested by the reviewers, and summarize the major changes as follows:\\n1\\uff0c In Introduction, we rewrote the sentences that discuss the LM optimization in BA-Net.\\n2\\uff0c In Page 3 Section 3 paragraph 2, We fixed a writing mistake. \\u201cWe then sample the solution space for depth and pose respectively around their initialization\\u201d -> \\u201cWe then sample the solution space for depth uniformly in the inverse-depth space between a predefined minimum and maximum range and camera pose around the initialization respectively.\\u201c We thank review #3 for pointing it out.\\n3\\uff0c In section 3.5, We added some insight to explain the rationality in integration of the loss function as required by review #2.\\n4\\uff0cWe fixed the error in equation 5: x -> i. Thanks to review #1 for pointing it out.\\n5\\uff0c In section 4, We updated our experiment results with nearest neighbor instead of bilinear interpolation for depth warping. We also added a sentence in section 3.2 which clarifies that we adopt nearest neighbor sampling for depth warping, instead of bilinear interpolation. Thanks to review # 3 for pointing the weakness of bilinear interpolation out.\\n6\\uff0c Section 4.2 was extended to include brief introduction of the state-of-art methods we compared with. In addition, more analysis required by review #2 about the results in table I and table II were added.\\n7\\uff0c In Appendix A, We added Figure 4 and Figure 6 which shows more details required by review #3 about the implementation. \\n8\\uff0c In Appendix C, We added Table 4 required by review #1 which shows the performance with more number (up to 20) of iterations.\\n9\\uff0c In Appendix C, we added an experiment that compares Bilinear Interpolation with Nearest Neighbor as the answer to review #3.\\n10\\uff0cIn Appendix D. We added figure 9 which shows qualitative comparisons with COLMAP (Schonberger & Frahm, 2016) on challenging materials, as a supplemental answer to review #3.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\n\\nThe authors propose a SfM model which integrates geometric consistency with a learned pose and depth network. An initial estimate of depth and pose are used to construct pose and depth cost volumes, which are then fed into a pose regression and depth refinement network, to produce a new set of cost volumes, and so on. In this manner, the pose and depth estimation are improved iteratively.\", \"strengths\": \"The proposed model is well motivated and shows strong performance and generalization ability on several datasets. There are convincing experiments to show the importance of the P-CV network.\", \"weaknesses\": \"The authors claim that the LM optimization in BA-Net is memory inefficient and may lead to non-optimal solutions. It\\u2019s not clear to me that the proposed method can guarantee optimality any better. It\\u2019s also unclear if the proposed method is more memory efficient, since the authors only unroll 4 iterations of it.\", \"other_comments\": \"It would be very interesting to see the test time behavior of the network when it is run with more iterations than it is trained with (say 10 or 20), especially since the depth error does not seem to have stopped decreasing at only 4 iterations.\\n\\nIt\\u2019s not made entirely clear whether the training backpropagates through the update/construction of the pose and depth cost volumes. \\n\\nIn equation 5, \\u201cx\\u201d should be \\u201ci\\u201d.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose a physical driven architecture of DeepSFM to infer the structures from motion. Extensive experiments on various datasets show that the model achieves the state-of-the-art performance on both depth and pose estimation. In general, the paper is clearly written but I still have several concerns.\\n1.\\tThe paper is easy to follow but the authors are expected to clarify the rationality in integration of the loss function. How the parameter of \\\\lambda_r, \\\\lambda_t, and \\\\lambda_r influence the performance. It would be better if the authors could present some analysis. \\n2.\\tThe experiments are rather insufficient. The authors are expected to make more comprehensive analysis with the state-of-the-art methods, and also analyze why some alternative methods outperforms the proposed methods in table I and table II. \\n3.\\tThe experiments in section 4.3 are also expected to be improved. It is difficult to draw a conclusion that the method is better than other ones based on such limited experiments.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper tackles Structure from Motion, one of the canonical problems in computer vision, and proposes an approach that brings together geometry and physics on one hand and deep networks on the other hand. Camera unprojection and warping (of depth maps and features) are used to build a cost volume onto hypothetical planes perpendicular to the camera axis. Similarly, various camera poses are sampled around an initial guess. A deep network regresses form the cost volume to a camera pose and a depth map. The method can be applied iteratively, using the outputs of the current stage as the initial guess of the next one. Training is supervised, and the the results are evaluated on multiple datasets.\\n\\nI am inclined to recommend accepting the paper for publication, because it addresses a canonical problem, outperforms the state of the art on multiple datasets and brings together geometry / physics and deep learning, which is IMO very a promising and underexplored direction.\\n\\nI found the method section a bit difficult to read though, and even after several readings I cannot get my head around it. Specifically, here are some issues that I hope the Authors could clarify.\\n\\n1. In Sec. 3 the Authors write \\\"We then sample the solution space for depth and pose respectively around their initialization\\\". However in Sec 3.2 they write \\\"we uniformly sample a set of L virtual planes {dl} Ll=1 in the inverse-depth space\\\". In what way are the planes \\\"around their initialization\\\"? If the initial depth map spans over multiple orders of magnitude, will the planes be uniformly sampled between the minimum and maximum disparity of the initial map? If yes, it seems that the initial depth map is not really needed, just its minimum and maximum value is needed, but then how come the method can be applied iteratively with respect to depth?\\n\\n2. The Authors mention that depth maps are warped onto the virtual planes using differentiable bilinear interpolation. Is there a mechanism to protect from interpolating across discontinuities? If no, were bleeding edge artifacts observed?\\n\\n3. In the introduction, the Authors point that prior methods have trouble dealing with textureless, reflective or transparent approaches, but it's not clear form the paper where it addresses these cases, and if yes, what is the mechanism for that.\\n\\nLastly, if the authors are not planning to release the code, the implementation details section is a bit too high-level and does not contain enough details to reimplement the Author's technique. For example, \\\"our network learns a cost volume of size L \\u00d7 W \\u00d7 H using several 3D convolutional layers with kernel size 3 \\u00d7 3 \\u00d7 3\\\" - more details about this network are needed, as well as the others in the paper.\"}"
]
} |
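Two concrete points from the DeepSFM rebuttals above are easy to make precise: depth hypotheses are sampled uniformly in inverse-depth space between a fixed minimum and maximum (not around the initialization), and nearest-neighbor sampling was adopted over bilinear interpolation for depth warping because bilinear averages across depth discontinuities. A small sketch of both; the function names are illustrative, not the paper's API.

```python
import numpy as np

def inverse_depth_hypotheses(d_min: float, d_max: float, L: int) -> np.ndarray:
    # Uniformly sample L depth planes in inverse-depth (disparity) space;
    # d_min/d_max are a fixed global range, not per-image statistics.
    return 1.0 / np.linspace(1.0 / d_max, 1.0 / d_min, L)

def sample_depth(depth: np.ndarray, y: float, x: float, mode: str = "nearest") -> float:
    # Sample a depth map at a non-integer location. Bilinear interpolation
    # averages across depth discontinuities (producing values that belong to
    # no real surface); nearest-neighbor keeps a valid surface depth.
    if mode == "nearest":
        return float(depth[int(round(y)), int(round(x))])
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    wy, wx = y - y0, x - x0
    return float((1 - wy) * (1 - wx) * depth[y0, x0] + (1 - wy) * wx * depth[y0, x0 + 1]
                 + wy * (1 - wx) * depth[y0 + 1, x0] + wy * wx * depth[y0 + 1, x0 + 1])

# Toy discontinuity: foreground at 1 m next to background at 10 m.
depth = np.array([[1.0, 10.0], [1.0, 10.0]])
print(sample_depth(depth, 0.0, 0.5, "bilinear"))  # 5.5 -- a depth on neither surface
print(sample_depth(depth, 0.0, 0.5, "nearest"))   # a real surface depth (1.0 or 10.0)
```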
rylvAA4YDB | IsoNN: Isomorphic Neural Network for Graph Representation Learning and Classification | [
"Lin Meng",
"Jiawei Zhang"
] | Deep learning models have achieved huge success in numerous fields, such as computer vision and natural language processing. However, unlike such fields, it is hard to apply traditional deep learning models to graph data due to the ‘node-orderless’ property. Normally, adjacency matrices will cast an artificial and random node-order on the graphs, which renders the performance of deep models on graph classification tasks extremely erratic, and the representations learned by such models lack clear interpretability. To eliminate the unnecessary node-order constraint, we propose a novel model named Isomorphic Neural Network (ISONN), which learns the graph representation by extracting its isomorphic features via graph matching between the input graph and templates. ISONN has two main components: a graph isomorphic feature extraction component and a classification component. The graph isomorphic feature extraction component utilizes a set of subgraph templates as the kernel variables to learn the possible subgraph patterns existing in the input graph and then computes the isomorphic features. A set of permutation matrices is used in the component to break the node-order brought by the matrix representation. Three fully-connected layers are used as the classification component in ISONN. Extensive experiments are conducted on benchmark datasets; the experimental results demonstrate the effectiveness of ISONN, especially compared with both classic and state-of-the-art graph classification methods. | [
"Deep Learning",
"Graph Neural Network"
] | Reject | https://openreview.net/pdf?id=rylvAA4YDB | https://openreview.net/forum?id=rylvAA4YDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Vrn4yhXIMq",
"HJgVlS82jr",
"r1gPFVLhor",
"H1gqUmL3jS",
"r1eqjbUhor",
"rke32mHAYr",
"HyeRPECaFr",
"HJgiqXjrYB",
"rygjEnzpuS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798723152,
1573836012105,
1573835902839,
1573835602407,
1573835169765,
1571865523973,
1571837030252,
1571300242822,
1570741299036
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1433/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1433/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1433/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1433/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method to learn graph features by means of neural networks for graph classification.\\nThe reviewers find that the paper needs to improve in terms of novelty and experimental comparisons.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the comments and appreciation, and would like to answer the reviewer\\u2019s questions as follows:\\n \\nQ1. It, however, simply employs brute-to-force approach toward the graph isomorphism, lacking novelty:\\n \\nThe novelty of the proposed model lies in the isomorphic kernel methods instead of simply brute-to-force contract transform. Most existing works focus on the graph kernel with node labels and the kernels methods like WL or the kernels proposed in [a] only computes the similarities between pairwise graphs. Yet, in this paper, we are handling the graph without node labels. Moreover, we can not only compute the similarity between pairwise graphs but also learn subgraph templates. Our approach is simple, but it solves the isomorphism directly. Even though our approach requires high computation when $k$ is big, we find two alternative ways to avoid such a situation, keep the computation cost within an acceptable range.\\n \\n \\nQ2. Eq.(8) lacks theoretical justification and is far away from the sub-graph based representation:\\n \\nWe propose the fast version of IsoNN to deal with the high time cost when $k$ is big (i.e., k>4). We show the theoretical justification of Eq. (8) in the Appendix. We also add more descriptions at the end of section 7.1 to show that Eq. (8) can be an approximation of the optimal permutation matrix. In fact, if we find the optimal permutation matrix $\\\\mathbf{P}^* \\\\in \\\\{0, 1\\\\}^{k \\\\times k}$ directly (i.e., by Hungarian method), it will cost lots of time even though the learned features will be precise. However, if we relax the $\\\\mathbf{P}^*$ to $\\\\mathbf{P}^* \\\\in [0,1]^{k \\\\times k}$, the time cost will decrease rapidly with the degenerated features, and the performance is close to that of the original model (slow version). Thus, if you do not care about what kernel template will be learned, then Eq. (8) can be used. Otherwise, you can learn the precise features by applying multiple graph isomorphic feature extraction components if the kernel size is big.\\n \\nQ3. Are any constraints imposed on the kernel K for embedding the graph structure into K? Namely, the kernel K is required to exhibit the nature of the adjacency matrix of sub-graphs. It lacks description and/or discussion about the aspect.\\n \\nThe kernel $\\\\mathbf{K}$ is a learned template, containing the most contributing subgraph structure. The kernel template is used to calculate the matching score between the subgraphs and kernel templates, i.e., to see how similar between the subgraphs and the kernel templates. After the computation, we can also locate where the contributing subgraphs are. Since our model is a general model, we don\\u2019t impose any constraint on $\\\\mathbf{K}$ for now.\\n \\nQ4. The node-order information still exists in the classification layer (Sec. 4.2) since the FC classifier is directly applied to the (flattened) feature map (tensor) Q in which two axes are defined according to the node order in the graph. For accomplishing node-orderless classification, the global pooling such as GAP should be applied to the final feature map before the classifier layer.\\n \\nIn this paper, we claim that the proposed model will eliminate the node-order for subgraphs. Let\\u2019s say an extreme situation, if the subgraph is the whole graph, the node-order existing in the whole graph can be eliminated by the isomorphic layer. 
In addition, after the graph isomorphic feature extraction component, the feature tensor $\\\\mathcal{Q}$ only contains the matching scores between subgraphs and kernel templates, i.e., each element denotes the matching score of a subgraph. Thus, the two axes of $\\\\mathcal{Q}$ don\\u2019t represent the node-order, they only represent one possible subgraph order. When we flatten the tensor $\\\\mathcal{Q}$, the subgraph order is changed as well since the \\u201cflatten\\u201d operation will turn three axes (including the channel dimension) into one. Moreover, if any global pooling layer like GAP is applied, it will either degenerate the representation power of kernel templates or lose the precise features of the subgraphs.\"}",
"{\"title\": \"Response to Reviewer #1 (To be continued ... )\", \"comment\": \"Q5. I cannot fully understand how to stack the sub-graph based feature extraction (Sec.4.1) in a \\\"deep\\\" manner? After extracting the sub-graph representation first, the resulting matrix is just a feature map of c channels, not an adjacency matrix which contains the pair-wise relationships between nodes.\\n \\nThank you for pointing it out, we will also revise this part. Note that each graph isomorphic feature extraction component contains \\u201cgraph isomorphic layer + min pooling layer + softmax layer\\u201d, we clarify the deep model is the deep architecture of multi-layer feature extraction component, which is equivalent to \\u201c(graph isomorphic layer + min pooling layer + softmax layer) + \\u2026 + (graph isomorphic layer + min pooling layer + softmax layer)\\u201d. Let\\u2019s say we have 2 graph isomorphic feature extraction components. After the first graph isomorphic feature extraction component, we get the first feature tensor $\\\\mathcal{Q}_1$ and each element in $\\\\mathcal{Q}_1$ denotes matching score between one subgraph to one kernel template. Thus, we can also regard each element in $\\\\mathcal{Q}_1$ as a kernel template. Since we have $c_1$ channel in the first component, the second component will be used on every channel of $\\\\mathcal{Q}_1$. If the channel number of the second component is $c_2$, then the first dimension of the learned feature tensor $\\\\mathcal{Q}_2$ of the second component is $c_1 * c_2$. Similar to the first component, each element of $Q_2$ can represent the kernel templates in the second component. Because $\\\\mathcal{Q}_2$ is derived from $\\\\mathcal{Q}_1$, it is natural to combine the kernels learned by two components and the process can be illustrated in Figure 2. In addition, we also provide an example in the appendix to facilitate your understanding.\\n\\n\\n \\nQ6. The method is built upon the local kernel (K) over the adjacency matrix (A). Although it is invariant against the node order \\\"locally\\\" within the local kernel, the method cannot capture the sub-graph structures beyond the locally ordered nodes in A; \\\"locally\\\" ordered nodes in A which exhibits certain sub-graph can be easily spread \\\"globally\\\" via applying node permutation to A. Thus, the method is only applicable to the limited case that node orders of input graphs are \\\"roughly\\\" canonicalized. This paper completely lacks discussion nor analysis about such a limitation/assumption of the method regarding locality.\\n \\nIn this paper, the node-order is invariant locally. For your mentioned limitation, we are aware that it exists in the model. However, most subgraph-based models like CNN, WL, GIN all based on the adjacency matrix, which is also randomly ordered. Moreover, there are some works considering reordering $\\\\mathbf{A}$. Our model needs to handle such a variance, but we do believe this is not the major work of this paper. However, we will leave it for future work.\\n \\n \\nQ7. In the experiments, the classifier modules are different across the comparison methods.\\n \\nFor baseline models like AE, CNN, Freq, we do set the same classifier module as the proposed model. For SDBN, GCN, GIN, the main reason we keep the original setting is that they already had fined tuned by their authors and reached good results. We just want to make sure that we are comparing with the baseline models that have the best performance. \\n \\nQ8. 
As to the WL method, the performance of 52.4 on MUTAG in Table 1 is significantly inferior to 80.88 which is reported in [b].\\n \\nFor WL methods, we have different settings from [b]. To employ the WL kernel methods, the node label should be a known condition. However, our model is proposed for the graphs that do not have node labels. Thus, to have a fair comparison with all baselines, we do not use the node label (i.e., the atomic types of information of chemical compounds). So, to make a fair comparison with WL, we assign each node a unique label instead of the node type. Sorry for the unclear part, we will make changes to the revision.\\n \\nWe also appreciate the reviewer leave us these comments, and we updated the relevant parts already. \\n \\nHope our response has resolved your concerns. If there is any proposed question about this paper not resolved in our response, welcome to let us know and we are happy to discuss more with you.\\n \\n[a] Vishwanathan, S.V.N., Schraudolph, N.N., Kondor, R. and Borgwardt, K.M., 2010. Graph kernels. Journal of Machine Learning Research, 11(Apr), pp.1201-1242.\\n[b] Schlichtkrull M., Kipf T.N., Bloem P., van den Berg R., Titov I., Welling M. (2018) Modeling Relational Data with Graph Convolutional Networks. In: Gangemi A. et al. (eds) The Semantic Web. ESWC 2018. Lecture Notes in Computer Science, vol 10843. Springer, Cham\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for the comments and appreciation, and would like to answer the reviewer\\u2019s questions as follows:\\n \\nQ1. In the experiments, the authors report using different hyperparameters for each data set (e.g., k). I did not understand how these parameters were chosen since only training and testing sets were reported. I would like the authors to clarify how the model selection was performed.\\n \\nSince we have conducted experiments on the relatively small datasets, we try different parameters (i.e., k, c) to train several different models. According to the performances on the testing set of different models with different parameters, we select the model that has the best performance on the testing set. If there is a big dataset, we can split the dataset into training, validation and testing set, choosing the parameters according to the model performance on the validation set.\\n \\nQ2. Figure 1 and the details in Section 4 discuss a 1-layer isomorphic NN. The discussion in Section 4.3.2 discusses multi-layer feature extraction. If I understand correctly, this means to apply the graph isomorphic layer + min pooling + softmax several times, but this should be stated explicitly.\\n \\nWe are grateful for your advice. Indeed, multi-layer feature extraction means multiple graph isomorphic feature extraction components. We will take the output of the former feature extraction component as the input of the latter feature extraction component. Note that each feature extraction component contains the \\u201cgraph isomorphic layer + min pooling layer + softmax layer\\u201d, which means the deep architecture of the multi-layer feature extraction is \\u201c(graph isomorphic layer + min pooling layer + softmax layer) + \\u2026 + (graph isomorphic layer + min pooling layer + softmax layer)\\u201d. Since graph isomorphic layer is the main functional layer to learn subgraph features, we simply use the multi-layer for short in section 4.3.2. We will clarify it in section 4.3.2. In addition, we also provide an example in the appendix to facilitate your understanding.\\n\\n\\nHope our response has resolved your concerns. If there is any proposed question about this paper not resolved in our response, welcome to let us know and we are happy to discuss more with you.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the comments and appreciation, and would like to answer the reviewer\\u2019s questions as follows:\\n \\nQ1. There is no training involved here as no parameter is learned.\\n \\nActually, there are learnable variables in the graph isomorphic layer, which are the set of kernel templates $\\\\mathbf{K}_i$s. For example, assume we have one isomorphic feature extraction component, our proposed model will do the following steps:\\n\\u2022\\tIn the graph isomorphic layer, each kernel template $\\\\mathbf{K}_i$ will result in $k!$ feature matrix with $k!$ permutation matrices, where each element in the feature matrices represents the matching score of one subgraph to the corresponding kernel template with one possible permutation matrix.\\n\\u2022\\tPassing all $k!$ feature matrices for all kernel templates into the min-pooling layer in order to find the \\u201coptimal\\u201d features generated by the optimal node permutation for all kernel templates $\\\\mathbf{K}_i$s. \\n\\u2022\\tNext, to rescale the \\u201coptimal\\u201d features, we apply the softmax layer and get the features that related to the kernel variables to further recognize the subgraphs similar to the templates.\\n\\u2022\\tFeeding the final features that related to the kernel variables to the classifier, predicting the labels for graphs in the training set. \\n\\u2022\\tCalculating the cross-entropy loss based on the predicted labels and ground truth.\\n\\u2022\\tUsing the gradient descent algorithm and backpropagation to update the parameters in the classifier and the graph isomorphic layer, i.e., kernel variables $\\\\mathbf{K}_i$s.\\n \\n \\nThe novelty of the proposed model lies in the isomorphic kernel methods. Most existing works focus on the graph kernel with node labels and the kernels methods like WL or the kernels proposed in [a] only computes the similarities between pairwise graphs. Yet, in this paper, we are handling the graph without node labels. Moreover, we can not only compute the similarity between pairwise graphs but also learn subgraph templates. Our approach is simple, but it solves the isomorphism directly. Even though our approach requires high computation when $k$ is big ($k$>4), we find two alternative ways to avoid such a situation, keep the computation cost within an acceptable range.\\n\\n\\nQ2. Accuracies reported for MUTAG and PTC in Xu et al with GIN are much higher than the numbers here.\\n \\nIn Xu et al with GIN paper, they predict the graph label by utilizing the node label information for MUTAG and PTC according to their source code [a]. However, our model does not need additional node labels. To make a fair comparison, we hide the node label information. We also indicated this in section 5.1.2.\\n \\n\\nHope our response has resolved your concerns. If there is any proposed question about this paper not resolved in our response, welcome to let us know and we are happy to discuss more with you.\\n \\n[a] GIN source code: https://github.com/weihua916/powerful-gnns\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new neural network architecture for dealing with graphs dealing with the lack of order of the nodes. The first step called the graph isomorphic layer compute features invariant to the order of nodes by extracting sub-graphs and cosidering all possible permutation of these subgraphs. There is no training involved here as no parameter is learned. Indeed the only learning part is in the so-called classification component which is a (standard) fully connected layer. In my opinion, any classification algorithm could be used on the features extracted from the graphs.\\nExperiments are then given for the graph classification. I do not understand results of Table 1 as the accuracies reported for MUTAG and PTC in Xu et al with GIN are much higher than the numbers here.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method to learn graph features by means of neural networks for graph classification.\\nIn the proposed method, a graph is described by bag of sub-graphs and the sub-graph dictionary is learned through isomorphic matching.\\nThe authors present two approaches toward the isomorphic matching; one is a brute-to-force approach to check all the node permutations and the other is based on spectral decomposition toward efficient computation.\\nIn the experiments on the graph classification tasks using several benchmark datasets, the learned features by the proposed method exhibit favorable performance in comparison with the other graph-based methods.\\n\\nThis paper is leaning toward rejection because (1) the proposed method lacks novelty, (2) it contains technically imprecise parts and (3) the effectiveness is not fully validated in the experiments.\\nThe detailed comments are as follows.\\n\\n* The presented method belongs to the standard feature representation framework that describes graphs by bag of sub-graph templates (dictionary) [a], and this paper's contribution can be found in the way to learn sub-graph dictionary as in learning convolution kernels of CNNs; in contrast to CNN, the graph representation poses a challenging issue of \\\"isomorphism\\\". It, however, simply employs brute-to-force approach toward the graph isomorphism, lacking novelty. On the other hand, the alternative approach relaxes graph matching into Eq.(8) through spectral decomposition. But, it seriously degrades the characteristics of the permutation matrix P and thus the resulting score z does not exhibit a graph matching measure anymore. So, Eq.(8) lacks theoretical justification and is far away from the sub-graph based representation; I cannot understand what kind of features are actually extracted by Eq.(8).\\n\\n* Though the authors insist that the method retains the explicit graph structural information, are any constraints imposed on the kernel K for embedding the graph structure into K? Namely, the kernel K is required to exhibit the nature of adjacency matrix of sub-graph. It lacks description and/or discussion about the aspect.\\n\\n* The node-order information still exists in the classification layer (Sec.4.2) since the FC classifier is directly applied to the (flattened) feature map (tensor) Q in which two axes are defined according to the node order in the graph. This contradicts the authors' claim that the method is invariant to node ordering. For accomplishing node-orderless classification, the global pooling such as GAP should be applied to the final feature map before the classifier layer. In addition, I cannot fully understand how to stack the sub-graph based feature extraction (Sec.4.1) in a \\\"deep\\\" manner? After extracting the sub-graph representation first, the resulting matrix is just a feature map of c channels, not an adjacency matrix which contains the pair-wise relationships between nodes. It is unclear how to construct the deeper model by repeatedly applying the sub-graph template matching.\\n\\n* The method is built upon the local kernel (K) over the adjacency matrix (A). 
Although it is invariant against the node order \\\"locally\\\" within the local kernel, the method cannot capture the sub-graph structures beyond the locally ordered nodes in A; \\\"locally\\\" ordered nodes in A which exhibit a certain sub-graph can be easily spread \\\"globally\\\" via applying node permutation to A. Thus, the method is only applicable to the limited case that node orders of input graphs are \\\"roughly\\\" canonicalized. This paper completely lacks discussion or analysis of such a limitation/assumption of the method regarding locality.\\n\\n* In the experiments, the classifier modules are different across the comparison methods. The proposed method, which is a feature extraction method for graphs, should be fairly compared with the other types of graph feature extraction methods in a consistent pipeline on the basis of an identical classifier module. And, as to the WL method, the performance of 52.4 on MUTAG in Table 1 is significantly inferior to 80.88, which is reported in [b].\\n\\n[a] Wale, N., Watson, I.A. and Karypis, G., Comparison of descriptor spaces for chemical compound retrieval and classification, Knowl Inf Syst (2008) 14:3, pp.347-375\\n[b] Schlichtkrull M., Kipf T.N., Bloem P., van den Berg R., Titov I., Welling M. (2018) Modeling Relational Data with Graph Convolutional Networks. In: Gangemi A. et al. (eds) The Semantic Web. ESWC 2018. Lecture Notes in Computer Science, vol 10843. Springer, Cham\", \"minor_comments\": [\"Improper citation format. Use \\\\citep and \\\\citet properly according to the context.\", \"This work is related to kernel methods such as graph kernels and string kernels. It would be better to mention those related kernel functions to clarify the contributions.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a neural network architecture to classify graph structure. A graph is specified using its adjacency matrix, and the authors prose to extract features by identifying temples, implemented as small kernels on sub matrices of the adjacency matrix. The main problem is how to handle isomorphism: there is no node order in a graph. The authors propose to test against all permutations of the kernel, and choose the permutation with minimal activation. Thus, the network can learn isomorphic features of the graph. This idea is used for binary graph classification on a number of tasks.\\n\\nGraph classification is an important problem, and I found the proposed solution to be quite elegant. The paper is mostly well written (it could use some proofreading, but the main ideas are explained well). Overall, I liked the idea and tend towards acceptance.\\n\\nIn the experiments, the authors report using different hyper parameters for each data set (e.g., k). I did not understand how these parameters were chosen, since only training and testing sets were reported. I would like the authors to clarify how model selection was performed.\\n\\nAlso, Figure 1 and the details in Section 4 discuss a 1-layer isomorphic NN. The discussion in Section 4.3.2 discusses multi-layer feature extraction. If I understand correctly, this means to apply the graph isomorphic layer + min pooling + softmax several times, but this should be stated explicitly.\"}",
"{\"comment\": \"The question mark in section 5.1.1 on page 7 for IsoNN-fast should refer to Equation (8).\\nLatex fails to generate that reference, and just wanna to clarify.\", \"title\": \"Equation reference in Section 5.1.1 for IsoNN-fast\"}"
]
} |
HklUCCVKDB | Uncertainty-guided Continual Learning with Bayesian Neural Networks | [
"Sayna Ebrahimi",
"Mohamed Elhoseiny",
"Trevor Darrell",
"Marcus Rohrbach"
] | Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \textit{importance}. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks. Uncertainty is a natural way to identify \textit{what to remember} and \textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning
and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e., it does not presume knowledge of which task a sample belongs to. | [
"continual learning",
"catastrophic forgetting"
] | Accept (Poster) | https://openreview.net/pdf?id=HklUCCVKDB | https://openreview.net/forum?id=HklUCCVKDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"eOHHzlmHKa",
"rylM6-AsoB",
"Bke__ZAosB",
"rygv8aTojr",
"BkxDC2pjiB",
"HJeE5n6ior",
"Byl4mBDl9B",
"SJlbynqTYS",
"S1ewYgHaKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723122,
1573802425872,
1573802351989,
1573801295194,
1573801166835,
1573801099980,
1572005147595,
1571822552945,
1571799166763
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1432/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1432/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1432/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"While prior work has shown the potential of using uncertainty to tackle catastrophic forgetting (e.g. by appropriate updates to the posterior), this paper goes further and proposes a strategy to adapt the learning rate based on the uncertainty. This is a very reasonable idea since, in practice, learning rate control is one of the simplest and most understood techniques to fight catastrophic forgetting.\\nThe overall approach ends up being a well-motivated strategy for controlling the learning rate of the parameters according to a notion of their \\\"importance\\\". Of course now the question is if this work uses a good proxy for \\\"importance\\\" so further ablation studies would help, but the current results already show a clear benefit.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to R2 -- Part 2\", \"comment\": \"2. \\u201cI am not sure why weighting the learning rate would be a good idea?\\u201d\\n\\nTo us, it seems a very natural and obvious choice. Decreasing the learning rate for important parameters decreases changing them which results in not forgetting previous tasks.\\n\\n=================================================\\n\\u201cHaving high uncertainty may increase the learning rate arbitrarily. Is there a constraint on the standard deviation? Does having a very high weight for learning rate not cause instability during optimization? I think the method would be very sensitive to the initialization of the standard deviation. \\u201c\\n\\nWe follow the initialization strategy used in the original Bayes-by-Backprop (BBB) framework (Blundell et al., 2015). The right initialization is important and we treat it as a hyperparameter which we find, as all other hyperparameters (see above) with the validation sets of the first two tasks. We added a paragraph Bayes-by-backprop (BBB) Hyperparamters: in Section A.2 in appendix in the revised pdf detailing this. When initialized correctly, \\\\rho is not exploding in our experiments with BBB.\\n\\n====\"}",
"{\"title\": \"Response to R2 -- Part 1\", \"comment\": \"We thank the reviewer for their comments and are happy to hear that they find our idea of using uncertainty interesting.In the following we address the individual comments:\\n=================================================\\n1. The reviewer is concerned that our paper \\u201cfails to justify the superiority of the method over other baselines\\u201d and misses and \\\"explanation of why this is the case\\u201d.\\n\\nWhile it is not fully clear to us how \\u201csuperiority\\u201d is defined we believe our approach has the following properties, which make it valuable and interesting:\\nOur approach is novel (R#1)\\nOur approach is \\u201csimple but effective\\u201d (R#3)\\nOur approach makes sense, as it follows Bayesian principles of uncertainty, which means the uncertainty is inherent to the model which we use to define importance. See Section 1, 4th paragraph \\u201cBayesian approaches to \\u2026\\u201d and Figure 1, for more on the motivation.\\nR1 states this \\u201cwork is highly significant\\u201d and is supported with an experimental evaluation \\u201cwith a very large number of baselines\\u201d\\nOur approach is *different* from prior work (as discussed extensively in Section 2); some aspects proposed in prior work are orthogonal to our work, such as usage of episodic memory or model growth; others are just different, e.g. many regularization based methods use an additional \\u201cexternal\\u201d importance parameter for each network parameter, while we exploit the \\u201cadditional\\u201d \\\\rho parameters inherently to Bayesian Neural Network without the need of an \\u201cexternal\\u201d importance parameter.\\n[we don\\u2019t claim our approach is \\u201csuperior\\u201d to all other prior work w.r.t. methodology; however, it is simple and well motivated and experimentally we find that our performance is very competitive (on par or better than prior work) on a broad set of experiments].\\n\\nOverall, given these aspects, we strongly believe our approach will be appreciated by the community.\\n\\n=================================================\\n\\u201cWhat are the drawbacks of EWC, VCL or HAT that the proposed method solves?\\u201d\\n\\nOur UCB is based on Bayesian neural networks and exploits their inherent uncertainty modeling to change the learning rate per parameter.\\n\\nHAT is regularization based but does not use a Bayesian Neural Network.\\nEWC is a Bayesian-inspired method but does not rely on Bayesian Neural Networks, i.e. it does not exploit the inherent uncertainty modeling Bayesian Neural Networks.\\nVCL uses Bayesian inference, in contrast UCB is based on Bayesian neural networks to use their predictive uncertainty to perform continual learning.\\n\\nSection 2, gives a detailed discussion to prior work, and we experimentally support the strength of our method in the paper. \\nAdditionally w.r.t. Bayesian continual learning methods: None of them have been applied on CNNs so we are the only work that have extended it to real world images in a long sequence of tasks. 
See also our challenging 8-task experiment.\\n\\n=================================================\\n\\u201cWhy using uncertainty to define importance works better than using online VI in VCL or fisher information in EWC?\\u201d\\n\\nWe believe this is because the uncertainty in Bayesian Neural Networks gives a good estimate of parameter importance for continual learning.\\nThis is clearly different from VI in VCL or the Fisher information in EWC, and while we do not have a mathematical proof of being better (which might also be difficult), our experiments support our hypothesis that using uncertainty in Bayesian Neural Networks is a good idea.\\n\\n=================================================\\n\\u201c[...] it seems that the model was run a number of times and the best score was reported out of all those runs (especially because the improvement is only marginal).\\u201d\\n\\nWe would like to highlight the very restrictive experimental setup we employ (in contrast to most prior work in continual learning). We only rely on the first two tasks and their validation sets to tune hyperparameters, similar to the setup in Chaudhry et al. (2019) (see \\u201cHyperparameter tuning\\u201d in Section 5.1). \\nWe do not report \\u201cthe best score\\u201d but the average over multiple runs in the main paper, and, in the appendix (Section A.3), we also show standard deviations (Tables 8, 9, 10, 11).\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We thank the reviewer for his/her comments about our work. We reply to the comments in chronological order:\\n\\n1. We have already provided this ablation in Table 5 in the appendix in which we considered other variants for the weight importance: specifically, we look into regularizing \\\\mu and \\\\rho or both and explore if 1/\\\\sigma or |\\\\mu|/\\\\sigma is better for importance measurement. We find that highest accuracy and BWT is achieved by 1/\\\\mu for UCB and |\\\\mu|/\\\\sigma for UCB-P, but other variants don\\u2019t decrease the performance dramatically. We have moved it to the main text as Table 1 on page 7 in the revised version.\\n\\n2. We agree with the reviewer that the memory of the entire model should be taken into account. And we do that, as we detail in section A2. We make sure our UCB matches to the baselines w.r.t. the *total* number of parameters (the sum of \\\\mu and \\\\rho for UCB). In table 1b we also list the *total* number of parameters (For UCB-P the memory for the mask is not included, see also 4. below).\\nWe agree it might be a good option to compare methods by the total memory usage, when comparing regularization with episodic memory based models.\\nWhile there are reasons for and against episodic memory storage (e.g. potential privacy concerns, even when just storing representations), this is not the focus of this work and orthogonal to this work. In fact we believe our approach would benefit from episodic memory, especially in the challenging setting of single head and generalized accuracy (section 6). We leave the exploration of combining UCB with episodic memory to future work.\\nWe also cited the mentioned references in the updated draft in the related work, section 2 under \\u201cmemory-based methods\\u201d subsection.\\n\\n3. We agree with the reviewer and we have, compared to baselines, used *already* only half of the number of weights for UCB, as each weight consist of 2 parameters (see also reply to 2.). In section A.2 we have detailed this aspect; we ensured a fair comparison by matching the *total* number of learnable parameters.\\n\\n4. (Question 1 in comments): Table 1b: For UCB-P the #params match the total number of parameters initially, but also after training for the last task. The reason for this is that we do not prune the network anymore after training the last task (we would do that when the next task arrives). However, we like to note that UCB-P, by using a \\u201chard\\u201d binary mask for each task will use up more and more parameters for each new task it sees which it cannot free by pruning. So when arriving at the last task only relatively few parameters are remaining, the ability to further prune is thus limited.\\nWe detail our pruning procedure in Section 5.1, in the paragraph, \\u201cPruning procedure and mask size\\u201d where we explain what percentage of network is pruned at each time. We agree with the reviewer that pruning (UCB-P) is not efficient and in our experiments it yields lower performance compared to our soft regularization version (UCB) which we introduced as our main method. However, in case it is desired to recover the \\u201cexact same performance\\u201d post-pruning, one might consider using it because soft regularization methods are not zero-forgetting guaranteed. We also want to mention that pruning techniques are not really \\u201czero-forgetting\\u201d because the accuracy drop during pruning can be considered as forgetting as we do in this paper.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"We thank the reviewer for their positive feedback, and are happy that they appreciate the clarity, quality, novelty, and high significance of our work.\"}",
"{\"title\": \"See individual comments and revised pdf\", \"comment\": \"We thank all reviewers for their feedback and we replied to individual reviews directly. We also revised the pdf addressing concerns as discussed in the individual author responses.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a novel method for continual learning with neural networks based on a Bayesian approach. The idea consists in working with Bayesian neural networks, using the Bayes by back-prop approach in which a factorized Gaussian variational distribution is used to approximate the true posterior. To address the continual learning setting, the authors propose to multiply the learning rate of the mean parameters in the posterior approximation by the corresponding standard deviation parameter in the posterior approximation, while the learning rate for the variance parameters in the posterior approximation is not changed. The authors also consider a version of his method which freezes the mean and variance variational parameters when the signal to noise ratio is high. The proposed method is evaluated in exhaustive experiments, showing state-of-the-art results.\", \"clarity\": \"The paper is clearly written and easy to read. The method proposed is well described and it would be easy to reproduce.\", \"quality\": \"The proposed method is well justified and the experiments performed clearly illustrate the gains with respect to previous methods.\", \"novelty\": \"The proposed method is novel up to my knowledge. The methodological contributions do not seem very sophisticated, but the experiments show that the proposed method, despite being very simple, works very well in practice.\", \"significance\": \"The experiments show that the proposed method achieves state of the art results when compared with a very large number of baselines. This indicates that the proposed method will be relevant to the community. In my opinion, this work is highly significant.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"** post rebuttal start **\\n\\nAfter reading reviews and authors' response, I decided not to change my score.\\nI am happy with the author's response addressing my concerns (mainly about the fairness on the size of the model), so I recommend its acceptance. I believe it is a good addition to the community of continual learning.\\n\\n** post rebuttal end **\\n\\n\\n- Summary:\\nThis paper proposes to use a way to improve continual learning performance by taking \\\"Bayes-by-backprop\\\" method. They claim that the uncertainty can naturally be measured by estimating (log of) the standard deviation, and it is indeed useful to judge the importance of each learnable parameter. Experimental results on several benchmarks show that their method outperforms few state-of-the-art methods.\\n\\n\\n- Decision and supporting arguments:\\nWeak accept.\\n\\n1. The proposed method is simple but effective. However, It is still questionable whether \\\\sigma is the best measure of the weight importance. An ablation study with different choices of the importance measure (maybe \\\\mu can also be incorporated as well as \\\\sigma?) would be good to see.\\n\\n2. Survey and comparison with memory-based methods are limited. Though memory-based methods require some memory to keep the experience, the proposed method also requires additional memory for \\\\sigma; it essentially doubles the model capacity, assuming that \\\\sigma is solely for measuring the weight importance. In particular, when it comes to large-scale models, memory for storing some important experiences would be small compared to the memory to store the model.\\nHere are some papers about recently proposed memory-based methods, which are not cited:\\n\\nCastro et al. End-to-End Incremental Learning. In ECCV, 2018.\\nWu et al. Large Scale Incremental Learning. In CVPR, 2019.\\nLee et al. Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild. In ICCV, 2019.\\n\\n3. Comparison should include the model capacity as in Table 1(b). Again, compared to the conventional non-Bayesian model, half of the model capacity is used for computing \\\\sigma (uncertainty), I wonder it causes a performance drop when the model capacity is the same over all compared methods. If they used the same model architecture and just doubled the number of learnable parameters for \\\\sigma, then it is obviously unfair.\\n\\n\\n- Comments:\\n1. Pruning is not beneficial in terms of the performance. I hope to see some quantitative benefits obtained by introducing pruning. In Table 1(b), why doesn't pruning reduce the number of parameters?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"**** Post Rebuttal ****\\n\\nI have read the author's response and other reviewers' comments. In light of comments by other reviewers, I am increasing the score. The paper reports decent empirical results in some challenging settings which might be useful to the continual learning community. \\n\\n**** End ****\\n\\nThe paper presents a simple yet effective way to avoid catastrophic forgetting in a continual learning setting. The proposed approach is referred to as UCB - \\\"Uncertainty Guided Bayesian Neural Networks\\\". The main idea of the approach is to weight the learning rate of each parameter in the neural network by the standard deviation of its posterior distribution. This leads to regularizing parameters that are \\\"important\\\" to tasks seen earlier and thus avoiding forgetting. Results indicate an improvement over other baselines. However, I do not see any analysis of the method that explains this improvement. I do not recommend acceptance.\", \"cons\": [\"My main concern with the paper is that it fails to justify the superiority of the method over other baselines. The numbers reported in the paper do seem good, but I don't see an explanation of why this is the case. What are the drawbacks of EWC, VCL or HAT that the proposed method solves? Why using uncertainty to define importance works better than using online VI in VCL or fisher information in EWC? There is no discussion in the paper about that. Without such a discussion it seems that the model was run a number of times and the best score was reported out of all those runs (especially because the improvement is only marginal).\", \"I am not sure why weighting the learning rate would be a good idea? Having high uncertainty may increase the learning rate arbitrarily. Is there a constraint on the standard deviation? Does having a very high weight for learning rate not cause instability during optimization? I think the method would be very sensitive to the initialization of the standard deviation.\", \"Overall I think the idea of using uncertainties for continual learning is interesting. But from where it stands, I am not fully convinced that this method should do better than existing approaches.\"]}"
]
} |
HygrAR4tPS | On Empirical Comparisons of Optimizers for Deep Learning | [
"Dami Choi",
"Christopher J. Shallue",
"Zachary Nado",
"Jaehoon Lee",
"Chris J. Maddison",
"George E. Dahl"
] | Selecting an optimizer is a central step in the contemporary deep learning pipeline. In this paper we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. Our findings suggest that the metaparameter search space may be the single most important factor explaining the rankings obtained by recent empirical comparisons in the literature. In fact, we show that these results can be contradicted when metaparameter search spaces are changed. As tuning effort grows without bound, more general update rules should never underperform the ones they can approximate (i.e., Adam should never perform worse than momentum), but the recent attempts to compare optimizers either assume these inclusion relationships are not relevant in practice or restrict the metaparameters they tune to break the inclusions. In our experiments, we find that the inclusion relationships between optimizers matter in practice and always predict optimizer comparisons. In particular, we find that the popular adaptive gradient methods never underperform momentum or gradient descent. We also report practical tips around tuning rarely-tuned metaparameters of adaptive gradient methods and raise concerns about fairly benchmarking optimizers for neural network training. | [
"Deep learning",
"optimization",
"adaptive gradient methods",
"Adam",
"hyperparameter tuning"
] | Reject | https://openreview.net/pdf?id=HygrAR4tPS | https://openreview.net/forum?id=HygrAR4tPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"N3aWNtoM73",
"BylPg8onoB",
"HkxqYK9njS",
"rJeiYvthiS",
"r1xU0xF3jH",
"S1gPO1F3oH",
"r1gE7aRior",
"H1eIZjAsjB",
"H1xFiU0ijr",
"rJxDkLFosS",
"BJgXaSKojr",
"SkxtHSYoiS",
"BJxIMERcsS",
"H1xFI-CqsB",
"HyxovxAqjH",
"B1xCUZCEor",
"rylnXZC4sB",
"SyeqKgAVjS",
"HyeprxCNiH",
"HkeYMxCVsS",
"SklkvEVksr",
"BkgPIQ2yqr",
"rJgB9OBCtS",
"B1xmwzcntr",
"ryllFmFoYH",
"H1xd84_iYB",
"SkefKjvjYr",
"SkgTOIfUFB",
"rJxI4nbIFr",
"H1eXQbqEKH",
"B1ligSswdB",
"SJlH7btIdH",
"BkgEI_O8Or",
"r1gfeHUZ_H",
"SJgy9W0kuH",
"HJx1TZTk_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1576798723093,
1573856750778,
1573853570154,
1573848963233,
1573847245899,
1573846894970,
1573805339841,
1573804798501,
1573803680884,
1573783007512,
1573782971353,
1573782849159,
1573737486359,
1573736784637,
1573736547307,
1573343573543,
1573343524184,
1573343362494,
1573343300564,
1573343249183,
1572975703281,
1571959630748,
1571866765351,
1571754587362,
1571685239729,
1571681359809,
1571679097675,
1571329652709,
1571327022366,
1571229978944,
1570383091369,
1570308380961,
1570306123797,
1569969385807,
1569870214863,
1569866166718
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"~Liyuan_Liu2"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"~Frank_Schneider1"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"~Sachin_Rajoria2"
],
[
"ICLR.cc/2020/Conference/Paper1430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"~Matthias_Minderer1"
],
[
"~Liyuan_Liu2"
],
[
"~Boris_Ginsburg1"
],
[
"~Boris_Ginsburg1"
],
[
"ICLR.cc/2020/Conference/Paper1430/Authors"
],
[
"~Liyuan_Liu2"
],
[
"~Boris_Ginsburg1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper examines classifiers and challenges a (somewhat widely held) assumption that adaptive gradient methods underperform simpler methods.\\n\\nThis paper sparked a *large* amount of discussion, more than any other paper in my area. It was also somewhat controversial.\\n\\nAfter reading the discussion and paper itself, on one hand I think this makes a valuable contribution to the community. It points out a (near-) inclusion relationship between many adaptive gradient methods and standard SGD-style methods, and points out that rather obviously if a particular method is included by a more general method, the more general method will never be worse and often will be better if hyperparameters are set appropriately.\\n\\nHowever, there were several concerns raised with the paper. For example, reviewer 1 pointed out that in order for Adam to include Momentum-based SGD, it must follow a specialized learning rate schedule that is not used with Adam in practice. This is pointed out in the paper, but I think it could be even more clear. For example, in the intro \\\"For example, ADAM (Kingma and Ba, 2015) and RMSPROP (Tieleman and Hinton, 2012) can approximately simulate MOMENTUM (Polyak, 1964) if the \\u03b5 term in the denominator of their parameter updates is allowed to grow very large.\\\" does not make any mention of the specialized learning rate schedule.\\n\\nSecond, Reviewer 1 was concerned with the fact that the paper does not clearly qualify that the conclusion that more complicated optimization schedules do better depends on extensive hyperparameter search. This fact somewhat weakens one of the main points of the paper.\\n\\nI feel that this paper is very much on the borderline, but cannot strongly recommend acceptance. I hope that the authors take the above notes, as well as the reviewers' other comments into account seriously and try to reflect them in a revised version of the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re\", \"comment\": \"For the sake of completeness I mention that the revised version includes\\n\\n\\\"In particular, to approximate MOMENTUM with ADAM, one needs to choose a learning\\nrate schedule that accounts for ADAM\\u2019s bias correction.\\\" \\n\\nafter my original note about it in the review.\\n\\nIt is time stop this thread.\"}",
"{\"title\": \"It seems we mostly agree\", \"comment\": \"Our definition of optimizer inclusion (or specialization) was always: A includes B if A can approximate B arbitrarily well up to hyperparameter schedules. See Definition 1 for the formal statement. It seems that, under this definition, we are in agreement that Adam includes momentum (or momentum specializes Adam), since a learning rate schedule can be adjusted to remove bias correction for large epsilon.\\n\\nWe disagree with the original statement in the review regarding the claims of our paper --- we never, in any revision or rebuttal, claimed that Adam can approximate momentum without a learning rate schedule adjustment.\"}",
"{\"title\": \"Re\", \"comment\": \">> In fact, the observation that Adam can approximate momentum is such an uncontroversial observation that it is included as an exercise in an undergraduate course at the University of Toronto (co-taught by Jimmy Ba, one of the authors of the original Adam paper)\", \"my_first_comment_on_the_issue_already_clarified_it_well_enough\": \"\\\"First, I would like to note that the claim that SGD with momentum is a special case of Adam with large epsilon is technically wrong because Adam also includes the bias-corrected momentum estimates which SGD with momentum does not consider. It might seem like a small difference, however it is a form of learning rate schedule which most users of Adam are not aware of. In practice, however, Adam with large epsilon can approximate SGD with momentum. Just don't claim the equivalent since it is not there.\\\" \\n\\ninstead \\\"Adam approximately equivalent to momentum SGD\\\" as the course you mentioned says.\"}",
"{\"title\": \"Re\", \"comment\": \"We also have experiments with ResNet-50 and Transformer which we do not consider models of \\u201clittle interest - small networks\\u201d.\"}",
"{\"title\": \"We include the bias correction term as b_{t+1} in Table 1\", \"comment\": \"As stated in the appendix, we use TensorFlow\\u2019s AdamOptimizer implementation, which differs trivially from the algorithm described in the Adam paper. Regardless, since our inclusion proof takes beta2=0, it applies equally well to both the TensorFlow AdamOptimizer and the algorithm in the Adam paper. We do not \\u201cdrop the bias correction term\\u201d in either our experiments or our proof (see b_{t+1} in Table 1).\\n\\nIn fact, the observation that Adam can approximate momentum is such an uncontroversial observation that it is included as an exercise in an undergraduate course at the University of Toronto (co-taught by Jimmy Ba, one of the authors of the original Adam paper), see Question 2(b) of http://www.cs.toronto.edu/~rgrosse/courses/csc421_2019/homeworks/hw2.pdf . We do not consider this observation to be a contribution of our paper; rather, our contribution is showing that this observation predicts relative optimizer performance under a realistic tuning protocol and budget.\"}",
"{\"title\": \"Re\", \"comment\": \"From ICLR 2016: https://arxiv.org/pdf/1604.07269.pdf\"}",
"{\"title\": \"Re\", \"comment\": \">> We provide a proof in Appendix A that Adam includes Momentum under the definition of optimizer inclusions in Section 3. If you are saying the proof is wrong, please provide a counterexample or point out a mistake in the proof so we can understand your point.\\n\\nIt does not deal with Adam but with YourAdam where you drop the bias correction term. \\n\\n>> Even if one considers the same optimizer with a different learning rate schedule to be a different optimizer, our proof still correctly shows that for an arbitrary schedule A, there exists a schedule B such that Adam-with-schedule-B is equivalent to Momentum-with-schedule-A. But we do not believe it is useful to consider the learning rate schedule as \\u201cpart of the algorithm\\u201d since, in practice, people use all kinds of different schedules with Adam (or any of the other popular algorithms).\\n\\nAfter I said that the bias correction term can be viewed as a learning rate schedule, you started to use it as an argument that then there is no difference since any learning rate schedule can be used. However, I meant that the bias correction term can be viewed as a learning schedule not that there is an exact translation. In fact, there is no as you can see in Algorithm 1 of https://arxiv.org/pdf/1412.6980.pdf where the two momentums also interplay with epsilon. Thus, it is only in approximation one can view it as a learning rate schedule. \\nYou don't just change the learning rate of Adam to endup with momentum SGD, you would also need to remove the bias correction term. If a practitioner would use some learning rate decay for Adam, then this decay would be on top of the bias correction effect and not instead of it.\"}",
"{\"title\": \"Re\", \"comment\": \">> The idea that adaptive gradient methods generalize worse than non-adaptive methods is a widely-held belief in our community\\n\\nIt was shown that it is partially due to the use of L2 regularization and not weight decay. The use of L2 and not weight decay is yet another thing that would make a difference between Adam and momentum SGD, i.e., Adam with L2 does not translate to SGD with L2 due to the adaptive part of Adam. You would need AdamW instead. \\n\\n>> If tuning (epsilon, alpha0/epsilon) vs (epsilon, alpha) was the only reason we got good results with adaptive gradient methods and thus the only reason they performed better than non-adaptive methods\\n\\nSee the reply of Sachin Rajoria. \\n\\n>> Despite the reviewer\\u2019s skepticism about our search spaces and tuning protocol, our test errors (including Momentum\\u2019s and plain SGD's) are better than previous optimizer comparisons (Wilson et al., 2017, Schneider et al., 2019).\\n\\nYour target error rates for CIFAR-10 is 7% this is something that people used to show for ICLR 2015. WideResnets published in 2016 already had about 4% error in baseline settings. In other words, the regime that you show the results for is of little interest - small networks or networks not trained long enough.\"}",
"{\"title\": \"Our budget was not small compared to previous work and test error shows we tuned well\", \"comment\": \"We ran 100 trials on most workloads with the exception of 50 for both ResNet-50 on ImageNet and Transformer on LM1B (we repeated all these experiments multiple times to get error bars). This is more trials than standard practice for hyperparameter tuning on these workloads.\\n\\nWe get better test error than the optimizer comparisons of Wilson et al. 2017, Schneider et al. 2019, and our ImageNet results are better than Goyal et al., 2017.\\n\\nCan you provide a reference that compares optimizers on any of our workloads and uses dramatically more tuning trials?\"}",
"{\"title\": \"Re\", \"comment\": \"We provide a proof in Appendix A that Adam includes Momentum under the definition of optimizer inclusions in Section 3. If you are saying the proof is wrong, please provide a counterexample or point out a mistake in the proof so we can understand your point.\\n\\nEven if one considers the same optimizer with a different learning rate schedule to be a different optimizer, our proof still correctly shows that for an arbitrary schedule A, there exists a schedule B such that Adam-with-schedule-B is equivalent to Momentum-with-schedule-A. But we do not believe it is useful to consider the learning rate schedule as \\u201cpart of the algorithm\\u201d since, in practice, people use all kinds of different schedules with Adam (or any of the other popular algorithms).\"}",
"{\"title\": \"Getting better results is a good thing, not a bad thing\", \"comment\": \"The idea that adaptive gradient methods generalize worse than non-adaptive methods is a widely-held belief in our community. Our paper points out that this conclusion is theoretically unlikely as well as untrue under a protocol that anyone can implement.\\n\\nIf tuning (epsilon, alpha0/epsilon) vs (epsilon, alpha) was the only reason we got good results with adaptive gradient methods and thus the only reason they performed better than non-adaptive methods, then we would be happy to present that as a major contribution of our paper since this is a simple trick that anyone can do. However, we find that the primary cause of the confusion around whether adaptive gradient methods perform worse than non-adaptive methods is whether or not epsilon is tuned, which is what we emphasize in the paper.\\n\\nDespite the reviewer\\u2019s skepticism about our search spaces and tuning protocol, our test errors (including Momentum\\u2019s and plain SGD's) are better than previous optimizer comparisons (Wilson et al., 2017, Schneider et al., 2019).\"}",
"{\"title\": \"Re\", \"comment\": \">> Only the adaptive optimizers *have* an epsilon parameter that needs to be searched, so this cannot be unfair to SGD or Momentum since they already benefit from not having to tune the hyperparameter at all.\\n\\nYes, they have an epsilon and it is treated in a particular way. If you would optimize it without performing any transformation such as alpha0/epsilon you would get worse results and that would affect your conclusion. Now, you come up with a transformation and you get better results. What was the purpose of the transformation if not to get better results = if not to affect your conclusion about similarities in performance of Adam and momSGD. \\n\\n>> Taking the process yet further, we doubled the trial budget for all optimizers and re-ran these equal-width-search-space experiments. \\n\\nWith the curse of dimensionality at hand and *when* the search space dimensionality is large, random search or grid search will happily consume 10x budget without deriving much.\"}",
"{\"title\": \"Re\", \"comment\": \"The language used is not the main problem. The main problem is that the results and conclusions are affected by this choice of a small budget and bad hyperparameter optimizer.\"}",
"{\"title\": \"Re\", \"comment\": \">> Our paper is correct in stating that SGD with momentum is a special case of Adam in the limit of large epsilon because alpha is allowed to depend on the step number.\\n\\nIt is not correct because you can't go from one to another without changing another parts of the algorithm, here, the bias correction or learning rate schedule.\"}",
"{\"title\": \"We have incorporated these suggestions into the latest version\", \"comment\": \"Thank you for the review. We believe we have incorporated all suggestions into the latest revision of the manuscript. Regarding the effect of network structure on hyperparameter choices, we agree that this is an important point. All aspects of the workload likely affect the best hyperparameter configurations, at least to some extent. It is an interesting question whether there is more structure in these effects, and we leave that question to future work.\\n\\nRegarding the inclusion relationships, it is certainly true that tuning, say, RMSProp much more thoroughly than Adam will make RMSProp get better results if neither optimizer has been close to optimally tuned. But as we tune all optimizers more and more carefully, the theoretical inclusion relationships will become the dominant effect. That still doesn't tell us what will happen between RMSProp and Adam since they don't include each other, but it does tell us what will happen between Momentum and Adam. At some point, given a particular family of learning rate schedules, it will no longer be possible to improve Momentum with additional tuning (at least on the test set; we can overfit the validation set just by trying more and more random seeds). \\n\\nAlthough a crucial point of our paper is that tuning protocols matter a lot and there may not be a way to be completely fair when comparing different optimizers, we are not saying that nothing can be learned from empirical comparisons. If the reader is willing to accept our particular parameterization of the learning rate schedules, we believe our conclusions will not change as we use more and more tuning trials in our setup. Indeed, the results of our additional experiments in response to reviewer #1 show that although we can continue to reduce validation error slightly by narrowing our search spaces and/or running more trials, we cannot reduce our test error for ResNet-32 on CIFAR-10 with more tuning, and our conclusions remain the same regardless.\\n\\nWe hope that we have adequately addressed all the concerns in this review and that the reviewer will consider raising their score accordingly. Please let us know if you believe additional issues remain with the newest version.\"}",
"{\"title\": \"We have incorporated your feedback into a revised version of the manuscript\", \"comment\": \"We thank the reviewer for their encouraging feedback. We incorporated this feedback into a new revision, which we believe is much stronger.\\n\\nWe agree with both Reviewers 2 & 3 that Section 3 could have been more concise and informative. We trimmed and restructured this section, removed Algorithm 1, and moved the update rule definitions into the main body.\\n\\nGiven that at least 2/3 of the reviewers find the term \\\"hyperparameter\\\" clearer than \\\"metaparameter\\\" we have adopted that language in the latest revision of our paper.\\n\\nWe hope that we have adequately addressed all the concerns in this review and that the reviewer will consider raising their score accordingly. Please let us know if you believe additional issues remain with the newest version.\"}",
"{\"title\": \"Clarifying the contribution of our paper (response to 2nd paragraph of review #1)\", \"comment\": \"To facilitate discussion, we have responded to the three paragraphs of review 1 in separate threads. The 2nd paragraph seems to contain the reviewer\\u2019s primary concerns, so we focus on that first. Our latest revision should resolve the other issues mentioned by the review.\\n\\n- - - Summary - - - \\n\\n1. We do not think the concerns raised in the 2nd paragraph are reasonable, nor is it reasonable to accuse us of acting in bad faith after we exceeded the standards of transparency and care in the literature (e.g. by reporting preliminary search spaces as well as final ones).\\n\\n2. The reviewer was concerned that search spaces with nominally larger learning rate ranges for SGD and Momentum might be unfair relative to Adam, even though Adam faces a higher dimensional search problem.\\n\\u25cf Although part of the point of our work is that we don't think any search spaces are completely fair, we do not think our original search spaces were unreasonable or biased our conclusions. Nevertheless, we ran additional experiments on CIFAR10 designed to, if anything, give non-adaptive optimizers an advantage and confirmed that using the \\\"same\\\" ranges wouldn't change our conclusions.\\n\\n3. The reviewer was concerned that because we decided to search (epsilon, alpha0/epsilon) instead of searching (epsilon, alpha), we somehow unfairly penalized the non-adaptive optimizers because the former parameterization of the search space is more efficient for Adam in our experience.\\n\\u25cf Our choice here is akin to log-transforming 1-momentum when tuning Momentum or log transforming learning rate. Only the adaptive optimizers *have* an epsilon parameter that needs to be searched, so this cannot be unfair to SGD or Momentum since they already benefit from not having to tune the hyperparameter at all.\\n\\n- - - Full response - - - \\n\\nOne of the central points of our paper is that any empirical comparison of optimizers depends on the tuning protocol. No protocol we are aware of can guarantee fairness between optimizers with incommensurate search spaces -- and yet, empirical optimizer comparisons are crucial for developing new optimizers and guiding practitioners training neural networks. To our knowledge, no other study has highlighted how these comparisons are sensitive to the tuning protocol and which hyperparameters are tuned. Regarding our own results, we make clear in Section 5 that we should only expect our detailed findings to hold for similar workloads under similar protocols.\\n\\nAlthough it is always difficult to guarantee fairness when tuning over optimizers with incommensurate search spaces, this is still true for protocols that attempt to use the \\\"same\\\" search space for Adam and Momentum (Adam\\u2019s learning rate parameter is more closely related to (learning rate)/(1 - momentum), so the optimal ranges are almost always different between the two optimizers). Why should practitioners tie one hand behind their backs and only search a set of alpha values when tuning Adam that are the same as the set of learning rate values they search when tuning SGD? Given the importance of tuning protocols, practitioners must decide which protocol most closely captures their own when deciding which optimizer comparison is most relevant to them. 
Since our protocol produces results that exceed the performance from other optimizer comparisons in most cases (see Figures 3 and 4), we expect that readers will prefer using something similar to our tuning protocol.\\n\\nIn light of reviewer 1\\u2019s concerns about our CIFAR-10 experiments, and to further support our claim that all optimizers were well tuned, we ran additional experiments with ResNet-32 on CIFAR-10. We ran an experiment with SGD, Momentum, and Nesterov with a search space where all learning rate and momentum ranges had the same width as the ranges for Adam's similar hyperparameters (ignoring that they are not necessarily the same units). We centered these new ranges on the best points from the original search, making the new comparison, if anything, unfair to the adaptive methods. Although further narrowing the search space about the best validation error reduces the mean validation error, our conclusions do not change from what was reported in the paper. Taking the process yet further, we doubled the trial budget for all optimizers and re-ran these equal-width-search-space experiments. Again, although the best validation error improved slightly for all optimizers, test error did not change much, and we see no clear evidence that the inclusion relationships are violated. See imgur.com/a/j38e1HP.\\n\\nUltimately, it is extremely difficult to judge whether one of two incommensurate search spaces provides some sort of advantage -- again, a key point in our paper -- but we firmly believe our results are robust and that our protocol did not unrealistically bias our conclusions in favor of any one optimizer over another.\"}",
"{\"title\": \"Technicalities in approximating SGD with momentum using Adam\", \"comment\": \"Our paper is correct in stating that SGD with momentum is a special case of Adam in the limit of large epsilon because alpha is allowed to depend on the step number. We have added an extra note in the paper to make this clearer. Adam\\u2019s bias correction does indeed act as a form of learning rate schedule (as the review pointed out), but we can always choose the schedule for Adam\\u2019s alpha to exactly match an arbitrary learning rate schedule of SGD with momentum. This schedule is listed explicitly in Appendix A.\\n\\nOf course, practitioners cannot tune Adam over all possible learning rate schedules nor over arbitrarily large values of epsilon, so it is not guaranteed that a practitioner will actually find the hyperparameter values that allow Adam to match or exceed the performance of SGD with momentum. This is the point of our paper! We demonstrate that even if Adam is tuned over finite values of epsilon and linear learning rate schedules, it always matches or exceeds the performance of SGD with momentum in the workloads we considered.\"}",
"{\"title\": \"Insufficiency of 16 trials\", \"comment\": \"Thanks for pointing out the overly strong language about the efficiency of the final search spaces we found. We have revised the manuscript to avoid these overly strong claims and, since it is not a crucial part of our argument, moved the figure in question to the appendix.\"}",
"{\"title\": \"Thanks for the reference : -)\", \"comment\": \"Thanks for pointing out the reference : -)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper provides an empirical comparison of a set of first-order optimization methods for deep learning models. Those optimizers include stochastic gradient descent, momentum method, RMSProp, Adam, Nesterov, and Nadam, which arguably covers all popular variants used in the literature. Although it is not the first empirical study on this topic, its conclusion differs slightly. The conclusion is a rather intuitive one: With proper parameter search, the 'richer', more powerful optimizers tend to work better, regardless of the downstream tasks.\", \"pros\": [\"Intuitive results with a well designed workloads and experiments. For practitioners that want to start their own hyperparameter search, the workloads and setups are likely to be useful.\"], \"cons\": [\"I am not entirely convinced that the inclusion relationship is indeed a major cause or indicator of different optimizers' performance. There is no theoretical justification; Empirically, if one takes two optimizers equally rich and tunes one of them more intensively, one should expect a better performance, too.\"], \"suggestions\": [\"I think at least the basic definitions of different optimizers should be given in the main text. Otherwise, readers without detailed knowledge of all these optimizers cannot follow the paper. For example, the paper starts talking about the taxonomy of the optimizers with their corresponding hyperparameters in Section 3.2 before giving any functional form of the optimizers.\", \"I would suggest the authors to follow the convention and use the term \\\"hyperparameter\\\" rather than \\\"metaparameter\\\". The readers of this paper are not primarily Bayesian, there is really no need to divert from the convention. Besides, the term \\\"Bayesian hyperparameter tuning\\\" is widely used even.\", \"I wonder to which extent the network structures impact the choice of the hyperparameter (e.g., CNN vs. RNN).\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"First, I would like to note that the claim that SGD with momentum is a special case of Adam with large epsilon is technically wrong because Adam also includes the bias-corrected momentum estimates which SGD with momentum does not consider. It might seem like a small difference, however it is a form of learning rate schedule which most users of Adam are not aware of. In practice, however, Adam with large epsilon can approximate SGD with momentum. Just don't claim the equivalent since it is not there. \\n\\nI have some difficulties understanding the contribution of the paper. For example \\n\\\"When tuning all available metaparameters under a realistic protocol at scales common in deep learning,\\nwe find that more general update rules never underperform their special cases.\\\"\\nIn practice you do adjust hyperparameter search spaces to fit your conclusions, e.g., \\\"We found that searching over (epsilon, alpha0/epsilon) was more efficient than searching over (epsilon, alpha).\\\" Again, this alone invalidates your experimental setup since you biased it in order to fit your conclusion: \\\"was more efficient\\\" was found after running some prior experiments. \\nAnother situation where your experimental setup is unfairly tuned is when you used different hyperparameter ranges for similar hyperparameter, e.g. see D.2 for ResNet-32 on CIFAR-10 where 6 orders of magnitute difference was used for the initial learning of Momentum and 3 orders of magnitude difference for the initial learning rate of Adam. Similarly, there is a difference of 10x for ImageNet experiments. \\n\\nThe paper suggests that 16 experiments is enough to produce good results. First, one should not forget the special arrangements (see above) done for hyperparameter search space. Second, for any person working in black-box optimization it is clear that 16 experiments is next to nothing. It should give you something good in 1D, possibly in 2D if your search range is narrow. This is absolutely nothing in larger dimensions (providing that your benchmark in not super trivial and your hyperparameter search space is not absolutely boring when you already narrowed it around the optimum). After 16 evaluations you get pretty bad settings for most algorithms.\", \"update\": \"The paper uses a naive hyperparameter optimizer and runs it for a very small budget. The latter likely affects the conclusion of the paper that different training algorithms perform similarly. The authors seem to accept it by mentioning that this is the case for their tuning protocol/budget. \\n\\nIf we would like to compare different training algorithms, we should optimize them on a set of problems using 2-3 state-of-the-art hyperparameter optimizers. Then, we should study how the best seen solutions so far and their robustness change as a function of the computation budget (the maximum budget should be large enough). Then, one would see that the results are not that different for small budgets (a boring result) and somewhat different for larger budgets. Showing only the boring part seems more misleading than useful. \\n\\nUpdate#2:\\nAs I mentioned in my review, Adam with large epsilon is not equivalent to momentum SGD but only approximates the latter. 
This is because the original Adam has a bias correction term, and even if the same *global* learning rate schedule is used both for Adam with large epsilon and momentum SGD, they are not equivalent. In order to obtain the exact equivalence, one would need to either\\n1) drop the bias correction term of Adam and thus modify the algorithm in order to satisfy the claimed equivalence\\nor \\n2) set a particular learning rate *for each batch pass* of Adam to simulate the effect of the bias correction term; this leads to a large number of hyperparameters - as many as the number of batch passes - which is intractable (the setup of the authors does not optimize such batch-wise hyperparameters; they are defined by a global scheduler as a function of batch/epoch index). \\nIf you avoid these modifications, then you can't claim the equivalence but only an approximation. If you don't have the equivalence of the two approaches and so momentum SGD is not a particular case of Adam, then the following sentence from the abstract is false: \\\"As tuning effort grows without bound, more general optimizers should never underperform the ones they can approximate (i.e., Adam should never perform worse than momentum)\\\". Again, strictly speaking, it is false that \\\"Adam should never perform worse than momentum\\\" because momentum SGD is not a particular case of the original Adam *unless* you drop the bias correction term or simulate it with tons of hyperparameters, one learning rate value per batch pass. Any global learning rate schedule *used for both* algorithms will not solve the issue because the bias correction term will remain. If you don't modify the learning rate schedule of Adam but only of momentum SGD, then you basically adjust your SGD by moving some part of Adam into it to claim the equivalence of the two; such actions can make pretty much every second algorithm equivalent to another. \\n\\nMy main concern is described in the first Update. It is trivial that a more general optimizer is capable of performing at least as well as its particular case. What is not trivial is to clarify the interplay of computational budgets spent on hyperparameter tuning vs. the number of hyperparameters vs. performance over time.\"}",
"{\"title\": \"Regarding DeepOBS\", \"comment\": \"Full disclosure: We are the authors of DeepOBS [Schneider et al., ICLR 2019]. We are _not_ assigned as reviewers to this paper. Since this paper contains several statements about our work, we nevertheless would like to clarify some aspects of our work to the benefit of the community and the reviewers.\\n\\nWe built DeepOBS as a tool for researchers who develop new deep learning optimizers, to allow them to efficiently test their new method on realistic but feasible architectures and compare to baselines. DeepOBS offers a set of test problems, benchmarks, and automated evaluation procedures. One of our goals was to encourage such authors to make their new methods practically useful. We thus also chose the benchmarks to reflect the kind of effort an applied end-user of deep learning might realistically invest in parameter-tuning.\", \"to_be_clear\": \"It was never a core goal of DeepOBS to argue in favor or against any particular optimizer. The DeepOBS paper only offered benchmarks on SGD, Adam and momentum SGD, as these remain among the most popular ones among practitioners. We expected all three of these to be beaten handily by newer methods (not because they are bad, but because newer methods aren\\u2019t interesting if they don\\u2019t even beat the most common competitors). In fact, everyone is invited to contribute their own optimizers to the DeepOBS benchmark, we happily accept pull-requests to\", \"our_repo_at_https\": \"//deepobs.github.io and add such results to the leaderboard.\\n\\nIt is correct that for our analysis, we treated the $\\\\beta_1$, $\\\\beta_2$, and $\\\\epsilon$ parameters of Adam as constant because, as the present authors note themselves, this is common practice. The present paper argues that Adam should always dominate (SGD) because, when all these are treated as free parameters, Adam actually contains (momentum) SGD as a special case (Adam then has 4 parameters, compared to SGD's single one). This is an interesting point. If it turns out that these four parameters can indeed be tuned as efficiently as the paper argues (i.e. that the same search budget for SGD and 4-parameter-Adam leads to better performance for the latter), we would be very happy to add this (arguably new) optimizer to our benchmark.\\n\\nOne problem we see in this regard is that the present paper pre-specifies different search domains for each of the benchmark problems. For example, for FashionMNIST, the authors decided to use very small (i.e. the usual) values for Adam\\u2019s epsilon (Table 7), but for ImageNet they searched only large values of epsilon (Table 19) (in that latter case, the update rule of Adam is arguably closer to momentum SGD than to Kingma\\u2019s & Ba\\u2019s original Adam). So the paper actually labels a different algorithm as `Adam\\u2019 for each separate test problem. The authors explain these choices in Section 4 by stating that\\n\\n> [\\u2026] the provenance of each search space is difficult to trace exactly. In some cases, our search spaces were informed by published results or prior experience with particular models and optimizers.\\n\\nThis stance is difficult to reconcile with the notion of a benchmark. We believe a benchmark should be indicative of the performance the evaluated method would have on _new_, not previously explored problems, because this is the situation practitioners actually face. 
To the user, an optimizer consists of an update rule and a space of (not just a number of) tunable hyperparameters. If the space of good hyperparameters differs from one benchmark to the next then, for a new problem, the true search-space really is the union of all these spaces. How would a user otherwise know how to choose the search space on a domain-model-combination they have never faced before?\\n\\nWe want to emphasize that we very much welcome this work, including its constructive criticism of DeepOBS, because we strongly believe that the current practice of empirical comparisons of deep learning optimizers is flawed. While DeepOBS certainly does not solve all problems, we think that it is a good step in the right direction. We would be very happy to work with the authors to see whether their insights could make their way into a future version of DeepOBS.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents experimental data supporting the claim that the under aggressive hyper-parameter tuning different optimizers are essentially ranked by inclusion --- if the hyper-parameters of method A can simulate any setting of the hyper-parameters of method B then under aggressive hyper-parameter tuning A will dominate B. One way to achieve this rather trivially is to do a hyper-parameter search for B and then set the hyper-parameters of A so that A is simulating B. But the point here is that direct and feasible tuning of A with dominate B even in the case where A has more hyper-parameters and where hyper-parameter optimization of A would seem to be more difficult. An important conclusion is that without loss of generality one can always use Adam even in vision where SGD is currently the dominant optimizer used in practice. Another important conclusion is that quasi-random hyper-parameter optimization is quite effective.\\nI find the claims to be intuitively plausible and I find the empirical results compelling.\\n\\nJust a couple minor complaints. First,\\\"Hyper-parameter\\\" please not \\\"meta-parameter\\\". I find the attempt to overturn standard usage inappropriate. Second, I found most of section 3 uninformative. I don't think algorithm 1, or the definition of a first order optimizer adds anything to the paper. Inclusion can be easily defined in complete generality for any parameterized algorithm.\"}",
"{\"title\": \"typo\", \"comment\": \"\\\"large epsilon (1e-8)\\\" ->\\\"large epsilon (1e-3)\\\"\"}",
"{\"title\": \"Previous detailed comparison\", \"comment\": \"Previously a more thorough comparison in my opinion has been carried out in https://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization.pdf and appendix D: https://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization-supplemental.zip\\n\\nAcross multiple domains and models, large epsilon (1e-8) has been compared to default epsilon (1e-8) in Adam. Another interesting thing noted was often use of large epsilon slowed down initial progress, so one should do super quick early termination when doing parameter search.\"}",
"{\"comment\": \"Metaparameters were popularized in the evolution computation community in the 80ies for defining and optimizing parameters of GAs/EAs. This was before MacKay et al. popularized hyperparameters in the early 90ies. I don't think that the use of the term \\\"hyperparameter\\\" outside of its original domain takes away anything from the Bayesian community. Since thousands of papers in ML and optimization define things like \\\"all the configuration parameters and optimizer parameters we tune using validation error when training neural networks\\\" as hyperparameters, it might be a little bit too late and more confusing than useful to redefine back as metaparameters.\", \"title\": \"but\"}",
"{\"comment\": \"We use the term \\\"metaparameter\\\" because \\\"hyperparameter\\\" has a specific technical meaning in Bayesian machine learning as a parameter that controls a prior distribution over other parameters. Although admittedly a pedantic point, most uses of \\\"hyperparameter\\\" in the deep learning literature to describe quantities such as learning rates are technically incorrect. We believe the Bayesian usage of the term should prevail here and we should use the more generic term \\\"metaparameter\\\" to refer to all the configuration parameters and optimizer parameters we tune using validation error when training neural networks. Radford Neal and other prominent Bayesian machine learning researchers have made this complaint for years.\", \"title\": \"\\\"Hyperparameter\\\" is not the most technically correct term\"}",
"{\"comment\": \"Why do you use the term \\\"metaparameter\\\" instead of the more common \\\"hyperparameter\\\"?\\n\\nI'm not familiar with the term \\\"metaparameter\\\", but Wiktionary suggests that a metaparameter is a \\\"parameter that controls the value of one or more others\\\", whereas hyperparameter is \\\"a parameter whose value is set before the learning process begins\\\". The latter seems to be more generally appropriate for the parameters discussed in the paper, even though some of them may qualify as metaparameters.\\n\\nIf the distinction is important, it might be worthwhile to comment on it in the paper (sorry if you already do and I missed it). If it's not important, it may be best to stick with the more common term to avoid confusion.\", \"title\": \"Hyperparameters vs metaparameters?\"}",
"{\"comment\": \"Hmmm, personally, I think the major contribution of this paper, as suggested by the title, is the empirical comparison. Since setting a larger epsilon is briefly mentioned in existing literature (e.g., in the doc for f.train.AdamOptimizer), I don't think the author will choose to claim this as the novelty. Still, to the best of my knowledge, it is the first time to have a detailed empirical study on the effect of this parameter.\", \"title\": \"Adam-with-large-epsilon is not novel but it may be the first time to have the detailed comparison\"}",
"{\"comment\": \"Maybe the paper novelty is in suggesting a new way how you can morph Adam into SGD with momentum? For example you can start training with small epsilon (Adam ) and increase epsilon during training to 100 (SGD), similarly to Padam?\", \"title\": \"E-Adam?\"}",
"{\"comment\": \"Adam is fundamentally based on the idea of adaptation of first moment by second moment estimation, and the original motivation for epsilon was just to avoid dividing by zero. It would be very interesting to see how large is the value of second moment comparing to epsilon, especially when you use large epsilon. From hyper-parameter search point of view, large epsilon unify Adam and SGD with momentum into one search. But I wonder, if there is a gain in checking whole epsilon range vs just doing hyper-parameter search for Adam and SGD separately?\", \"title\": \"\\\"Adam. What\\u2019s in a name?\\\"\"}",
"{\"comment\": \"We agree that there are some subtle questions here. First, and most important from our perspective, should researchers tune epsilon over ranges that include large epsilon values? Our results strongly suggest the answer is yes! We found that starting with Adam and tuning all its metaparameters is an effective way to proceed. On some workloads, a larger epsilon is optimal, while on other workloads, like Transformer, a smaller epsilon is optimal. Since we can't predict the best value of epsilon ahead of time, we need to tune it. Our related work section mentions several examples where other researchers have found relatively large values of epsilon to be useful, and as Liyuan Liu points out, larger values are common in reinforcement learning.\\n\\nSecond, can we still call Adam with very large values of epsilon \\u201cAdam\\u201d? This is an inherently difficult question to answer. As we show in Appendix A (and as you also pointed out), Adam continuously approximates Momentum in the limit as epsilon approaches infinity. At which finite value should Adam be referred to as Momentum? This should depend on the objective function being optimized. In our paper, we refer to the algorithm from Kingma and Ba (2015) with any finite positive epsilon as \\u201cAdam\\u201d.\", \"title\": \"\\\"A rose by any other name would smell as sweet\\\"\"}",
"{\"comment\": \"Good question, I have the same doubt for a long time...\\n\\nPersonally speaking, setting epsilon to a large value makes the resulting algorithm different from the vanilla Adam. Intuitively, its effect is the same with calculating the arithmetic average between the raw 2rd moment and a prior. If the arithmetic average is changed to be the geometric average, the resulting algorithm is PAdam [1]. Therefore, Adam-with-large-epsilon is more like a variant of Adam. \\n\\nAt the same time, in some cases, people refers 'Adam with a large epsilon' as 'Adam' (e.g., it seems that using a large epsilon is quite common & useful in reinforcement learning). In these cases, it is not treated as a variant...\\n\\nOverall, I am not sure whether we should refer 'Adam-with-large-epsilon' as a new variant, or as 'Adam' after parameter tuning. \\n\\nChen, Jinghui, and Quanquan Gu. \\\"Closing the generalization gap of adaptive gradient methods in training deep neural networks.\\\" arXiv preprint arXiv:1806.06763 (2018).\", \"title\": \"On the epsilon value\"}",
"{\"comment\": \"The epsilon values used fin Adam and Nadam are very large . For example epsilon used in ResNet-50 experiments are [10^-2; 10^2] and for Nadam [10^3;10^7]. Such high epsilon effectively disable the normalization of 1st moment by second moment, and algorithm becomes as SGD with momentum. This clearly contradicts to the whole idea beyond Adam. Can you still call it Adam?\", \"title\": \"Is Adam with large epsilon still Adam?\"}"
]
} |
B1xBAA4FwH | On Evaluating Explainability Algorithms | [
"Gokula Krishnan Santhanam",
"Ali Alami-Idrissi",
"Nuno Mota",
"Anika Schumann",
"Ioana Giurgiu"
] | A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community. Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, and confidence of the generated explanations. These metrics are computationally inexpensive, do not require model retraining, and can be used across different data modalities. We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients. Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works. | [
"interpretability",
"Deep Learning"
] | Reject | https://openreview.net/pdf?id=B1xBAA4FwH | https://openreview.net/forum?id=B1xBAA4FwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7CKNyNrAk7",
"SJxA9XP3ir",
"Hke_3jWhir",
"BJgz9iZ3jB",
"BJxTMj-hor",
"Hyl21jZ3oH",
"ByeP5jZy5r",
"SJx1EXq0YH",
"SklqelnnKB",
"BJxYPItUYr",
"BklPydLsdS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798723063,
1573839765903,
1573817263584,
1573817225679,
1573817109007,
1573817060500,
1571916686539,
1571885862768,
1571762161998,
1571358305145,
1570625503032
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1429/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1429/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1429/AnonReviewer3"
],
[
"~TING_TING_SUN1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes metrics for comparing explainability metrics.\\n\\nBoth reviewers and authors have engaged in a thorough discussion of the paper and feedback. The reviewers, although appreciating aspects of the paper, all see major issues with the paper. \\n\\nAll reviewers recommend reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes to the paper\", \"comment\": \"We would like to thank the reviewers for their valuable feedback.\\nThe following are the changes to the paper that we have made in response to their comments.\\n\\n1. Added experimental results on the effect of thresholding on the masking process.\\n2. Added experimental results on the effect of grey background as compared to our proposed masking technique.\\n3. Added additional equations to clarify how our metrics are computed.\\n4. Added more more images in the appendix that exemplify our proposed masking used in the correctness metric.\\n5. A clearer explanation of how the consistency metric is evaluated.\\n6. More in-depth discussion into why confidence and correctness are not redundant. \\n7. Minor rewrites to fix grammar and typos.\\n8. Added additional citations to related works.\\n\\nWe hope that these address the reviewers' concerns and that they would reconsider their evaluation of our work.\\n\\nRegards,\\nThe Authors\"}",
"{\"title\": \"Response to ReviewerIII - pt 2\", \"comment\": \"Efficacy of confidence vs. number of pixels comparisons\", \"response\": \"While the two numbers do come from the same experiment, we would like to point out that they are complementary to each other. Correctness gives us a coarse, distribution level view, i.e, how well the explainer performs on average. Confidence on the other hand, also informs us about the per-instance behaviour. In some instances, especially in the medical diagnosis domains, per-instance behaviour is more important than a distribution-level statistic. Additionally, if method A and B have similar correctness scores, Confidence and Entropy (reported in table. 3) gives us more in-depth information about the per-sample performance of these methods, allowing us to differentiate between the performance of both models \\n\\n\\nThanks, \\nThe Authors\", \"difference_between_confidence_and_correctness\": \"\"}",
"{\"title\": \"Response to ReviewerIII - pt 1\", \"comment\": \"We would like to thank the reviewer for their valuable comments. We address their concerns below.\", \"not_compared_with_prior_work\": \"\", \"response\": \"These metrics are widely used in classification and ranking tasks. One can indeed view the task of generation of explanation as being analogous to the retrieval task (as noted in [1]). Following this formulation, we try to map the changes in accuracy we see with our normal masking and inverse masking experiments to these metrics which are well understood by the community. We do acknowledge that the mapping is not one-to-one and thus label our metrics with a \\u201cpseudo\\u201d prefix.\", \"the_combined_images_are_still_out_of_distribution\": \"\", \"references\": \"[1] Samek et.al, Evaluating the visualization of what a Deep Neural Network has learned\"}",
"{\"title\": \"Response to ReviewerII\", \"comment\": \"Thank you for your kind words and the succinct summary of our paper. We would like to address your comments one by one.\", \"comment_1\": \"Proposed masking generates image that are still out of the data distribution.\", \"response\": \"We have added experimental results for this configuration in the appendix. You can find them in table 10. We find that using the grey background does not solve the issue either. \\n\\n\\nWe hope these explanations address your concerns satisfactorily.\\n\\nRegards,\\nThe Authors\", \"comment_2\": \"Concerns about information leakage from background image used in masking.\", \"comment_3\": \"Concerns about how consistency is evaluated\", \"comment_4\": \"Effect of grey masking instead of black masking\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"We would like to thank the reviewer for their valuable insights. Please find below our responses addressing your concerns.\\n\\nComment 1. Lacks Technical Novelty\", \"response\": \"In addition to gradient based methods like GradCAM, SmoothGrad and Integrated Gradients, we also evaluated our metric suite on LIME, the most widely used perturbation based explanation method for our comparison. The other perturbation methods can be similarly evaluated as well. \\n\\nWe would like to emphasize that our work focuses on proposing a comprehensive suite of metrics to evaluate explainers and performing a comparative study using common explainers to show the suite\\u2019s behaviour. An exhaustive comparison of every explainer is unfortunately out of the scope of this paper but we do consider it as an important future work that can be collaboratively taken up by the XAI community. \\n\\n\\nThanks\\nThe Authors\", \"we_would_like_to_emphasize_our_major_contributions\": \"1. We identify issues with current masking procedures as proposed in other papers\\n2. We propose a cost-effective masking technique that doesn\\u2019t require retraining of the underlying classifier\\n3. We identify the one-dimensionality of current research into evaluating explainers, proposing other important components. \\nWe also further show that when performing a comprehensive evaluation, there is no one clearly better explainer and thus practitioners need to be careful about which explainer they choose.\", \"comment_2\": \"Confidence is redundant relative to correctness.\", \"comment_3\": \"Inverse Saliency is already proposed\", \"comment_4\": \"Effect of Thresholding on results\", \"comment_5\": \"Evaluating Perturbation based methods\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Dear TING TING SUN,\\nThank you for your kind words and your comments. \\n\\nYou are right in pointing out that the masked images could still be out of distribution. We hypothesized (in Sec. 3.2.1) that our masking provides samples that are closer to the data distribution than those with a black / blank background. \\n\\nWe have further calculated the Inception Score[1] and FID[2] of the samples produced by our method and the black background. These results will be included in an update during the rebuttal phase. \\n\\nHere's the table of the scores we computed. \\n\\nInception Score\\nExplainer\\t Black background Our Method\\nInteg. Grad \\t 21.01\\t\\t 89.1564\\nSmoothGrad\\t \\t24.4578\\t 137.6948\\nGradCAM \\t\\t231.4284\\t\\t 428.9047\\nLIME \\t\\t\\t60.2835\\t\\t 137.9088\\n\\n\\nFID Score\\nExplainer\\t Black background Our Method\\nInteg. Grad \\t 108.8858\\t \\t66.0593\\nSmoothGrad \\t 91.0726\\t\\t46.9035\\nGradCAM 409.6676\\t\\t1.0368\\nLIME\\t\\t 62.2233\\t\\t32.9805\\n\\n\\nAs you can see, in both the metric, our method outperforms the simple blank pixel background.We see from these results that, indeed our method produces inputs closer to the data distribution.\\n\\nRegarding your note on using rotations as a sematically invariant transform while evaluating consistency, our decision to include the rotation was motivated by the fact that real-world objects under rotation remain unchanged to an human observer (expect for rare cases like the digit 9). This decision was not influenced by whether neural networks were invariant to the transformation or not.\\n\\nFurthermore, we only take those inputs for which the prediction of the underlying classifier does not change under the transformation as described in Sec. 3.2.2. Additionally, we also showed that the classifier accuracy on the entire dataset doesn't change significantly under the rotations we consider (Please see Table. 1 in section 4).\\n\\nFinally, we would like to re-emphasize that this is an evaluation towards the explainability algorithms. In fact, we keep the underlying classifier unchanged under all the experimental setups. This allows us to factor out the any influence that pathologies of the underlying classifier might have on the observed results.\\n\\nRegards,\\nThe Authors\\n\\n[1] Improved Techniques for Training GANs https://arxiv.org/abs/1606.03498\\n[2] GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium https://arxiv.org/abs/1706.08500\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"--------- AFTER rebuttal\\n\\n1) \\\"We identify issues with current masking procedures as proposed in other papers\\\"\\n\\nOne of the major issues with the current masking procedure is that the resulting image is out of the data distribution. Even though your method achieved high accuracy in Table 2 for correctness, the generates images is still out of the data distribution. \\n\\n2) \\\"We propose a cost-effective masking technique that doesn\\u2019t require retraining of the underlying classifier\\\"\\n\\nThe authors compared against zero and gray masking for correctness. None of those masking methods require retraining of the underlying classifier. It is not clear, which previous masking technique required retraining of the underlying classifier?\\n\\n3) We also further show that when performing a comprehensive evaluation, there is no one clearly better explainer and thus practitioners need to be careful about which explainer they choose.\\n\\nThis is an observation made upon through exploratory analysis and is not a technical novelty.\\n\\n4) \\\" Confidence on the other hand, also informs us about the per-instance behavior.\\\"\\n\\nThe confidence measures the change in probability assigned to the ground truth class. Table 3 should also show the variance in the confidence to understand the instance-level behavior.\\n\\nThe experiments given in the paper, it looks like confidence and correctness are positively correlated. An example of the model where they are not positively correlated will help the reader understand the importance of each of these terms.\\n\\n5) \\\" Effect of Thresholding on results\\\"\\nThank you for the explanation and new experimental results.\\n\\n\\n\\n\\n------------------------- BEFORE rebuttal\\nThe paper proposed different metrics for comparing explainers based on their correctness (ability to find most relevant features in an input, used in prediction), consistency (ability to capture the relevant components while input is transformed), and the confidence of the generated explanations. To evaluate correctness, the authors proposed to study the change in the classification accuracy of the target model, under a perturbed dataset where the most relevant regions (as given by explainer) of the image is preserved and the remaining content is replaced with non-informative backgrounds for the target class. For consistency evaluation, the authors proposed to apply transformations like rotation, translation and flip that doesn\\u2019t semantically change the input image. For confidence evaluation, they compared the prediction performance on the original image, masked image (only salient regions) and inverted masked image (only non-salient regions). \\n\\nMajor\\n\\u2022\\tThe paper lack technical novelty.\\n\\u2022\\tThe confidence component looks redundant and can be incorporated in the correctness component.\\n\\u2022\\tThe inverse saliency map idea is already proposed in \\u201cEvaluating the visualization of what a deep neural network has learned\\u201d for evaluating saliency maps. There the authors gradually replace the most salient regions with random noise and observe a decrease in prediction accuracy.\\n\\u2022\\tMost of the saliency maps producing methods, generate continuous maps. 
For masking, we need to convert the continuous map to a binary one by using a threshold. An analysis of choosing different values as the threshold is missing. By choosing an appropriate threshold, the size of the most salient region can be controlled. Thus, although Grad-CAM spreads saliency over a large area, we can use a higher threshold to define the binary mask.\\n\\u2022\\tGrad-CAM, Integrated Gradients and SmoothGrad are all gradient-based saliency maps. There are perturbation-based saliency maps, which aim to find the most salient regions such that removing those regions produces a maximum drop in prediction accuracy. Examples: \\u201cInterpretable Explanations of Black Boxes by Meaningful Perturbation\\u201d, \\u201cObject detectors emerge in deep scene cnns\\u201d. An evaluation of such methods is missing.\\n\\nMinor:\\n\\u2022\\tThe text in the figures has a very small font size and is not readable.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"See post-rebuttal updates below!\\n\\nSummary\\n---\\n\\n(motivation)\\nThere are lots of heat map/saliency/visual explanation approaches that try to deep image classifiers more interpretable.\\nIt's hard to tell which ones are good, so we need better ways of evaluating explanations.\\nThis paper proposes 3 such explanation evaluation metrics, correctness, consistency, and confidence.\\n\\n(approach - correctness)\\nAn explanation is correct if it highlights enough of an image for a classifier to tell the correct class with only the highlight parts of the image.\\nThe default way to evaluate on only highlighted portions is to set the non-highlighted bckground to black/grey.\\nInstead, this method finds images with the same ground truth class which the classifier scored the lowest of all such images, forming a low-confidence baseline.\\nIt copies the background from one of these images instead of using a black/grey background\\nto try and put the masked image back into the distribution of images from the ground truth class.\\nThis style of masking is used to compute correctness.\\n\\n(approach - consistency)\\nAn explanation is consistent if it is invariance w.r.t. a number of mostly semantically invariant transformations.\\nThese include small affine transformations, horizontal flips, vertical flips, and adding noise.\\n\\n(approach - confidence)\\nAn explanation is confident if the masked images it produces still have high condidence under the classifier.\\nMasked images are produced as for correctness, by copying a distractor from the same class into the background.\\n\\n(experiments)\\nThe experiments compare existing explanations (LIME, Grad-CAM, Integrated Gradients, SmoothGrad) using the proposed metrics.\\n1. Correctness: Classifiers have higher accuracy on explanation-masked images than on images they were least confident on (the ones used to fill in the background).\\n2. Grad-CAM is most correct, followed by SmoothGrad, Integrated Gradients, and LIME.\\n3. Consistency: Grad-CAM explanations are most resilient to the proposed transformations with Integrated Gradients, SmoothGrad, and LIME being successively less invariant.\\n4. Confidence: Explanation-masked images have higher scores for their ground truth class than the low-confidence baseline images.\\n5. Hyperparameter variations in the correcness/confidence metrics mostly preserve the ranking of methods, though the absolute values of performance do change substantially.\\n\\n(conclusion)\\nThe paper concludes that Grad-CAM is usually the best of the methods tested according to the new metrics and that LIME is the worst.\\n\\n\\nStrengths\\n---\\n\\nI really like the related work section. It could be a valuable resource going forward.\\n\\nI like the research direction of this paper very much. I think that enumerating a suite of complementary benchmarks is a good way to measure explanation quality because we can only come up with benchmarks that capture a small part of what we want so far.\\n\\n\\nWeaknesses\\n---\", \"i_see_some_major_conceptual_flaws_with_these_metrics\": \"* In section 3.1 it seems like the first reasons that normal masking failed is not solved by the proposed approach. 
The generated images are still out of distribution because the \\\"foreground\\\" and the \\\"background\\\" don't match.\\n\\n* I'm concerned about the low-confidence distractor images used in the background. They are from the same ground truth class as the high-confidence images they are pasted into the background of, correct? The correctness metric is supposed to capture whether or not an explanation highlights all the class-relevant content in an image and no more. However, information that the explanation did not highlight (the background) can inform the classifier of the ground truth class because the background came from an image of that class (even if a low-confidence one). This is especially true because the relevant objects might be in different positions in the two images. Thus it could be that the explanation did not highlight informative content but the classifier still gets the corresponding masked image correct because of the background. How often does this happen?\\n\\n* Consistency is supposed to measure \\\"the ability of the explainer to capture the relevant components\\\" under semantically invariant transformations.\\nThe reported metric is minimized when the explanation is the same before and after a variety of transformations.\\nIf this were the case then at least one of them must be wrong in the sense that it would not have captured some relevant components\\n(unless perhaps it just highlighted everything and was thus useless).\\nBecause of the transformation (e.g. a 15 degree rotation) the relevant components would have been at a different position, but the best explanation according\\nto this metric would have been at the same position. Thus this metric seems to reward explanations for not capturing relevant components.\\n\\n\\nParts I Didn't Understand:\\n\\n* In section 3.1, I don't understand the second reason that masking failed. In what sense is masking made meaningless? How is that sense different from the out-of-distribution concern from the first point?\\n\\n\\nMissing Details / Presentation Weaknesses:\\n\\n* Missing reference to [1] which provides more metrics.\\n\\n* The meaning of confidence is different from what it normally is and this may be confusing.\\nNeural networks should be well calibrated, not necessarily confident (in the commonly used sense of [3]).\\n\\nMinor flaws:\\n\\n* Masking by replacing the background with grey (i.e., the bias of the first conv layer) rather than black is more common (e.g., [2] and Grad-CAM). A grey background negates the bias. It's not clear that the background should cancel the bias, but it would be nice to compare to both grey and black masking in Table 7.\\n\\n\\n[1]: Adebayo, Julius et al. \\u201cSanity Checks for Saliency Maps.\\u201d NeurIPS (2018).\\n[2]: Zeiler, Matthew D. and Rob Fergus. \\u201cVisualizing and Understanding Convolutional Networks.\\u201d ECCV (2013).\\n[3]: Guo, Chuan et al. \\u201cOn Calibration of Modern Neural Networks.\\u201d ICML (2017).\\n\\n\\nFinal Evaluation\\n---\\nThis paper relies solely on theoretical arguments to show its metrics capture meaningful information. Empirically, it only shows that the proposed metrics can differentiate between some popular explanations. It does not empirically show that the differentiation is meaningful (e.g., by measuring agreement with human judgement). This by itself isn't a problem. 
However, above I detailed significant flaws in the theoretical justification for the metrics, so I can't recommend these metrics (this paper) on either a theoretical or an empirical basis.\\n\\nQuality: Per above, I do not think the arguments/evidence in the paper support its conclusions.\\nClarity: The paper could be clearer, but can be understood without too much effort.\\nOriginality: These metrics are new enough, being novel variations on prior approaches.\\nSignificance: If I were convinced the metrics made sense then I would guess this paper would be very impactful. As is, I don't think it will have much impact.\\n\\nThe quality of the paper is my reason for the low rating. I'm interested to see what others think to make sure I've understood the paper correctly and analyzed it accurately. If my understanding is incorrect I could definitely raise my rating.\\n\\nPost-Rebuttal Evaluation\\n---\\nAfter reading the other reviews and the author responses and taking a brief look at the updated paper, I still think this paper should be rejected.\\n\\nThe authors' response to my comments clarified my understanding of the consistency metric. Now I understand it and think it is a useful metric.\\n\\nHowever, I did not find clarification about the confidence or correctness metrics, though I agree they are not redundant. They still don't quite make sense to me. This puts me in about the same position as R3, who also doubts those metrics. In the end, this leaves my initial evaluation essentially unchanged. I still recommend rejection because the paper relies on a theoretical understanding of what makes the confidence and correctness metrics useful, and that understanding is not provided.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the interesting question of comparing the deep network visualization algorithms quantitatively. Several metrics are proposed, including correctness, consistency and confidence.\\n\\nI like the notion of consistency, where an explainer should produce the same explanation under transformations of the image that does not change its \\u201csemantic content\\u201d. \\n\\nHowever, I am confused or unconvinced by several arguments made in the paper, and if the authors can clarify them I am willing to increase my review. I think the major issue is that most metrics are justified with flimsy arguments, not compared with prior work, and do not lead to consistent ranking of the models.\", \"correctness\": \"I am not convinced by the correctness evaluation for several reasons\\n\\n1. The combined image is still out of distribution, and it is unclear why this is better compared to e.g. using a white background. \\n2. Does it favor visualization methods with a blob-shaped saliency map vs. scattered dots shaped saliency map? Does it favor methods with a larger salient region? For example, just from visual appeal, I do not think smoothgrad is worse than gradCAM, but the number says otherwise. I think the arbitrariness of this metric makes the numbers hard to believe. \\n3. If the original image is already incorrectly classified (since they are the ones where the classifier assigns the lowest probability) it is hard to imagine that adding random background can make the performance worse.\\n\\nTherefore, it is also unclear what to make of the numbers in e.g. Table 2. There are so many metrics, precision, recall, F1, and none of them seem particularly well justified. They also do not rank the model in the same way. Which result should a practitioner believe?\", \"confidence\": \"I am not sure the confidence vs. number of pixels comparison are useful. Across all methods, it seems to be more pixels -> increased confidence, which is unsurprising. I think the results are only useful if one method pareto dominates another, which is not what is observed in the experiments.\\n\\nI do not understand the difference between confidence and correctness. It seems like both measure how well a model can predict the correct class given only the salient region. For example, if method A has higher confidence and lower correctness compared to method B, what does that mean? Under which situation should one choose method A over method B?\"}",
"{\"comment\": \"Hi! Nice work on evaluation. I have a few questions.\\n\\nIn section 3.2.1, you use a special method to generate a masked dataset. You claim that these masked images belong to the data distribution. But in my opinion, some masked images of Figure 1 are still unnatural. Is it possible that these masked images are also out of the data distribution, just like the images with empty pixels?\\n\\nIn the experiments of evaluating consistency, you use rotations as semantically invariant transforms. But neural networks are not invariant to rotations. Is this an evaluation towards explainability algorithms on rotation robustness?\", \"title\": \"Some questions\"}"
]
} |
r1lHAAVtwr | Deep Hierarchical-Hyperspherical Learning (DH^2L) | [
"Youngsung Kim",
"Jae-Joon Han"
] | Regularization is known to be an inexpensive and reasonable solution to alleviate over-fitting problems of inference models, including deep neural networks. In this paper, we propose a hierarchical regularization which preserves the semantic structure of a sample distribution. At the same time, this regularization promotes diversity by imposing enlarged distances between parameter vectors within semantic structures. To generate evenly distributed parameters, we constrain them to lie on \emph{hierarchical hyperspheres}. Evenly distributed parameters are considered to be less redundant. To define the hierarchical parameter space, we propose to reformulate the topology space with multiple hypersphere spaces. On each hypersphere space, the projection parameter is defined by two individual parameters. Since maximizing the groupwise pairwise distance between points on a hypersphere is nontrivial (the generalized Thomson problem), we propose a new discrete metric integrated with a continuous angle metric. In extensive experiments on publicly available datasets (CIFAR-10, CIFAR-100, CUB200-2011, and Stanford Cars), our proposed method shows improved generalization performance, especially when the number of super-classes is larger. | [
"learning",
"regularization",
"distributed parameters",
"deep",
"inexpensive",
"reasonable solution",
"problems",
"inference models",
"deep neural networks",
"hierarchical regularization"
] | Reject | https://openreview.net/pdf?id=r1lHAAVtwr | https://openreview.net/forum?id=r1lHAAVtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zTUF0GNypS",
"HkxHNcwwjB",
"Sye7vbvwjr",
"Byx2GZDDsS",
"H1l11ZDwsH",
"Hklvx0P-9S",
"Hyg-cD3ycB",
"H1eHQGojFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723034,
1573513772776,
1573511514821,
1573511443807,
1573511383347,
1572072942733,
1571960712875,
1571693084548
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1428/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1428/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1428/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1428/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1428/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1428/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1428/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a hierarchical diversity promoting regularizer for neural networks. Experiments are shown with this regularizer applied to the last fully-connected layer of the network, in addition to L2 and energy regularizers on other layers. Reviewers found the paper well-motivated but had concerns on writing/readability of the paper and that it provides only marginal improvements over existing simple regularizers such as L2. I would encourage the authors to look for scenarios where the proposed regularizer can show clear improvements and resubmit to a future venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"For All Reviewers\", \"comment\": \"We appreciate reviewers for their valuable reviews and constructive feedback. We have addressed all of individual review's comments. Here, we summarize responses to reviewers' comments. Firstly, we have revised the manuscript rigorously for a readable experimental section: we have clarified settings of compared methods, revised result tables, and rearranged an order of paragraphs. We have revised other sections too. Secondly, We respond to the comment that the proposed method shows marginal performance improvement. We note that our proposed method showed performance improvement systematically on all datasets. Over datasets, the amount of performance improvement seems dependent on the quality of hierarchical information. Especially, the performance improvement is significant (more than 3%) on CUB200 dataset. More improvement is shown if more well-defined hierarchical information is used. Another reason is that the regularization is incrementally applied to the baseline, i.e. baseline, baseline+l2 (weight decay) + \\u2018E\\u2019 with proposed metrics + \\u2018H\\u2019 with proposed hierarchical regularization.\"}",
"{\"title\": \"Response to the review from AnonReviewer1\", \"comment\": \"Thanks for your valuable review and constructive feedback!\\n\\nQ1. Improve readability.\", \"a\": \"[A,B] are quite useful references for hierarchical learning research. Our method is focused on parameter regularization while hyperbolic function based networks [A, B] are focused on representation learning. As hierarchical representation learning via hyperbolic spaces in our method would be a very useful strategy, we will apply it in a future work. We revised related works and conclusion sections by adding these references and related explanation.\"}",
"{\"title\": \"Response to the review from AnonReviewer3\", \"comment\": \"Thank you for the supportive review.\\n\\nQ1. several popular normalization\", \"a\": \"We appreciate the reviewer for suggesting this. Parsing of hierarchical label from the datasets is another task required much time. We planned to use ImageNet, unfortunately, parsing was not ready before the submission. We are currently preparing to apply ImageNet in the experiment.\"}",
"{\"title\": \"Response to the review from AnonReviewer2\", \"comment\": \"Thanks for your valuable review and constructive feedback!\\n\\n1. Response to the overall feedback\", \"datasets_and_performance_improvement\": \"Thanks for a supportive review to the proposed approach.\\n- In the experiment section, we conducted visual classification using four datasets not only CIFAR-10 and CIFAR-100, but also CUB200 and Cars. The improvement by hierarchical regularization with different metrics showed a different trend along datasets. Over datasets, the performance improvement seems dependent on the quality of hierarchical information. Especially, the performance improvement is significant (more than 3%) on CUB200 dataset. CUB200 has a well-defined hierarchical information which is categorized by an expert as mentioned in the manuscript. And this dataset has more superclasses are defined compared to the other dataset.\\n\\n- We observed that weight decay (L2 norm of the weights in the gradient descent setting) is very powerful, validated in [Zhang et al 2019]. In addition to this l2, our proposed hierarchical regularization is applied only to the FC layer whereas l2 and E (energy minimization) are applied over all layers. This could be one of reason why huge improvement is not shown in some datasets.\\n\\n[Zhang et al. 2019] \\\"Three Mechanisms of Weight Decay Regularization\\\", Guodong Zhang and Chaoqi Wang and Bowen Xu and Roger Grosse (ICLR 2019)\\n\\n- We revised the corresponding experimental paragraphs rigorously. For example, we added \\\"Baseline setting\\\" paragraph for clarifying comparisons. Responses in detail are shown below.\\n\\n2. Responses to the detailed feedback\\nQ1. Writing improvement. e.g., clearly stating regularization equations E, H, L2.\", \"a\": \"It is mainly because of l2-norm regularization. To clarify it, we revised Table 1, by indicating \\\"with or without 'l2'\\\". As mentioned above, l2 regularization is a weight decay term which realizes an effective learning rate. The weight decay term seems quite effective on datasets on CIFAR-10 and CIFAR-100 which consist of small image size (32 x 32) compared to larger size dataset (CUB200 and Cars, 224 x 224). Furthermore, as CIFAR-10 has ten classes only, a regularization term seems quite effective on this condition.\\n\\nNote that we have revised an order of tables, Table 3 and 4 became Table 5 and 6 respectively, and vice versa.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"## Summary\\nThe paper tackles the problem of promoting diversity in the weights of deep neural networks. The problem is interesting and useful. The paper argues that hierarchical learning and hyper spherical learning are important in addressing this problem. The paper provides experiments on CIFAR-10 and CIFAR-100 where the improvements of using such regularization is visible but not sufficiently significant.\\n\\n## Contribution of the paper\\n1. The paper proposed a regularization to training neural networks with discrete angular distance metric on the weights.\\n2. The paper shows improved performance on CIFAR-10 and CIFAR-100.\\n\\n## Overall feedback\\nI found the paper is well motivated and the proposed approach to be interesting. But I found the experimental validation a bit confusing. The improvments of the proposed approach also seems quite marginal. The contribution of different regularization terms is not understood clearly as well. So I am leaning towards rejection.\\n\\n## Detailed feedback and questions for rebuttal\\n1. The writing could be improved significantly. I had a hard time to find exactly what different regularization terms are, e.g., E, H, L2. The paper could be more clear by clearly stating these regularization equations.\\n2. Please capitalize \\\"eq. (1)\\\" to \\\"Eq. 1\\\".\\n3. It seems there are three regularization E, L2 and H. But different tables show different combinations. For example, Table 1 has E and E+l2 while Table 2 has E and E+H and Table 3 has only E+H. Can you provide full results on all datasets on E, E+l2, E+H? Without seeing the full results it is hard to draw any conclusions.\\n4. Please correct the text \\\"resnet-100\\\" to \\\"resnet-110\\\" assuming you are using resnet-110.\\n5. It seems E+H improves marginally over E. Can you elaborate the explanation about it?\\n6. Why E+l2 improves so much (+2%) on CIFAR10?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a regularization strategy motivated with principles of hierarchical, hyperspherical and discrete metric learning. Through regularization of as designed in level-wise, group-wise with the hierarchy of network, in their experiments with classification dataset, better performance are achieved with various distance.\", \"pros\": \"\", \"1\": \"The paper is also related with several popular normalization strategies such as weight normalization/standardization, group/batch normalization. It would be more convincing that some comparison could be performed against these strategies.\", \"2\": \"There would be better to show its performance using larger dataset such as ImageNet or COCO detection.\", \"cons\": \"\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a hierarchical regularization framework based on hierarchical hyperspheres. In particular, the paper tackles the problem of diversity promoting learning. Following (Liu et al., 2018), pairwise distances between parameters on hyperspheres are used in the regularization framework.\", \"the_topology_of_the_parameter_space_is_reformulated_with_multiple_hypersphere_spaces_which_are_each_defined_by_two_parameters\": \"the centroid of a sphere and its surface vector. Multiple strategies involving hierarchical hyperspherical structures are proposed in Section 3 (continuous and discrete).\\nThe relevance of the proposed method is experimentally demonstrated on different computer vision datasets.\", \"i_vote_for_reject_for_the_following_reasons\": \"- The paper is hard to read in general. Although the method section is understandable, its readability could be improved because each method currently just looks like a succession of equations. The paper also does not really give an intuition of why (or what contexts) one of the proposed regularizers would be better than the others. \\n- The reported (test accuracy) scores do not seem significantly better than the l2 baseline: none of the reported scores beats the l2 baseline by at least 1 percent, and it is unclear how that difference is measurable. How many splits/different initalizations were used? Why not give standard deviation over different test splits? etc... Given the fact that the improvements do not seem significant compared to a single baseline, a proper evaluation with standard deviation should be provided.\\n- Although the paper cites (Liu et al., 2018) as motivation for their framework, why does the proposed method does not compare to the other related work (i.e. works by Xie)?\\n\\nAs a side note, the paper seems to motivate the use of multiple spherical spaces to represent hierarchies. Recent work in machine learning has shown the advantage of using hyperbolic geometry [A,B] to represent trees, hence hierarchies. \\n\\n\\n[A] Nickel and Kiela, Poincar\\u00e9 Embeddings for Learning Hierarchical Representations, NIPS 2017\\n[B] Ganea et al., Hyperbolic Neural Networks, NeurIPS 2018\\n\\n\\n======= after the rebuttal\\n\\nI have read carefully the updated manuscript, other reviews and rebuttal.\\nMy score does not change since the proposed method does not seem to improve much compared to a simple weight decay (baseline + l2 regularization). The motivation of using the method for a very small improvement is not convincing. The \\\"well-defined hierarchical information which is categorized by an expert as mentioned in the manuscript\\\" can also be exploited by hyperbolic representations and should then also be compared (as baseline).\"}"
]
} |
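The reviews and rebuttals above revolve around three regularizers: an l2 weight-decay term over all layers, a hyperspherical "energy" term 'E' that spreads weight directions apart via pairwise angular distances on the sphere, and a hierarchical term 'H' applied only to the final FC layer. As a rough illustration of the first two ingredients only, here is a minimal PyTorch sketch; the inverse-angular energy form, the function names, and the coefficients are illustrative assumptions rather than the paper's exact formulation, and the hierarchy-dependent 'H' term is omitted because it depends on the label taxonomy.

```python
import torch
import torch.nn.functional as F

def hyperspherical_energy(W: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Riesz-style energy of the rows of W (one row per neuron) projected onto
    the unit hypersphere; minimizing it pushes weight directions apart."""
    Wn = F.normalize(W, dim=1)                      # project each neuron onto the sphere
    cos = (Wn @ Wn.t()).clamp(-1 + eps, 1 - eps)    # pairwise cosine similarities
    theta = torch.acos(cos)                         # pairwise angular distances
    iu = torch.triu_indices(W.size(0), W.size(0), offset=1)
    return (1.0 / theta[iu[0], iu[1]]).mean()       # inverse-distance "energy"

def regularized_loss(task_loss, fc_weight, all_params,
                     lam_e: float = 1e-3, lam_l2: float = 5e-4) -> torch.Tensor:
    # 'E' on the weights being diversified plus l2 (weight decay) over all
    # layers, mirroring the "baseline + l2 + E" progression in the rebuttal.
    l2 = sum(p.pow(2).sum() for p in all_params)
    return task_loss + lam_e * hyperspherical_energy(fc_weight) + lam_l2 * l2
```

In practice the l2 term would more commonly be handed to the optimizer as `weight_decay` rather than added by hand; it is written out here only to make the "baseline + l2 + E" comparison in the tables explicit.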
SkxV0RVYDH | Versatile Anomaly Detection with Outlier Preserving Distribution Mapping Autoencoders | [
"Walter Gerych",
"Elke Rundensteiner",
"Emmanuel Agu"
] | State-of-the-art deep learning methods for outlier detection make the assumption that anomalies will appear far away from inlier data in the latent space produced by distribution mapping deep networks. However, this assumption fails in practice, because the divergence penalty adopted for this purpose encourages mapping outliers into the same high-probability regions as inliers. To overcome this shortcoming, we introduce a novel deep learning outlier detection method, called Outlier Preserving Distribution Mapping Autoencoder (OP-DMA), which succeeds in mapping outliers to low-probability regions in the latent space of an autoencoder. For this, we leverage the insight that outliers are likely to have a higher reconstruction error than inliers. We thus achieve outlier-preserving distribution mapping through weighting the reconstruction error of individual points by the value of a multivariate Gaussian probability density function evaluated at those points. This weighting implies that outliers will result in a lower overall penalty if they are mapped to low-probability regions. We show that if the global minimum of our newly proposed loss function is achieved, then our OP-DMA maps inliers to regions with a Mahalanobis distance less than delta, and outliers to regions past this delta, delta being the inverse Chi-Squared CDF evaluated at (1-alpha) with alpha the percentage of outliers in the dataset. Our experiments confirm that OP-DMA consistently outperforms the state-of-the-art methods on a rich variety of outlier detection benchmark datasets. | [
"Anomaly detection",
"outliers",
"deep learning",
"distribution mapping",
"wasserstein autoencoders"
] | Reject | https://openreview.net/pdf?id=SkxV0RVYDH | https://openreview.net/forum?id=SkxV0RVYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"OGiTNr_XP",
"HJgfgdccsS",
"S1l4S4y9jS",
"SyxGNNRFiS",
"ryehE_265r",
"HygKPD2hqr",
"SyxflCHBtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798723007,
1573722090322,
1573676091864,
1573671978400,
1572878387787,
1572812641318,
1571278314276
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1427/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper1427/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper1427/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes an outlier detection method that maps outliers to low probability regions of the latent space. The novelty is in proposing a weighted reconstruction error penalizing the mapping of outliers into high probability regions. The reviewers find the idea promising.\\nThey have also raised several questions. It seems the questions are at least partially addressed in the rebuttal, and as a result one of our expert reviewers (R5) has increased their score from WR to WA. But since we did not have a champion for this paper and its overall score is not high enough, I can only recommend a reject at this stage.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your feedback and insight from your experience in this domain. Unfortunately, we do not believe we can edit the name of the paper on this submission site in order to remove the reference to \\\"anomaly\\\" detection. However, all references to \\\"anomalies\\\" have been changed to \\\"outliers\\\" within the actual document. To address your other points:\", \"question_1\": \"\\u201cAuthors have not done a good survey on existing outlier detection methods. Eg:...\\u201d\\n\\nOUR RESPONSE. Thank you for alerting us about these particular methods. As you have suggested, we will add these three papers to our related work section. \\n\\nWe have added the method used \\u201cAdversarially Learned One-Class Classifier for Novelty Detection\\u201d (ALOCC) as one of the methods we compare our OP-DMA against. While the authors have released their implementation of ALOCC, this method was designed to work with image data. As we did not wish to drastically change the architecture of this method in order to work with non-image data, as added a single dense layer to the beginning of the network that takes in input data of any shape and outputs the data in a shape that can be accepted by the following convolutional layer. Our analysis shows that we outperform ALOC in all but 1 of the dataset we tested. \\n\\nAdditionally, we have now added into our comparative study an additional recent state-of-the-art deep outlier detection method [1]. We have chosen this method not only because it is also a deep neural network method, but also in addition because this work had in fact already compared their method against several datasets we also have evaluated OP-DMA against. \\n\\nAs you can see in our experimental result table in Table 2, OP-DMA outperforms this state-of-the-art method on all ODDs datasets besides 2 datasets.\", \"question_2\": \"\\u201cThis is the first time I'm seeing the OOD dataset used in the paper. Have other works published their results on this dataset? Can they be included in your paper?. If not, consider reporting results on standard datasets used in papers (B). I believe reporting results on at least two datasets is necessary to demonstrate the generalizability of the method\\u201d\\n\\nOUR RESPONSE. We believe that there may be some misunderstanding here. Namely, OOD is not a data set, but instead a benchmark repository that archives and makes available a rich variety of distinct (labeled) data sets for outlier analysis research. It is indeed a popular benchmark used by related work systems focussing on outlier research in their experimental studies.\\nFurther, we note that we have tested on 11 different datasets all coming from this repository, not just 1. While all of the datasets came from the ODD repository, each of the 11 datasets we worked with are unique real-world data sets. Also, as stated above, we have added in a comparison to this select recent deep outlier detection method [1] which had already been evaluated on several of the datasets we used also from this ODD repository.\\n\\n[1] Generative Adversarial Active Learning for Unsupervised Outlier Detection\\nYezheng Liu, Zhe Li, Chong Zhou, Yuanchun Jiang, Jianshan Sun, Meng Wang & Xiangnan He\\nIEEE Transactions on Knowledge and Data Engineering (TKDE 2019)\"}",
"{\"title\": \"Thank you\", \"comment\": \"We thank you for your detailed feedback and time on our work. Your insights are very much appreciated. We have fixed the typos, incorrect references to figures and added in reference to AAEs in our experimental section.\\n\\nQUESTION 1. \\u201cthe novelty here is to enforce that on the latent space in the context of a variational auto-encoder. I am not sure if, from anomaly detection perspective, this is any better than simply using the reconstruction score. Why go the VAE route at all?\\u201d \\n\\nOUR RESPONSE. To address your question, the reason the distribution mapping framework is important is that it allows us to calculate the likelihood of each point in the latent space, which can be leveraged to weight the reconstruction error of each point by its likelihood. The reason we do this weighting rather than just directly using this reconstruction error is that in standard autoencoders the average reconstruction error for outliers and the average reconstruction error for inliers often tends to converge to the same value. This is in fact what we show experimentally in Figure 4 (a). \\n\\nAs can be seen, the autoencoder initially has a higher reconstruction error for outliers than inliers, but iit is quickly able to reproduce both inliers and outliers roughly equally well before converging. Our OP-DMA solution, on the other hand, succeeds to maintain this difference in reconstruction error throughout the training process. We accomplish this by weighting the reconstruction loss for outliers by their likelihood in the latent space. By performing distribution mapping and making the distribution of the latent space match a prior distribution for which the PDF is known and tractible, we can calculate the likelihood of each point in the latent space.\\n\\nAlso, just to be clear, we indeed use a WAE architecture rather than a VAE solution in our work as you can see in Section 3. The reason for this is that, unlike a VAE, the WAE encourages the latent representations as a whole to match the prior, as also detailed in our Section 3.\", \"question_2\": \"\\u201cIs there a possibility that assuming a single multi-variate Gaussian, as a prior, is too restrictive? Could it result in a high false alarm rate as well?\\u201d\\n\\nOUR RESPONSE. A multivariate Gaussian prior for the latent space of VAEs and WAEs has been shown to be sufficient to represent data in well in a variety of domains including those characterized by very complicated data, including images [1] and text [2].\", \"question_3\": \"\\u201cIn most score-based algorithms, the anomaly score is computed without assuming any prior knowledge about the contamination proportion. However, in the case of OP-DMA, the contamination parameter is used to train the auto-encoder that scores the data.\\u201d\\n\\nOUR RESPONSE. No, this observation is incorrect.\\nThe autoencoder of OP-DMA does ***NOT*** use the contamination parameter. Only the EllipticEnvelope method requires the contamination parameter, in order to fit a robust covariance estimate to the encoded data. This means that one could switch out another algorithm for the anomaly detection step, such as OC-SVM, that does not require the contamination parameter and nothing about the OP-DMA network training procedure would change. \\n\\nStill, to address your concern, we have now added an additional sensitivity analysis of OP-DMA on the Satellite dataset to the Appendix of our paper. 
This analysis shows that the performance of EllipticEnvelope on the encoded data is robust as long as the contamination percentage is not grossly underestimated. \\n\\nQUESTION 4\\u201cAdditional comparison with other non-distribution mapping state-of-the-art models such as LOF, oc-SVM, KNN would give a clearer idea of the performance.\\u201d\\n\\nOUR RESPONSE. As you have requested, we have now added additional experiments with the state-of-the-art models including the ones you have suggested, namely, OC-SVM and LOF, into our paper. The results are shown in Table 2. It is apparent from the results that indeed our OP-DMA solution consistently outperforms these state-of-the-art methods in all but 2 of the datasets.\", \"question_5\": \"\\u201cThe set H in theorem 3 has not been defined\\u201d\", \"our_response\": \"Thank you for alerting us to this. We have added an arrow from the divergence to the loss function in Figure 3.\\n\\n[1] Pu, Yunchen, et al. \\\"Variational autoencoder for deep learning of images, labels and captions.\\\" Advances in neural information processing systems. 2016.\\n\\n[2] Yang, Zichao, et al. \\\"Improved variational autoencoders for text modeling using dilated convolutions.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\", \"question_4\": \"\\u201cIn figure 3, in the training process, the authors have describe to add the divergence between the latent and prior distribution to the loss function, however, nothing like this is clearly shown in the figure.\\u201d\"}",
"{\"title\": \"Thank you\", \"comment\": \"OUR RESPONSE. Thank you for your time spent reviewing our paper. We have fixed the typos and incorrect figure/table references that have have pointed out in your review.\", \"question_1\": \"\\u201cFig. 2 is also a bit confusing, since it seems like the order in which the diagrams appear should be swapped. Indeed, the text also refers to fig. 2 (b) before (a). The text just below fig. 2 also refers to Figure 1, but I think it should be 2? \\u201c\\n\\nOUR RESPONSE. Thank you for alerting us to this mistake. We have corrected the references to the figure in the paper, so that the text now refers to the correct subfigure.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes an improved extension of the Wasserstein auto-encoder for anomaly detection. The novelty is in proposing a weighted reconstruction error that penalizes the mapping of data with high reconstruction errors (mostly anomalies) into high probability regions. The idea being that an outlier would have a higher reconstruction error, and hence should be mapped to low-probability region of the latent distribution.\", \"experimental_results\": \"As a distribution mapping auto-encoder model, OP-DMA outperforms the deep learning based state-of-the-art models in the same domain.\", \"overall_assessment\": [\"The authors have a nice idea of forcing the latent mappings of inputs to correlate with their reconstruction error. Overall, the method is promising, but I have the following concerns:\", \"Using the reconstruction error as an anomaly score has been explored many years ago (check replicator neural networks), the novelty here is to enforce that on the latent space in the context of a variational auto-encoder. I am not sure if, from anomaly detection perspective, this is any better than simply using the reconstruction score. Why go the VAE route at all?\", \"Is there a possibility that assuming a single multi-variate Gaussian, as a prior, too restrictive? Could it result in a high false alarm rate as well? I guess this could be answered by more experimental results on richer data sets (even synthetic is fine).\", \"In most score based algorithms, the anomaly score is computed without assuming any prior knowledge about the contamination proportion. However, in the case of OP-DMA, the contamination parameter is used to train the auto-encoder that scores the data. This might result in an optimization that is very specific to the parameter setting. I strongly recommend a sensitivity analysis to study the robustness of the model against different values of contamination parameter.\", \"Performance on synthetic data-set has not been presented. The set H in theorem 3 has not been defined.\", \"Additional comparison with other non-distribution mapping state-of-the-art models such as LOF, oc-SVM, KNN would give a clearer idea of the performance. This is important, because in my past experience, non-deep learning methods give much better results on the benchmark data sets that the authors have evaluated their method on. In fact, a comparative analysis (See - https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0152173) gives a very nice comparison. However, since the authors provide results using Avg F1-score, instead of AUC curve, it was not possible to compare them myself.\", \"In figure 3, in the training process, the authors have describe to add the divergence between the latent and prior distribution to the loss function, however, nothing like this is clearly shown in the figure.The references of figures in the text are either out of place or incorrect. Figure 1(a) and (b) in reference to the text are incorrect. Figure 2 is the misleading figure as it doesn't illustrate the anomaly detection process. Figure 3 has not been mentioned anywhere in the text. 
The authors have mentioned the comparison of their method with Wasserstien and variational auto-encoders in the text, while in table 2 and 4, AAE has also been shown as one of the method for comparison, which is never mentioned or described in the text.\", \"Typo in caption of Figure 1 and the first line of section 3.3\", \"Overall, I am hesitant to recommend the paper before cross-checking the issue with contamination proportion and learning more about how a VAE framework is indeed important for anomaly detection.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"This work proposes an outlier detection method based on WAE framework. WAE is trained to ensure that 1) latent distribution follows a prior distribution 2) weighted reconstruction error is low where prior PDF is used to weight the reconstruction error.\\n\\nPositives\\n------------\\n1.I liked the intuition behind the proposed method and I felt its worth exploring. Paper points out that in previous works, there is no mechanism to prevent outliers from getting mapped to high probability areas in the model [21]. Authors claim that their method will over-come this issue. I believe this is the core contribution of the paper.\\n\\n2.I agree with authors point that WAE is a better choice than VAE for outlier detection because, former \\\"encourages the latent representations as a whole to match the prior \\\". \\n\\n3.I agree training with a distributional divergence loss along with a weighed reconstruction loss will be helpful for learning a robust representation (as outliers in the training dataset would be assigned a lower weight). \\n\\n4.Authors have compared the performance of their method on OOD dataset where they have compared against three other baseline methods, where they obtain better performance in majority of cases.\\n\\n\\nNegatives\\n--------------\\nA. Outlier detection and anomaly/novelty detection are two very different problems. Outliers are 'bad eggs' coming from the same class as normal data. On the other hand, anomaly/novelty are unexpected data possibly coming from other classes. This is the taxonomy followed by majority of works. In my understanding this work is about 'outlier detection'. I hope authors will use the term 'outlier detection' consistency through the paper.\\n\\nB. Authors have not done a good survey on existing outlier detection methods. Eg:\\n I. Chong You, Rene Vidal, Provable Self-Representation Based Outlier Detection in a Union of Subspaces, CVPR 17\\n II. Yan Xia, Xudong Cao, Fang Wen, Gang Hua, Jian Sun, Learning Discriminative Reconstructions for Unsupervised Outlier Removal, ICCV 15\\n III. Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, Ehsan Adeli, Adversarially Learned One-Class Classifier for Novelty Detection, CVPR 18. (they have experiments on outlier detection)\\n\\nC. Although existing methods have not explicitly stated the problem identified in (1) above, their proposals are indirectly solving this problem. Therefore, authors should have compared with papers listed in (B) to demonstrate the effectiveness of their method for a meaningful comparison.\\n\\nD. This is the first time I'm seeing the OOD dataset used in the paper. Have other works published their results on this dataset? Can they be included in your paper?. If not, consider reporting results on standard datasets used in papers (B). I believe reporting results on at least two datasets is necessary to demonstrate the generalizability of the method.\\n\\nOther comments\\n------------------------\\na. What is the dimensionality used in the latent space? I believe a larger latent space may be required in modeling more complex data such as images. Is the weighting mechanism effective when a very large latent space is used due to the curse of dimensionality. \\n\\nb. 
I don't think the synthetic dataset experiment is giving any interesting insights. This space is better used if an additional dataset is used instead.\\n\\nIn conclusion, I like the idea presented in this paper; however, I believe experimental results needs to be improved significantly to demonstrate the effectiveness of the proposed method. I cannot recommend to accept the paper in its present condition.\", \"post_rebuttal\": \"Authors have partially addressed my concerns. In light of new experiments provided, I'm changing my decision to weak accept.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a novel outlier detection approach, based on Wasserstein auto encoders.\\n\\nUnfortunately, I cannot comment on the overall scientific contribution of the paper, as I simply do not possess the expertise to judge it accurately. I will rely on the judgement of the other reviewers, whom I hope will have more experience and will better know the literature. I will report on a few issues with aspects related to the presentation below.\\n\\nIn fig. 1, the (a) and (b) should probaby appear below each diagram. \\\"on trained\\\" is repeated twice in the caption. \\n\\nIn fig. 2, the WAE acronym is defined only much later in the text. \\n\\nFig. 2 is also a bit confusing, since it seems like the order in which the diagrams appear should be swapped. Indeed, the text also refers to fig. 2 (b) before (a). The text just below fig. 2 also refers to Figure 1, but I think it should be 2? \\n\\nIn sec. 4.2, the text mentions table 2 when it should really be table 1. Also, table 1 should appear before table 2 in the body. \\n\\nIt looks as if the symbols () and [] are inverted? All references are referred to with () and text within parentheses (e.g. references to figures) have [].\"}"
]
} |
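The OP-DMA discussion above hinges on one mechanism: each sample's reconstruction error is weighted by the value of the multivariate Gaussian prior's PDF at its latent code, so a poorly reconstructed point (a likely outlier) is only "cheap" if the encoder places it in a low-probability region. A minimal PyTorch sketch of such a loss follows; the RBF-kernel MMD divergence, the coefficient `lam`, and whether gradients should flow through the weight are illustrative assumptions, not the paper's exact choices.

```python
import torch
from torch.distributions import MultivariateNormal

def rbf_mmd(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Simple biased MMD estimator with a Gaussian kernel (a WAE-style divergence).
    k = lambda x, y: torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2)).mean()
    return k(a, a) + k(b, b) - 2 * k(a, b)

def opdma_style_loss(x, encoder, decoder, lam: float = 10.0) -> torch.Tensor:
    z = encoder(x)                                    # latent codes, shape (B, d)
    x_hat = decoder(z)
    d = z.size(1)
    prior = MultivariateNormal(torch.zeros(d), torch.eye(d))
    w = prior.log_prob(z).exp()                       # N(z; 0, I) pdf value per sample
    recon = ((x - x_hat) ** 2).flatten(1).sum(dim=1)  # per-sample squared error
    z_prior = prior.sample((x.size(0),))              # samples from the prior
    # High-error points placed near the mode incur a large penalty; pushing
    # them to low-pdf regions shrinks it -- the "outlier preserving" incentive.
    return (w * recon).mean() + lam * rbf_mmd(z, z_prior)
```

Note that the contamination percentage plays no role in this training loss, consistent with the rebuttal: it only enters the downstream detector (e.g. EllipticEnvelope) fitted on the encoded data afterwards.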
HJxN0CNFPB | Ladder Polynomial Neural Networks | [
"Li-Ping Liu",
"Ruiyuan Gu",
"Xiaozhe Hu"
] | The underlying functions of polynomial neural networks are polynomial functions. These networks are shown to have nice theoretical properties by previous analysis, but they are actually hard to train when their polynomial orders are high. In this work, we devise a new type of activation and then create the Ladder Polynomial Neural Network (LPNN). This new network can be trained with generic optimization algorithms. With a feedforward structure, it can also be combined with deep learning techniques such as batch normalization and dropout. Furthermore, an LPNN provides good control of its polynomial order because its polynomial order increases by 1 with each of its hidden layers. In our empirical study, deep LPNN models achieve good performance in a series of regression and classification tasks. | [
"polynomial neural networks"
] | Reject | https://openreview.net/pdf?id=HJxN0CNFPB | https://openreview.net/forum?id=HJxN0CNFPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KFaHMBgQjm",
"rygKXhcXjB",
"B1lof9qXsS",
"Hkgi3Mcmsr",
"B1lRcpQ79H",
"HyxHu42TFH",
"Bkl5UHQ3YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798722980,
1573264417236,
1573263891293,
1573262002878,
1572187541628,
1571828845256,
1571726674464
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1426/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1426/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1426/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new type of Polynomial NN called Ladder Polynomial NN (LPNN) which is easy to train with general optimization algorithms and can be combined with techniques like batch normalization and dropout. Experiments show it works better than FMs with simple classification and regression tasks, but no experiments are done in more complex tasks. All reviewers agree the paper addresses an interesting question and makes some progress but the contribution is limited and there are still many ways to improve.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3:\", \"comment\": \"Thank you for your feedback. We address your concerns as follows.\\n\\n1. \\\" LPNNs perform similar to the vanilla FMs and PNNs, as well\\\". We politely disagree with this statement. Comparing to FM, our model has better performances on 4 out of 5 regression tasks (comparing the mean error only) and all 7 classification tasks. Comparing to PNN, our model has better performance on all 5 regression tasks (comparing the mean error only) and 5 out of 7 classification tasks. \\n\\n2. shape of matrices: the \\\"shape\\\" a matrix means the *size* of the matrix. \\n\\n3. the feature vector x: x is always the feature vector, and there is no x^l. Your latter statement is correct: V^l is always applied to the feature vector x. Using x is one key design of the model: we use the input vector x to create non-linearity as the activation (please see Eq.(2)), so the input x is indeed used in every layer. \\n\\n4. vector norm: \\\\|x \\\\| is the norm of x, and \\\\|x \\\\|^l is the l-th exponential of the norm.\\n\\n5. matrix norm: the matrix norm is induced by the vector norm (2-norm in our case). We will make this clear in the submission. For your reference, the matrix W has norm, \\\\|W \\\\| = sup { \\\\|Wx \\\\| / \\\\|x\\\\|, x \\\\neq 0}. \\n\\n6. standard deviation: we have used one test split from every dataset (some datasets provide the test split). We can estimate the standard deviation of an error rate of a trained model by the Central Limit Theorem: sqrt((1 - error_rate) * error_rate / test_size). The sizes of test splits from all datasets are over 10,000 except the size of the letter dataset is 4,500, so the standard deviations are all small and often neglectable. The standard deviations of our model are (0.0013, 0.0032, 0.0002, 0.0025, 0.0039, 0.0008, 0.0008). The standard derivations of other methods should be similar. \\n\\n7. novelty and superiority of the proposed LPNN: compared to factorization models (FM and PK), LPNN has the feedforward structure and can be trained with standard techniques (e.g. batch normalization and dropout). LPNN also includes FM and PK as special cases. Compared with PNN, LPNN has a controllable order and can be better trained. It is also a multilinear function while PNN is not. \\n\\nFinally, we would like to summarize our contribution again. First, we propose the new activation, product activation, which leads to a new polynomial model with the feedforward architecture. Then the new model can be trained with standard training techniques. Second, our method connects the feedforward structure with factorization models. Particularly, our model covers two previous factorization models as special cases. Third, we have shown a few nice properties of the proposed model: it is multilinear, and its smoothness is similar to standard feedforward neural networks.\"}",
"{\"title\": \"Response to Reviewer 2:\", \"comment\": \"Thank you for your insightful comments. We address your concerns as follows.\\n\\n1. Thank you for pointing the tensor train paper to us. We have no intention to omit this citation. We don't view or claim the chain factorization of LPNN as our main contribution. Our first contribution is the new activation, the *product activation*. The factorization of LPNN is a consequence of this activation. Our second contribution is the connection between the feedforward architecture and the factorization. We show the factorization in the paper to provide readers an understanding besides the understanding from the feedforward structure.\\n\\nActually, the tensor train paper provides further support for our work. With the method in our paper, we may have an activation that leads to a model corresponding to the tensor train, then we can learn such a model with standard training techniques (e.g. dropout and batch normalization).\\n\\n2. smoothness proof: the bound is consistent with the bound in [1]. However, we need to do the proof again because the new activation does not meet the assumption in [1]. There are some advanced techniques in [1] to further tight the smoothness bound. We will consider applying these techniques to our model.\\n\\n3. multiconvex vs multilinear: yes, it is better to say the model is multilinear -- we will correct it.\\n\\n4. batch normalization and dropout: we want to make a point that batch normalization and dropout are beneficial for training a polynomial model (equivalently the LPNN factorization). The empirical investigation *does* show the performance improvement from dropout or batch normalization or both. In Table 3 and Table 4, errors in the last column are generally larger than the errors in the first three columns. Without batch normalization, the network sometimes does not converge (the large error rates at col 2 & 4, row 4 of Table 4). \\n\\n5. citation: yes, we will include the citation to the tensor train paper.\\n\\nFinally, we hope our explanations clarify some of your concerns. We would like to politely ask you to reconsider the originality of the paper.\", \"citation\": \"1. Aladin Virmaux and Kevin Scaman. Lipschitz regularity of deep neural networks: analysis and\\nefficient estimation. In Advances in Neural Information Processing Systems, pp. 3835\\u20133844,\\n2018.\"}",
"{\"title\": \"Response to Reviewer 1:\", \"comment\": \"Thank you for your summary of our work! We address your concerns as follows.\\n\\n1. Yes, we will include time complexity in the next version of the submission. The network has a similar structure to the feedforward neural network, so the analysis is similar. We'd like to include a brief analysis here. Suppose d is the largest of the number of hidden units and the number of features (d = max(d_0, ..., d_L)), B is the batch size, and M is the number of training iterations. Then in the forward computation, each layer takes time O(d^2) to do the two matrix-vector multiplications and the element-wise product. In the backward propagation, each layer takes time O(d^2) to compute the derivatives with respect to (W, V) and also propagate the derivative to the previous layer. Overall the training time is O(M*B*L*d^2). The test time for a single instance is O(L*d^2).\\n\\n2. I guess \\\"complex tasks\\\" in the comment means learning tasks on images, audios, etc. (correct me if I am wrong). It is substantially difficult to devise a *polynomial* model and still match the SOTA performance achieved by CNN or other complex models. In this submission, we want to take the first solid step: making a feedforward network also a polynomial function with a controllable order. Our experiments show that the LPNN matches standard feedforward networks in performance (slightly worse in regression tasks). With this purpose, we have used datasets that are commonly used for benchmarking feedforward neural networks. We would also like to politely point out that papers of baseline methods (FM, PK, and PNN) all have evaluated their models on \\\"simple\\\" datasets.\\n\\n3. We do have a plan to extend this polynomial learning model to include convolutional operations. However, most existing models with other structures (e.g. CNNs or RNNs) have non-linear operations. The naive combination will not produce a polynomial model. Note that the polynomial model is one of the main purposes of this submission. We will explore different extensions of the proposed model to the future.\\n\\nFinally, we want to emphasize a little more of the importance of devising polynomial models. As we have discussed in our introduction section, a polynomial model is an important tool for the theorists to understand non-linear models, so the improvement of polynomial models is well justified.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors a new type of polynomial neural networks LPNN that can have an arbitrary polynomial order. The network has a feedforward structure which provides a good control of its polynomial order. Empirical study shows that deep LPNN models achieve good performances in regression and classification tasks. In general, the paper is clearly written by addressing an interesting problem but I still have several concerns.\\n1.\\tThe authors are expected to analyze the time complexity in terms of both theoretical and experimental analysis since the time cost the one of the major limitation of PNN. \\n2.\\tThe experimental section is rather weak since the authors only report results in some simple data. The authors are expected to report more complex tasks to show its effectivenss. \\n3.\\tIt would be interesting if the authors could show whether the algorithm can integrate with other structure such as convolutional operator to cope with the image classification, and other relevant tasks.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes Ladder Polynomial Neural Networks (LPNNs) that use a new type of activation primitive -- a product activation -- in a feed-forward architecture. Unlike other polynomial architectures that grow in the order exponentially with network depth, the proposed approach gives explicit control over the order and smoothness of the network output and enables training with standard techniques.\\n\\t\\t\\t\\t\\nThe proposed architecture is closely related to a decomposition of a k\\u2019th order multivariate polynomial function\\n[T, x^{\\\\otimes k}] = \\\\lambda^\\\\top (A x \\\\odot A x \\\\odot \\u2026 \\\\odot A x)\\t= \\\\lambda^\\\\top (A x)^{\\\\odot k}\\nwhere T is a symmetric tensor of polynomial coefficients and [\\\\cdot,\\\\cdot] denotes contraction. This is a shallow (one layer architecture) and sometimes referred as a Waring decomposition.\\n\\nIn this paper, the authors propose a specific chain factorization of the polynomial (Eq 5 in the paper), where they write the factors recursively, that they name as a ladder polynomial neural network. \\n\\nh^\\\\ell = (W_\\\\ell h^{\\\\ell-1} \\\\odot V^{\\\\ell} x)\\n\\nThe ladder architecture is very closely related to tensor trains (https://epubs.siam.org/doi/10.1137/090752286). I found it surprising and somewhat alarming that this literature is not being cited as these methods are also quite well known in deep learning.\\n\\nI like the smoothness analysis of section 3.1 -- the proof is quite easy to follow and direct. I would be quite surprised if this result would not be known in the literature in some other form but I don\\u2019t recall seeing it. On the other hand it seems to be inevitably very loose for a deep ladder network unless the network models the zero function. It would have been a valuable addition to the experimental section, if this bound would have been illustrated numerically on synthetic examples.\\n\\nIn 3.2, The authors say that the objective is multiconvex -- I would argue that it is multilinear (apart from the regularization term, that is later introduces). The observation in 3.3, that batch-normalization or dropout can be used for this model is perhaps tangential to the main argument. These is investigated in the experimental section but I don\\u2019t see a clear conclusion. The section in 3.4 must include links to tensor decompositions beyond factorization machines.\\n\\t\\t\\nOverall, I think the paper has some merit and could be interesting for some readers, despite the fact that the contribution is not very original and the treatment could be improved in many ways.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This work introduces a new polynomial feed-forward neural network called Ladder Polynomial Neural Network (LPNN). Theoretical results show that LPNNs generalize vanilla PNNs and FMs. In the experimental analyses, LPNNs perform similar to the vanilla FMs and PNNs, as well.\", \"In the statement \\u201cV has a shape of (d, d0) if u has d entries\\u201d, what do you mean by shape, more precisely?\", \"x is a feature vector fed to a neural network as an input. I assume that it is given in the input layer l=0, according to the notation. Then, should x^l be used instead of x in equations (4), (5), (6), (9), and the other corresponding statements? Otherwise, is V^l applied to the input vector x at each layer l?\", \"What do \\\\| \\\\|^2, \\\\| \\\\|^l, \\\\| \\\\|^{l+1} denote?\", \"How do you define the norm \\\\| \\\\| for matrices, more precisely?\", \"Please provide standard deviation/variance of classification error of different models in Table 2.\", \"Please clarify novelty and superiority of the proposed LPNNs compared to the vanilla and state-of-the-art methods in theory and practice. For this purpose, I suggest to further analyze and compare convergence and generalization properties of LPNNs with the sota in theory and practice.\"], \"after_the_rebuttal\": \"Authors responded some of my questions. However, I still consider that the contribution of the paper is limited, and should be improved for a clear acceptance. Therefore, I keep my rating.\"}"
]
} |
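Review #2 above writes the ladder layer explicitly as h^\ell = (W_\ell h^{\ell-1} \odot V^\ell x), a product activation that raises the polynomial degree by exactly one per layer. A minimal PyTorch sketch of that recursion is below; the bias-free parameterization, layer sizes, and class name are illustrative assumptions and may differ from the paper's exact Eq. (2)/(5).

```python
import torch
import torch.nn as nn

class LadderPolyNet(nn.Module):
    """Feedforward network whose hidden update is the product activation
    h^l = (W_l h^{l-1}) * (V_l x), so the output is a polynomial of
    degree n_hidden + 1 in the input x."""
    def __init__(self, d_in: int, d_hidden: int, d_out: int, n_hidden: int):
        super().__init__()
        self.first = nn.Linear(d_in, d_hidden, bias=False)
        self.W = nn.ModuleList([nn.Linear(d_hidden, d_hidden, bias=False)
                                for _ in range(n_hidden)])
        self.V = nn.ModuleList([nn.Linear(d_in, d_hidden, bias=False)
                                for _ in range(n_hidden)])
        self.out = nn.Linear(d_hidden, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.first(x)                    # degree-1 polynomial in x
        for W_l, V_l in zip(self.W, self.V):
            h = W_l(h) * V_l(x)              # product activation: degree + 1
        return self.out(h)
```

Each layer costs two matrix-vector products plus an elementwise product, i.e. O(d^2), consistent with the O(M*B*L*d^2) training-time estimate in the authors' reply; being a plain feedforward module, it also composes directly with batch normalization and dropout, as the rebuttals emphasize.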
SJgmR0NKPr | Training Recurrent Neural Networks Online by Learning Explicit State Variables | [
"Somjit Nath",
"Vincent Liu",
"Alan Chan",
"Xin Li",
"Adam White",
"Martha White"
] | Recurrent neural networks (RNNs) allow an agent to construct a state-representation from a stream of experience, which is essential in partially observable problems. However, there are two primary issues one must overcome when training an RNN: the sensitivity of the learning algorithm's performance to truncation length and long training times. There are a variety of strategies to improve training in RNNs, most notably Backprop Through Time (BPTT) and Real-Time Recurrent Learning. These strategies, however, are typically computationally expensive and focus computation on computing gradients back in time. In this work, we reformulate the RNN training objective to explicitly learn state vectors; this breaks the dependence across time and so avoids the need to estimate gradients far back in time. We show that for a fixed buffer of data, our algorithm---called Fixed Point Propagation (FPP)---is sound: it converges to a stationary point of the new objective. We investigate the empirical performance of our online FPP algorithm, particularly in terms of computation compared to truncated BPTT with varying truncation levels. | [
"Recurrent Neural Network",
"Partial Observability",
"Online Prediction",
"Incremental Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=SJgmR0NKPr | https://openreview.net/forum?id=SJgmR0NKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ahHW2xn_S",
"HyxtIoqnor",
"BJx0at9nor",
"Hyg5UOInjH",
"SyxwiITsoS",
"Bkl8xW2_oB",
"H1l-QdmPjr",
"BJxcb6KLsS",
"S1xrhnFIjS",
"HkeOFnFIoS",
"SkeYFd2EqS",
"SyeisMCatH",
"H1lOzsLcFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798722952,
1573854033376,
1573853638322,
1573836881775,
1573799583030,
1573597421547,
1573496856814,
1573457153861,
1573457068704,
1573457024347,
1572288641385,
1571836578849,
1571609360487
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1425/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes an alternative to BPTT for training recurrent neural networks based on an explicit state variable, which is trained to improve both the prediction accuracy and the prediction of the next state. One of the benefits of the methods is that it can be used for online training, where BPTT cannot be used in its exact form. Theoretical analysis is developed to show that the algorithm converges to a fixed point. Overall, the reviewers appreciate the clarity of the paper, and find the theory and the experimental evaluation to be reasonably well balanced. After a round of discussion, the authors improved the paper according to the reviews. The final assessments are overall positive, and I\\u2019m therefore recommending accepting this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Sequetial MNIST and PTB\", \"comment\": \"I'm a bit skeptical of the results mentioned about PTB and Sequential MNIST.\\n\\nFor PTB as well as sequential MNIST, can authors report their baselines results (i.e when you do full back propagation ?). I just want to make sure, baselines are not \\\"faulty\\\". \\n\\nFor reference, authors can see result in Zoneout (https://arxiv.org/abs/1606.01305) or Recurrent Batch Normalization paper (https://arxiv.org/abs/1603.09025). Thanks.\"}",
"{\"title\": \"Further comments\", \"comment\": \"I would like to thank authors for their informative response, as well as for improving presentation in the paper. Some of my concerns regarding the baselines, however, remain.\\n\\nQuoting the authors' response: \\n\\n> In fact, FPP w/o state updating is like using T-BPTT: T steps of back-prop-through time are computed, without the state-loss and by starting from a given state (in this case, whatever the state was at that time).\\n\\nIn this is the case, I don't understand by FPP w/o state update is not considered as the main baseline, especially given that, quoting the paper, it is similar to a published \\\"Stored State T-BPTT\\\" method. The comparison between FPP and FPP w/o state update take place in separate figure and only using toy task. It is therefore very hard to understand which percentage of the improvement that FPP brings upon T-BPTT comes just from using the buffer, and not from updating the states in it. Besides, in Figure 6 cross-entropies for StochasticWorld are much higher than those in Figure 3(b). What's different between these experiments?\"}",
"{\"title\": \"Thanks.\", \"comment\": \"Thanks for running those experiments. This definitely helps to improve the paper.\"}",
"{\"title\": \"Preliminary results of UORO\", \"comment\": \"We would like to provide some preliminary results of UORO (memory-1 rank-1 UORO). We use the same experimental setting as Figure 3 in our paper. First, we found that UORO is less sample efficient than FPP. The first column of the table below shows the average performance over 5k steps for CycleWorld and 10k steps for StochasticWorld. The second column shows the average performance over 50k steps in both tasks. In CycleWorld, UORO does not learn well during the first 5k steps, while FPP can achieve reasonable accuracy (see Figure 3). We also found that UORO does not perform well in stochastic tasks (e.g. StochasticWorld).\\n\\n default steps 50k steps\\nCycleWorld 9.17% 3.70%\\nStochasticWorld 0.663 0.662\\n\\nWe will update the results (including Sequential MNIST and PTB) into the final version of our paper.\"}",
"{\"title\": \"Updated pdf\", \"comment\": \"The pdf is updated now with the changes we mentioned in the comment!\"}",
"{\"title\": \"Quick reply.\", \"comment\": \"\\\" [1] provides an interesting approach to obtain an unbiased approximation of the RTRL update, and will be added to the list of other such related approaches. [2] introduces a local, perhaps more biologically plausible variant of RTRL. However, [2] introduces some strong assumptions (like disregarding non-leading order terms, linearity of the RNN) in the analysis of their learning rule, and it is rather unclear if the update rule will lead to a stationary point of any objective. \\\"\\n\\nI agree with this.\\n\\n\\\"We also chose two real datasets with reasonably long dependencies back in time: Sequential MNIST and character-level prediction of the Penn Treebank. \\\"\\n\\nIts hard to actually argue that character level prediction task of PTB actually requires \\\"long-dependencies\\\". Probably a better task would be doing character level prediction for Text8 dataset.\\n\\n\\\"we thought of it as an apples-to-oranges comparison, UORO would nonetheless provide a baseline comparison to this other class of methods which would absolutely strengthen the results.\\\"\\n\\nI appreciate. :)\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank Reviewer 3 for pointing out the errata and clarity issues; we will make the necessary corrections. We will update the PDF by November 12. We will add the pseudocode into section 3 (with mini-batch processing), we will make the legend in figure 4 consistent, we will improve the explanation of non-overlapping T-BPTT in section 5, and we will explain the CycleWorld environment in section 5.\\n\\nThe main concern of Reviewer 3 was about baselines. We would like to clarify that our problem setting is online prediction, where data constantly arrives in a stream, rather than offline training from a fixed batch of data. The T-BPTT and no-overlap T-BPTT algorithms are adapted to our online problem setting: where only the loss from the last time step are back-propagated, which is standard for training RNNs online. Even though language modeling can often be done offline, it nonetheless serves as a useful realistic task to evaluate FPP and other algorithms under online training. \\n\\n> \\u201dA very interesting and absolutely necessary baseline is FPP without state updates, but for such a baseline the loss comparing s_t and s_{i-T} should be disabled. Was this done?\\u201d\\nYes, this was done. \\u201cFPP without State Updates\\u201d does not include the quadratic penalty term that compares \\\\hat{s}_i and s_{i - T}. \\n\\n>\\u201dthe paper must clear show that updating the states in the buffer allows to get same performance with smaller T, compared to the best possible baseline that also uses the buffer but does not update states in it.\\u201d\\nWe agree. We do believe the comparison to FPP w/o state updating provides this role. Potentially the baseline Reviewer 3 feels is missing is running T-BPTT on the buffer, as if it was a fixed dataset (i.e., offline). In fact, FPP w/o state updating is like using T-BPTT: T steps of back-prop-through time are computed, without the state-loss and by starting from a given state (in this case, whatever the state was at that time). We in fact tried a few other strategies of starting from random states rather than stored states, or periodically updating states in the buffer so that start states were less arbitrary; these choices did not improve performance. It is of course possible that another approach using a buffer, that does not update (or even store) state variables, could be developed that outperforms FPP. However, such an approach is not obvious and would be a novel algorithm. We cannot and do not claim to have definitely demonstrated that one must maintain and update state variables. We do nevertheless claim that the baseline of FPP w/o state updating provides evidence of the importance of state updating. \\n\\nThere are a couple of other comments about future directions, which we appreciate! One of our goals is to take advantage of parallelization with FPP, and so Reviewer 3\\u2019s point about increasing B is well-taken. Reviewer 3 also raises an interesting point about updating intermediate states. This is equivalent to setting T=1 and updating consecutive transitions in one mini-batch. There are multiple ways to sample transitions (e.g. sampling consecutive transitions, prioritized sampling or uniform sampling) and perform updates (e.g. choice of T, B and M), given the sound approach for training an RNN with a buffer. These are promising strategies to try, but are additional after first understanding the basic idea; we therefore picked the simplest strategies to show in this paper. 
A next step is to further investigate how much we can improve performance by more effective sampling approaches, and by further investigating the effects of T, B, and M.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"We would like to thank Reviewer 1 for their valuable comments and literature suggestions. We will add discussion of the two referenced papers into our work. [1] provides an interesting approach to obtain an unbiased approximation of the RTRL update, and will be added to the list of other such related approaches. [2] introduces a local, perhaps more biologically plausible variant of RTRL. However, [2] introduces some strong assumptions (like disregarding non-leading order terms, linearity of the RNN) in the analysis of their learning rule, and it is rather unclear if the update rule will lead to a stationary point of any objective.\", \"simulation_problems\": \"There are potentially two concerns here. The first is the reasons for selecting the tasks, and whether they really test if FPP can perform well. The second is understanding the performance, relative to baselines other than T-BPTT. We address both below.\\n\\nOur primary goal in selecting tasks was to allow a controlled experiment where we changed the length of dependencies, to investigate the ability of FPP to capture long-term dependencies in comparison to T-BPTT, on both simpler domains with relatively fewer confounding factors and on realistic datasets. We chose two synthetic problems and two real datasets towards this goal. In CycleWorld, the ability to predict the observation bit is directly linked to how far back the agent is able to remember. StochasticWorld is one level more sophisticated: the agent is required to remember two independent observations from the past which probabilistically influence the present prediction target.\", \"we_also_chose_two_real_datasets_with_reasonably_long_dependencies_back_in_time\": \"Sequential MNIST and character-level prediction of the Penn Treebank. Our character-level prediction task is similar to the character-level prediction task in the UORO paper, but is performed on a real instead of synthetic dataset. We also chose these two because they are common datasets for testing RNNs (see the citations of the Real Datasets subsection in section 5).\\n\\nFor the second potential concern, we agree that UORO would be useful as a baseline for a completely different approach, based on RTRL. We will first justify why we did not include initially, and then detail our current efforts in implementing the UORO baseline. We had focused our empirical investigation on T-BPTT for two reasons. First, the computation can be made comparable between T-BPTT and FPP, and the methods are similar in their simplicity. One of our primary goals is to develop simple approaches to train RNNs. Second, T-BPTT remains the standard algorithm for training RNNs, and with sufficiently large T can perform very well. For this reason, we could simply increase T to ensure we reached good performance and then compare for that T and smaller. UORO, and other algorithms that approximate RTRL, are typically more complicated and expensive;they are also relatively new and so not yet as standard. \\n\\nHowever, though we thought of it as an apples-to-oranges comparison, UORO would nonetheless provide a baseline comparison to this other class of methods which would absolutely strengthen the results. We are currently implementing the UORO baseline; we will attempt to include it in the revision for the author rebuttal period, and if we cannot finish it in time will include it in a final paper. 
\\n\\n[1]: Mujika et al., \\u201cApproximating Real-Time Recurrent Learning with Random Kronecker Factors\\u201d\\n[2]: Murray, \\u201cLocal online learning in recurrent networks with random feedback.\\u201d\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"We would like to thank Reviewer 2 for the comments and questions. The biggest concerns seem to be about (1) the strength of the theoretical results, (2) the choice of the key hyperparameter of FPP, and (3) comparing with the strongest baselines in the field.\\n\\n(1): It is in general difficult to make claims about the quality of an arbitrary stationary point; this difficulty also applies to the vanilla RNN objective. We do partially characterize the stationary points of the FPP objective in terms of recovering stationary points of the RNN objective, in Theorem 2. There are also some theoretical and empirical results on the ability of SGD to converge to local minima (for example, [1, 2]), and avoid getting stuck in saddle points. Because we use SGD, we can rely on such results to suggest we may similarly converge to local minima. The current theory, though, focuses on the first key claims: that the algorithm converges to stationary points (which is non-trivial and is not obviously true of the T-BPTT algorithm) and that there is a connection between the stationary points of FPP and the original RNN objective. \\n\\n(2): In the Appendix, we plot the sensitivity curve for lambda (Fig. 6). This figure shows that FPP is not particularly sensitive to lambda, though it does also indicate that we could have tuned lambda and gotten even better performance. We opted to show performance in the main body for this default value of lambda = 1 across all experiments, which we actually chose before even seeing these sensitivity plots. In general, the performance of FPP was insensitive to lambda across all our experiments. This is one of the benefits of FPP: we do not require extensive hyperparameter tuning is and yet we achieve consistent stability benefits compared with T-BPTT. We will refer more explicitly to these sensitivity curves in the main body to clarify this point. \\n\\n(3): We do not claim to introduce a model that outperforms the state-of-the-art on any particular dataset (e.g. on GLUE). SOTA performance requires SOTA algorithms, SOTA meta-parameter tuning, SOTA implementations, and at times specialized hardware. This paper is about a new algorithm for training RNNs. Based on our results we expect many high-performance systems could be built on top of FPP (perhaps even SOTA on some of these data-sets). But this is well beyond the scope of this paper. Think of this paper as introducing a new algorithm, neither a complete learning system nor a SOTA claim. Certainly there is room for all three types of papers in ICLR as their contributions are very different. Our main contribution was to introduce a novel approach for RNN training and show that it is more robust than T-BPTT. Future work in adapting FPP to more modern RNN architectures would be interesting, but is not in the scope of this paper. \\n\\n[1] Choromanska, et al., \\u201cThe Loss Surfaces of Multilayer Networks\\u201d\\n[2] Lee et al., \\u201cGradient Descent Only Converges to Minimizers\\u201d\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors reformulate the RNN training objective to explicitly learn the state vectors, and propose an algorithm called Fixed Point Propagation (FPP Algorithm 1). The authors motivate the derivation of FPP in Section 3, provide some theoretical convergence results in Section 4, and demonstrate experiment results in Section 5.\\n\\nIn general, this paper is interesting and well written. The experiment results in Section 5 seem to be very strong. However, I am not familiar with the relevant literature, thus, I am not sure if the authors have compared with the strongest baselines in this field.\", \"i_think_the_paper_suffers_from_the_following_limitations\": \"1) Theorem 1 shows that the FPP algorithm converges to a stationary point. However, this result seems to be too weak. Can we say something about the stationary point? Is it a local minimum under some conditions?\\n\\n2) In the experiments, the authors choose \\\\lambda=1. My understanding is that \\\\lambda is a key meta-parameter of FPP. Please do a better job in justifying this choice.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes an alternative to the truncated back-propagation through time (BPTT) algorithm for training RNNs. An online setting is assumed, which I understand as training RNN as data arrives and not storing too much of the arriving data (although notably the proposed method uses a buffer). The proposed Fixed-Point Propagation algorithm works as follows. It maintains a buffer of the last N RNN states that can be updated. From this buffer at every time step it samples two states that are T steps apart from each other (s_i and s_{i - T}). The RNN is run for T steps starting from s_{i - T}. A loss function is constructed that takes into account the output loss at time i as well as the mismatch between s_i and the new state constructed by running the RNN. The states s_i and s_{i-T}, as well as the RNN parameters are updated based on this loss function.\\n\\nThe novel idea of the paper is therefore a modifiable state buffer for the RNN states. The goal is better computational efficiency than that of T-BPTT. \\n\\nThe paper is mostly clearly written, but I think it is absolutely necessary to move Algorithm 1 to the main text, as well as to add the mini-batch processing (B) and multiple updates (M) to it. This pseudocode was very instrumental for me to understand the algorithm. I confess that I did not read the theory; I don\\u2019t think it\\u2019s super relevant because in practice convergence to fixed-point will require too many updates. \\n\\nThe empirical comparison with T-BPTT is substantial, but the waters are muddied a bit by imprecise presentation of baselines. For example, when T-BPTT is used for e.g. language modelling, it doesn\\u2019t make sense for back-propagate the loss from only the last time step, losses from all time-steps can be back-propagated together. Was this done in T-BPTT and/or FPP? Does T-BPTT use the 100 step buffer somehow? NoOverlap T-BPTT is not explained very well. A very interesting and absolutely necessary baseline is FPP without state updates, but for such a baseline the loss comparing s_t and s_{i-T} should be disabled. Was this done?\\n\\nIn short, the paper must clear show that updating the states in the buffer allows to get same performance with smaller T, compared to the best possible baseline that also uses the buffer but does not update states in it. I am not sure this case is clearly made at the moment.\\nOne further direction authors could explore is that using a very small T but larger B could be more computationally inefficient because parallel computations would be used instead of sequential ones. Besides, from a practical viewpoint and I think it could make sense to also update intermediate states, and not just s_i and s_{i-T}.\", \"other_remarks\": [\"the legend in Figure 4 is a dashed line, but the curves in the plots are dotted\", \"the max. cycle length in CycleWorld is not clearly explained in the text, the name CycleWorld is not properly introduced\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"\", \"background\": \"The authors consider the problem of training RNNs in an online fashion. The authors note that RNNs are trained using BPTT, which prevents them from being trained in an online fashion.\\u00a0 There have been various approximations which has been proposed which are based on RTRL or approximations to RTRL, as current approximations based on RTRL has high computational complexity.\", \"proposed_method\": \"The authors propose to learn the state of the RNNs explicitly by improving the prediction accuracy at each time step as well as predicting the \\\"next\\\" state of the RNN.\\u00a0The authos note that the constraint of predicting the next state\\u00a0 is a fixed-point formula for the states underthe given RNN dynamics.\", \"clarity_of_the_paper\": \"The paper is clearly written.\", \"related_work\": \"Most of the relevant related work has been covered in the paper and discussed. I like it. These two related work could also be cited. Here, authors approximate the RTRL with random\\u00a0kronecker factors.\", \"https\": \"//www.biorxiv.org/content/10.1101/458570v1\", \"experiment_section\": \"The authors evaluate the proposed method on both synthetic as well as real experiments.\", \"simulation_problems\": \"The authors use simulation problems to note the robustness of the proposed method to increasing termporal delay in online learning. These tasks show the soundness of the proposed method. Its actually difficult to tell how the proposed method is performing because of the selection of tasks. It might be more interesting to choose same tasks as in UORO paper (https://arxiv.org/abs/1702.05043) and it could be another \\\"baseline\\\" for the proposed method.\", \"ablations\": \"I liked the fact that the authors consider conducting experiments without state updating, as it could also be due to using a large buffer rather than explicitly optimizing for the prediction objective.\", \"positive\": \"The proposed method could be interesting for learning the state representation for policy gradient RL methods (specifically POMDPs) as the proposed method can leverage use of mini-batches, as well as multiple-updates which is a crucial ingredient to make best use of data collected by the agent interacting with the environment.\"}"
]
} |
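The FPP record above describes the algorithm precisely enough to sketch one update: sample two buffered states T steps apart, unroll the RNN from the older one, and penalize both the prediction error and the fixed-point mismatch. The sketch below is an illustrative reconstruction from the review text, not the authors' code; `rnn_cell`, `readout`, and the plain-SGD step are assumptions, and mini-batching (B) and multiple updates (M) are omitted.

```python
import torch

def fpp_update(rnn_cell, readout, states, inputs, targets, i, T,
               lam=1.0, lr=1e-2):
    """One FPP step. states: list of buffered state tensors, each created
    with requires_grad=True so they can be updated like parameters.
    inputs/targets: observation stream aligned with the buffer indices."""
    s = states[i - T]
    for t in range(i - T, i):            # unroll T steps from the stored state
        s = rnn_cell(inputs[t], s)
    output_loss = torch.nn.functional.mse_loss(readout(s), targets[i])
    state_loss = torch.sum((s - states[i]) ** 2)   # fixed-point mismatch
    loss = output_loss + lam * state_loss          # lam plays the role of lambda

    params = list(rnn_cell.parameters()) + list(readout.parameters())
    variables = params + [states[i - T], states[i]]
    grads = torch.autograd.grad(loss, variables)
    with torch.no_grad():                # plain SGD on weights and both states
        for v, g in zip(variables, grads):
            v -= lr * g
    return loss.item()
```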
rygfC0VKPS | Improved Modeling of Complex Systems Using Hybrid Physics/Machine Learning/Stochastic Models | [
"Anand Ramakrishnan",
"Warren B. Jackson",
"Kent Evans"
] | Combining domain knowledge models with neural models has been challenging. End-to-end trained neural models often perform better (lower Mean Square Error) than domain knowledge models or domain/neural combinations, and the combination is inefficient to train. In this paper, we demonstrate that by composing domain models with machine learning models, by using extrapolative testing sets, and by invoking decorrelation objective functions, we create models which can predict more complex systems. The models are interpretable, extrapolative, data-efficient, and capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and an actual physical system in the context of system identification. Several ways of composing domain models with neural models are examined for time series, boosting, bagging, and auto-encoding on various systems of varying complexity and non-linearity. Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling. | [
"Composition",
"extrapolation",
"boosting",
"autocorrelation",
"systematic errors"
] | Reject | https://openreview.net/pdf?id=rygfC0VKPS | https://openreview.net/forum?id=rygfC0VKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"uzhmydBe0i",
"rkequUYRFB",
"rye7ynORtB",
"B1gffMK5YS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798722921,
1571882610152,
1571879898641,
1571619338228
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1423/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1423/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper1423/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All reviewers agree that the paper is to be rejected, provided strong claims that were not answered. In this form (especially with such a title) it could not be published (it is more of a technical/engineering interest).\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present a new hybrid models that incorporates traditional models with neural networks by boosting and reducing the residuals in stages. They show 3 different boosting schemes: sequential boosting, parallel boosting and cyclical boosting (inspired by variational autoencoders). Furthermore the authors present the new Ljung-Box (LJB) loss function which reduces autocorrelation. They test on 2 simulated systems and on a physical DC Motor and put an emphasis on testing their result on extrapolated (unseen) data.\", \"strengths\": [\"The authors combine traditional models with deep learning approaches.\", \"The models can extrapolate to unseen data better than conventional deep learning approaches.\", \"The results show that the authors found a loss function that can reduce autocorrelation.\"], \"weaknesses\": [\"The paper lacks an exact explanation for every model that was used. This is made worse by a lack of consistency, e.g. the model Physics-Boost-Dense doesn't explain which of the 3 boosting schemes is used.\", \"The authors propose a new Ljung-Box (LJB) loss function. It contains hyper parameter L \\\"which should be larger than possible correlations\\\". It is not mentioned how to find such a value or what the value for the experiments for. They show that this loss function can reduce autocorrelation, but it comes at the expense of higher RMSE. It is not clear in what scenario this is useful.\", \"The methodology is difficult to follow. E.g. there is a mathematical explanation in 3.1, but not for 3.2 or 3.3. In Fig. 1(1) and Fig. 1(2), they incorporate a traditional model, but not in Fig. 1(3). Later they show results for a cyclic method that incorporates a physical model, but it's not clear how that was done.\", \"The authors claim that they are using \\\"a new method for creating testing data\\\". However, testing on extrapolating data is not a new method.\"], \"additional_comments\": [\"Equation (2): I assume the hat should only be above f, not the entire f(x). If not, I don't see this symbol explained or used anywhere else in the paper.\", \"h_t in equation (3) and (4) are not explained in the text\", \"Fig. 1(1) and Fig. 1(3) are not mentioned in the text\", \"Figure 3 and 4 are hard to understand. Instead of State 0/1/2, it should say angle / angular velocity / acceleration. There should be a table to accompany this figure, so that it's easier to compare the models. As is, it's very hard to compare e.g. Physics-Ensemble-Dense with Dense-Ensemble-RNN.\", \"There are plenty of typos and a general lack of clarity. E.g. this sentence: \\\"The alternative of performing end-to-end of all the models at once is an significant alternative.\\\"\", \"Add reference for original Ljung-Box test\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents approaches for combining neural network (NN) with non-NN models to predict behavior of complex physical systems.\\nI found this paper very hard to read, with a lot of details and key pieces of information missing or vaguely stated. Following are some specific instances in no particular order:\\na) From the results it seems that often approaches using \\u2018cyclic-boost\\u2019 perform the best. However the description of cyclic boost in Section 3.3 lacks any precise description of what the cyclic approach does. For instance, the NN that predicts the inputs from model outputs, how is that used and under what objective is that learnt?\\nb) The main claim of the paper is that combining physics models with NNs is better than NNs alone, esp. when tested on \\u2018extrapolative data\\u2019. However, if cyclic-boost is the best approach, the paper should include results with cyclic-boost-RNN combination, these do not seem to be there to assess value add of the physics models in the best performing system.\\nc) Section 3.2 states a parallel ensemble of NN and non-NN models has the advantage of finding \\u2018global optimum\\u2019 \\u2026 this is a strong statement and needs to be demonstrated.\\nd) The description of new loss function in Section 4 is unclear. For instance, what is \\u2019n\\u2019 in (7)?\\ne) In Section 3.1 I\\u2019m assuming that \\u2018h_t\\u2019 denotes the model being trained in iteration \\u2019t\\u2019, and some of these are NN whereas others are domain models?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper conducts several experiments to compare the extrapolative predictions of various hybrid models (sequential, ensemble, and cyclic), which compose physical models, neural networks and stochastic models.\\n\\nUnmodeled dynamics is a bottleneck for model learning, model-based reinforcement learning and sim-to-real transfer. This paper tries to tackle this important research challenge. However, this research is at its early-stage and the results are preliminary. I would not recommend accepting the paper at this time, and encourage the authors to continue this promising research direction. First, the conclusion of the paper is not clear to me. Which is the best way to compose models? Is it physics+RNN+cyclic LJB loss? It would be helpful to put all the visualizations of the results together and organize it in a way that can easily reveal conclusions across different experiments. If the conclusions are inconsistent across different experiments, detailed analysis and explanations are expected. Second, it is not sufficient to just analyze the regression errors of different hybrid models for an ICLR paper. The paper would be much stronger if the it could demonstrate that with the more accurate hybrid model, model-based learning performs better or transfer learning is more straightforward. For example, it could change Section 6.2 to control a double inverted pendulum using only a rough physics model of a single pendulum. Similarly, it could augment Section 6.3 to control a real inverted pendulum with a DC motor with large backlash.\\n\\nIn addition, I have a few questions/comments about the clarity of the writing:\\n1) Are \\\\theta in eq. (1) the parameters of the physical system, such as mass, length of the rod for the inverted pendulum case?\\n2) I am not familiar with LJB loss function. If it is commonly-used, please add a reference. If not, more explanations would be helpful. Is LJB loss function used alone or combined with the MSE loss?\\n3) Is there any particular reason that cos\\\\theta and sin\\\\theta are chosen as states for Inverted Pendulum instead of \\\\theta? In Double Pendulum, \\\\theta is used.\\n4) In DC Motor, \\\"This input, output pair data was fit to a linear state space model, (i.e. a physics model)...\\\" Is the output linear to the states (\\\\theta, \\\\dot{\\\\theta}) or linear to the physical parameters (inertia, friction, etc.)? In previous examples, such as Figure 4, physics model and linear model are separate. It is confusing to me that in this example, the linear model seems to be the physics model. Is the dynamics equation of the DC motor linear? Am I missing something?\"}"
]
} |
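The reviews above repeatedly reference the paper's Ljung-Box (LJB) loss for reducing residual autocorrelation. As a hedged reconstruction, a differentiable penalty can be built from the classical Ljung-Box statistic; the function below is an illustration of that idea, not the paper's implementation, and `max_lag` (the hyperparameter L, which the paper only says "should be larger than possible correlations") and the weight `alpha` are assumed free choices.

```python
import torch

def ljung_box_penalty(residuals: torch.Tensor, max_lag: int) -> torch.Tensor:
    """Differentiable Ljung-Box-style statistic on a 1-D residual series.

    Q = n (n + 2) * sum_{k=1}^{L} rho_k^2 / (n - k), where rho_k is the
    lag-k sample autocorrelation; minimizing Q pushes residuals toward
    white noise.
    """
    r = residuals - residuals.mean()
    n = r.shape[0]
    denom = (r * r).sum()
    q = residuals.new_zeros(())
    for k in range(1, max_lag + 1):
        rho_k = (r[k:] * r[:-k]).sum() / denom
        q = q + rho_k ** 2 / (n - k)
    return n * (n + 2) * q

def hybrid_loss(pred, target, alpha=0.1, max_lag=20):
    # Illustrative combined objective: MSE plus the autocorrelation penalty.
    res = (target - pred).flatten()
    return torch.mean(res ** 2) + alpha * ljung_box_penalty(res, max_lag)
```

This reading is consistent with the reviewers' observation that the LJB term trades a lower autocorrelation for a higher RMSE, since the two terms compete in the objective.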
S1xGCAVKvr | LEARNING TO LEARN WITH BETTER CONVERGENCE | [
"Patrick H. Chen",
"Sashank Reddi",
"Sanjiv Kumar",
"Cho-Jui Hsieh"
] | We consider the learning to learn problem, where the goal is to leverage deep learning models to automatically learn (iterative) optimization algorithms for training machine learning models. A natural way to tackle this problem is to replace the human-designed optimizer by an LSTM network and train the parameters on some simple optimization problems (Andrychowicz et al., 2016). Despite their success compared to traditional optimizers such as SGD on a short horizon, these learnt (meta-)optimizers suffer from two key deficiencies: they fail to converge (or can even diverge) on a longer horizon (e.g., 10000 steps). They also often fail to generalize to new tasks. To address the convergence problem, we rethink the architecture design of the meta-optimizer and develop an embarrassingly simple, yet powerful form of meta-optimizers: a coordinate-wise RNN model. We provide insights into the problems with the previous designs of each component and re-design our SimpleOptimizer to resolve those issues. Furthermore, we propose a new mechanism to allow information sharing between coordinates which enables the meta-optimizer to exploit second-order information with negligible overhead. With these designs, our proposed SimpleOptimizer outperforms previous meta-optimizers and can successfully converge to optimal solutions in the long run. Furthermore, our empirical results show that these benefits can be obtained with much smaller models compared to the previous ones. | [
"problem",
"simpleoptimizer",
"information",
"better convergence learning",
"better convergence",
"learning",
"goal",
"models",
"iterative",
"optimization algorithms"
] | Reject | https://openreview.net/pdf?id=S1xGCAVKvr | https://openreview.net/forum?id=S1xGCAVKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"mTLUj-1oO3",
"Byg2F33lcH",
"SJefgrxCKr",
"HyesVOJatr"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798722892,
1572027524126,
1571845354036,
1571776562842
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper1422/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper1422/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper1422/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes an improved (over Andrychowicz et al) meta-optimizer that tries to to learn better strategies for training deep machine learning models. The paper was reviewed by three experts, two of whom recommend Weak Reject and one who recommends Reject. The reviewers identify a number of significant concerns, including degree of novelty and contribution, connections to previous work, completeness of experiments, and comparisons to baselines. In light of these reviews and since the authors have unfortunately not provided a response to them, we cannot recommend accepting the paper.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work suggests a host of improvements and simplifications to the meta-learning approach of Andrychowicz et. al. The authors have carefully analyzed weaknesses in previous work and I think their experiments do suggest that they have improved on them. However, I would still recommend rejecting, as imo\", \"1\": \"the absolute state of these approaches seems unpromising, and\", \"2\": \"the authors do not do a good job contextualizing how well the learned optimization performs compared to more standard methods.\\n Each step used to train the meta-optimizer could be used instead to pick hyperparameters (choice of which \\\"classical\\\" optimizer, and its hyper-parameters, like lr etc.). However, there is not really any discussion about the trade-offs between the time it takes to train the meta-optimizer and the results that would be obtained by hyper-parameter search. Indeed, even with the tiny hyper-parameter search spaces the authors use, in most of their figures, one of ADAM or plain SGD does comparably or better than their method. \\n\\nIt is not clear if the figures represent train or test loss; the authors should report both. Furthermore, imo, almost the figures are cut off too soon- the models all still seem to be learning. What are the error rates at the cutoffs on test and train?\\n\\nFinally, I wonder at the choice of test problems. Why not pick something where one naturally will need to run an optimization many times (e.g. style transfer) rather than a toy problem like mnist or cifar? Note that there are already other approaches (not learning based on learning gradient descent) that have been successful there.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors build on the 'learning to learn' work, that aims to leverage deep learning models with optimization algorithms, most commonly with recurrent networks. The goal is to utilize meta-learners that can adapt the optimization update strategy based on data/experience. The authors aim to tackle problems that often arise in such meta-optimization schemes, such as covergence and generalization problems. The paper is overall well-written, and several experiments are presented\\n\\nBuilding on previous work, the authors propose some variations to the meta-learning schemes and architecture. I comment on these below.\\n\\n- The authors remove the bias term based on the intuition that for the meta-learners, removing the bias can lead to better convergence. This is supported by a set of experiments in Fig 3a (but with only one learning rate?). This experiment shows an extraordinary difference in accuracy between including and not including the RNN bias. I think, because this difference is substantial (along with papers that do use the bias in meta-learners showing good results overall), more evidence/ablations are required. \\n\\nCan we can be sure that if the bias is non-zero (as in other reported works), we should expect a worse performance?. Clearly this change seems to speed up the process, and in the example explained in the paper it makes sense, but could there be examples where lack of bias might lead to worse results?\\n\\n- The authors claim that one of the problems of learning-to-learn algorithms is the late-stage optimization strategy. While previous works introduce a weighting for the meta-optimizer's loss function (usually binary), the authors extend this to continuous monotonically increasing functions in order to weight the late stage steps of the algorithm more. The authors propose compensating small loss difference happening during late optimization with a larger weight. This naturally brings the question of which function should be used, as showing that any function will do seems difficult in practice. Have the authors tried any functions that failed or led to overfitting in comparison to other works? Could one argue that if the loss difference is not large then perhaps the network can be more prone to overfitting at least in some cases?\\n\\nFinally, the authors introduce embedding sharing between coordinates (aggregating hidden states), and also propose fine-tuning (sampling 5% of training data 5 times for warm-up). Sharing hidden state information is expected to be useful (and has been often employed in literature), similarly to fine-tuning. Also, 5% of dataset and 5 times seems to be an arbitrary choice - probably more related to the experimental section rather than the method itself.\", \"a_question_on_fine_tuning\": \"if fine-tuning was used for the proposed method, have the compared methods also been pre-trained to provide fair comparisons? This is particularly relevant to fig 4B where only the fine-tuned method slightly overperforms SGD.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents several improvements over the existing learning to learn models including Andrychowicz et al. (2016) and Lv et al. (2017). Specifically, this paper analyzes the issues in the original learning to learn paradigm (L2L), including instability during training and bias term issues in the RNN. It proposes a new loss based on weighted difference for improving the meta-optimizer in the later stage of optimization. It also proposes information sharing between RNNs for each coordinate. Finally it presents how to fine-tune the meta-optimizer on new task.\", \"pros\": \"Reasonable technical improvements to fix some issues of the learning to learn framework\\n(1) reduce the number of parameters in the meta-optimizer significantly\\n(2) improve the stability of the meta-optimizer.\\n(3) improve the generalization to new tasks and datasets.\\n\\n\\nCons\\n1. The novelty is not good enough and the method does not seem to be solid enough. Except for the technique in the *structure of each recurrent unit\\\" section, other techniques are tricks that are hard to tell why they could work. That said, I think the experiments should verify each of the proposed components, and see their roles in the proposed method.\\n\\nAlso, it is claimed in the Abstract and the paper that the proposed method *successfully converge to optimal solutions\\\"... I think this claim is quite unprofessional. How do you know it converges the optima?\\n\\n\\n2. With respect to the experiments:\\n\\nThe experiments are only done on the MNIST and CIFAR10 datasets, which are small-scale datasets. Current meta learning model has achieved advancement on more challenging datasets, e.g., Mini-Imagenet and CIFAR1000.\\n\\nFor the experiment section of comparison to baseline methods. It would be fair to compare all the methods with removing bias or not removing bias setting. As the author mentions that \\\"the maximal bias term of LSTM in Lv et al. (2017) is 0.41, and maximal bias term of LSTM in Andrychowicz et al. (2016) is 0.31; this, consequently, leads to bad performance\\\". It would be interesting to know that if the bias term of the two baseline model are removed, how is the performance difference compared to the method proposed by the authors? \\n\\nHow does the number of parameters of the meta optimizer scales with the problem size? That is, how does the number of parameters in the meta-optimizer grow with increasing the number of image classes in the problem at hand, e.g., 20 classes, 50 classes, etc?\"}"
]
} |
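The abstract in the record above centers on a coordinate-wise RNN meta-optimizer with bias terms removed. A minimal sketch of that design pattern follows, assuming a shared `nn.RNNCell` applied independently to every parameter coordinate; the hidden size and readout are illustrative, not the paper's SimpleOptimizer.

```python
import torch
import torch.nn as nn

class CoordinateWiseOptimizer(nn.Module):
    """One tiny RNN, shared across all coordinates of the optimizee, maps
    each scalar gradient to a scalar update. bias=False reflects the paper's
    observation that bias terms hurt meta-optimizer convergence."""
    def __init__(self, hidden=8):
        super().__init__()
        self.cell = nn.RNNCell(1, hidden, bias=False)
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, grad, h):
        g = grad.detach().reshape(-1, 1)   # one "sample" per coordinate
        h = self.cell(g, h)
        update = self.out(h).reshape(grad.shape)
        return update, h

# Illustrative usage against an optimizee parameter `theta`:
#   h = torch.zeros(theta.numel(), 8)
#   update, h = meta_opt(theta.grad.flatten(), h)
#   theta.data.add_(update.reshape(theta.shape))
```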
mllQ3QNNr9d | Towards Understanding Normalization in Neural ODEs | [
"Julia Gusak",
"Larisa Markeeva",
"Talgat Daulbaev",
"Alexander Katrutsa",
"Andrzej Cichocki",
"Ivan Oseledets"
] | Normalization is an important and vastly investigated technique in deep learning. However, its role for Ordinary Differential Equation based networks (Neural ODEs) is still poorly understood. This paper investigates how different normalization techniques affect the performance of Neural ODEs. Particularly, we show that it is possible to achieve $93\%$ accuracy on the CIFAR-10 classification task, and to the best of our knowledge, this is the highest reported accuracy among Neural ODEs tested on this problem. | [
"Neural Ordinary Differential Equations",
"Normalization",
"Image Classification",
"Deep Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=mllQ3QNNr9d | https://openreview.net/forum?id=mllQ3QNNr9d | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"lap4ZP-58h_"
],
"note_type": [
"decision"
],
"note_created": [
1587925085206
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}"
]
} |
M390_F-0o | The equivalence between Stein variational gradient descent and black-box variational inference | [
"Casey Chu",
"Kentaro Minami",
"Kenji Fukumizu"
] | We formalize an equivalence between two popular methods for Bayesian inference: Stein variational gradient descent (SVGD) and black-box variational inference (BBVI). In particular, we show that BBVI corresponds precisely to SVGD when the kernel is the neural tangent kernel. Furthermore, we interpret SVGD and BBVI as kernel gradient flows; we do this by leveraging the recent perspective that views SVGD as a gradient flow in the space of probability distributions and showing that BBVI naturally motivates a Riemannian structure on that space. We observe that kernel gradient flow also describes dynamics found in the training of generative adversarial networks (GANs). This work thereby unifies several existing techniques in variational inference and generative modeling and identifies the kernel as a fundamental object governing the behavior of these algorithms, motivating deeper analysis of its properties. | [
"variational inference",
"equivalence",
"svgd",
"bbvi",
"kernel",
"space",
"popular methods",
"bayesian inference",
"particular"
] | Accept (Poster) | https://openreview.net/pdf?id=M390_F-0o | https://openreview.net/forum?id=M390_F-0o | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"dUoLoBvVSH"
],
"note_type": [
"decision"
],
"note_created": [
1582774426068
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
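For reference alongside the SVGD/BBVI record above, the standard SVGD update of Liu and Wang (2016) with an RBF kernel looks as follows; the paper's claim is that replacing this fixed kernel with the neural tangent kernel of the variational pushforward recovers BBVI. The bandwidth and step size are arbitrary choices here.

```python
import numpy as np

def svgd_step(particles, score, bandwidth=1.0, step_size=0.1):
    """One SVGD update with an RBF kernel.

    particles: (n, d) array; score(x) returns grad log p at each row.
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]   # (n, n, d): x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)                  # (n, n)
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))          # k(x_i, x_j)
    grad_logp = score(particles)                            # (n, d)
    drift = K.T @ grad_logp                                 # kernel-weighted scores
    # grad_{x_j} k(x_j, x_i) = k_ij (x_i - x_j) / h^2, summed over j:
    repulsion = np.einsum("ji,jid->id", K, -diffs) / bandwidth ** 2
    phi = (drift + repulsion) / n
    return particles + step_size * phi

# e.g. for a standard normal target: svgd_step(x, score=lambda x: -x)
```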
u9LrVLDu5i | Stochastic gradient algorithms from ODE splitting perspective | [
"Daniil Merkulov",
"Ivan Oseledets"
] | We present a different view on stochastic optimization, which goes back to splitting schemes for approximate solutions of ODEs. In this work, we provide a connection between the stochastic gradient descent approach and a first-order splitting scheme for ODEs. We consider a special case of splitting, which is inspired by machine learning applications, and derive a new upper bound on the global splitting error for it. We show that the Kaczmarz method is the limit case of the splitting scheme for unit-batch SGD on the linear least squares problem. We support our findings with systematic empirical studies, which demonstrate that a more accurate solution of the local problems leads to step-size robustness and provides better convergence in time and iterations on the softmax regression problem. | [
"SGD",
"Splitting",
"ODE"
] | Accept (Poster) | https://openreview.net/pdf?id=u9LrVLDu5i | https://openreview.net/forum?id=u9LrVLDu5i | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"sCdA-nF5_"
],
"note_type": [
"decision"
],
"note_created": [
1582774458517
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
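The abstract in the record above states that Kaczmarz is the limit of the splitting scheme for unit-batch SGD on linear least squares: exactly solving each one-row local problem rather than taking one gradient step on it. A small numerical illustration of that contrast (problem sizes and step size are arbitrary):

```python
import numpy as np

def sgd_step(x, a_i, b_i, lr):
    return x - lr * (a_i @ x - b_i) * a_i            # one gradient step on row i

def kaczmarz_step(x, a_i, b_i):
    return x + (b_i - a_i @ x) / (a_i @ a_i) * a_i   # exact local (one-row) solve

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
x_sgd, x_kz = np.zeros(5), np.zeros(5)
for _ in range(2000):
    i = rng.integers(50)
    x_sgd = sgd_step(x_sgd, A[i], b[i], lr=0.05)
    x_kz = kaczmarz_step(x_kz, A[i], b[i])
print(np.linalg.norm(A @ x_sgd - b), np.linalg.norm(A @ x_kz - b))
```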
mTmgaxwynS | Constrained Neural Ordinary Differential Equations with Stability Guarantees | [
"Aaron Tuor",
"Jan Drgona",
"Draguna Vrabie"
] | Differential equations are frequently used in engineering domains, such as modeling and control of industrial systems, where safety and performance guarantees are of paramount importance. Traditional physics-based modeling approaches require domain expertise and are often difficult to tune or adapt to new systems. In this paper, we show how to model discrete ordinary differential equations (ODEs) with algebraic nonlinearities as deep neural networks with varying degrees of prior knowledge. We derive the stability guarantees of the network layers based on the implicit constraints imposed on the eigenvalues of the weight matrices. Moreover, we show how to use barrier methods to generically handle additional inequality constraints. We demonstrate the prediction accuracy of learned neural ODEs evaluated on open-loop simulations compared to ground truth dynamics with bi-linear terms. | [
"Deep Learning",
"Ordinary Differential Equations",
"Physics Informed Machine Learning",
"Physics Informed Neural Networks",
"Eigenvalue Constraints"
] | Accept (Poster) | https://openreview.net/pdf?id=mTmgaxwynS | https://openreview.net/forum?id=mTmgaxwynS | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"qMzuBbiczX"
],
"note_type": [
"decision"
],
"note_created": [
1582774481374
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
iE8tFa4Nq | Lagrangian Neural Networks | [
"Miles Cranmer",
"Sam Greydanus",
"Stephan Hoyer",
"Peter Battaglia",
"David Spergel",
"Shirley Ho"
] | Accurate models of the world are built upon notions of its underlying symmetries. In physics, these symmetries correspond to conservation laws, such as for energy and momentum. Yet even though neural network models see increasing use in the physical sciences, they struggle to learn these symmetries. In this paper, we propose Lagrangian Neural Networks (LNNs), which can parameterize arbitrary Lagrangians using neural networks. In contrast to models that learn Hamiltonians, LNNs do not require canonical coordinates, and thus perform well in situations where canonical momenta are unknown or difficult to compute. Unlike previous approaches, our method does not restrict the functional form of learned energies and will produce energy-conserving models for a variety of tasks. We test our approach on a double pendulum and a relativistic particle, demonstrating energy conservation where a baseline approach incurs dissipation and modeling relativity without canonical coordinates where a Hamiltonian approach fails. | [
"Physics",
"Unsupervised Learning",
"Energy",
"Representation Learning",
"Dynamics",
"Lagrangians",
"Hamiltonians",
"Differential Equations",
"Neural ODEs",
"Invariants",
"Physical Prior",
"Inductive Bias"
] | Accept (Poster) | https://openreview.net/pdf?id=iE8tFa4Nq | https://openreview.net/forum?id=iE8tFa4Nq | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"6KmrokREM"
],
"note_type": [
"decision"
],
"note_created": [
1582774500368
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
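The mechanism behind the Lagrangian Neural Networks record above is the standard rearrangement of the Euler-Lagrange equations, which lets a network parameterizing $L(q, \dot q)$ yield accelerations that an ODE solver can integrate:

```latex
\frac{d}{dt}\nabla_{\dot q} L = \nabla_q L
\quad\Longrightarrow\quad
\ddot q = \left(\nabla_{\dot q}\nabla_{\dot q}^{\top} L\right)^{-1}
\left[\nabla_q L - \left(\nabla_q \nabla_{\dot q}^{\top} L\right)\dot q\right]
```

Because $\ddot q$ is derived from $L$ rather than predicted directly, trajectories inherit the conservation properties of the learned Lagrangian, which is the energy-conservation benefit the abstract describes.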
fg2ZFmXFO3 | Neural Operator: Graph Kernel Network for Partial Differential Equations | [
"Anima Anandkumar",
"Kamyar Azizzadenesheli",
"Kaushik Bhattacharya",
"Nikola Kovachki",
"Zongyi Li",
"Burigede Liu",
"Andrew Stuart"
] | The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces. The purpose of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators). We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators. The kernel integration is computed by message passing on graph networks. This approach has substantial practical consequences which we will illustrate in the context of mappings between input data to partial differential equations (PDEs) and their solutions. In this context, such learned networks can generalize among different approximation methods for the PDE (such as finite difference or finite element methods) and among approximations corresponding to different underlying levels of resolution and discretization. Experiments confirm the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers. | [
"Graph Neural Networks",
"Partial Differential Equations",
"Kernel Methods",
"Infinite Space Mappings"
] | Accept (Poster) | https://openreview.net/pdf?id=fg2ZFmXFO3 | https://openreview.net/forum?id=fg2ZFmXFO3 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"2Rg9HACtRI"
],
"note_type": [
"decision"
],
"note_created": [
1582774511442
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
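On a graph discretization, the kernel integration described in the Neural Operator record above reduces to averaging kernel-weighted messages from neighboring nodes. A hedged sketch of one such layer follows; the kernel network `kappa`, the edge featurization, and the ReLU choice are assumptions, not the paper's exact architecture.

```python
import numpy as np

def kernel_message_pass(v, coords, a, kappa, W, neighbors):
    """One layer of graph kernel integration (illustrative sketch).

    v: (N, d) node features; coords: (N, p) node positions; a: (N, q) input
    function values; kappa(edge_feats) -> (d, d) matrix-valued kernel;
    neighbors: list of index arrays. Approximates
        v_new(x) = relu(W v(x) + mean_{y in N(x)} k(x, y, a(x), a(y)) v(y)).
    """
    N, d = v.shape
    out = v @ W.T                        # local linear term
    for i in range(N):
        acc = np.zeros(d)
        for j in neighbors[i]:
            edge = np.concatenate([coords[i], coords[j], a[i], a[j]])
            acc += kappa(edge) @ v[j]    # kernel-weighted message from y to x
        out[i] += acc / max(len(neighbors[i]), 1)
    return np.maximum(out, 0.0)          # ReLU nonlinearity
```

Because the kernel acts on coordinates and function values rather than on grid indices, the same learned layer can be evaluated at any resolution, which is the discretization-invariance the abstract emphasizes.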
G6qPBpSfyN | Port-Hamiltonian Gradient Flows | [
"Michael Poli",
"Stefano Massaroli",
"Atsushi Yamashita",
"Hajime Asama",
"Jinkyoo Park"
] | In this paper we present a general framework for continuous-time gradient descent, often referred to as gradient flow. We extend Hamiltonian gradient flows, which ascribe mechanical dynamics to neural network parameters and constitute a natural continuous-time alternative to discrete momentum-based gradient descent approaches. The proposed Port-Hamiltonian Gradient Flow (PHGF) casts neural network training into a system-theoretic framework: a fictitious physical system is coupled to the neural network by setting the loss function as an energy term of the system. As autonomous port-Hamiltonian systems naturally tend to dissipate energy towards one of its minima by construction, solving the system simultaneously trains the neural network. We show that general PHGFs are compatible with both continuous-time data-stream optimization, where the optimizer processes a continuous stream of data, as well as standard fixed-step optimization. In continuous time, PHGFs allow for the embedding of black-box adaptive-step ODE solvers and are able to stick to the energy manifold, thus avoiding divergence due to large learning rates. In fixed-step optimization, on the other hand, PHGFs open the door to novel fixed-step approaches based on symplectic discretizations of the port-Hamiltonian system with a similar memory footprint and computational complexity as momentum optimizers. | [
"optimization",
"gradient flow",
"port-Hamiltonian"
] | Accept (Poster) | https://openreview.net/pdf?id=G6qPBpSfyN | https://openreview.net/forum?id=G6qPBpSfyN | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"6vsTAxWmVF"
],
"note_type": [
"decision"
],
"note_created": [
1582774530267
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
YDNzrQRsu | Differentiable Molecular Simulations for Control and Learning | [
"Wujie Wang",
"Simon Axelrod",
"Rafael Gómez-Bombarelli"
] | Molecular simulations use statistical mechanics at the atomistic scale to enable both the elucidation of fundamental mechanisms and the engineering of matter for desired tasks. Non-quantized molecular behavior is typically simulated with differential equations parameterized by a Hamiltonian, or energy function. The Hamiltonian describes the state of the system and its interactions with the environment. In order to derive predictive microscopic models, one wishes to infer a molecular Hamiltonian from macroscopic quantities. From the perspective of engineering, one wishes to control the Hamiltonian to achieve desired macroscopic quantities. In both cases, the goal is to modify the Hamiltonian such that bulk properties of the simulated system match a given target. We demonstrate how this can be achieved using differentiable simulations where bulk target observables and simulation outcomes can be analytically differentiated with respect to Hamiltonians. Our work opens up new routes for parameterizing Hamiltonians to infer macroscopic models and develops control protocols. | [
"Molecular Dynamics",
"Quantum Dynamics",
"Differentiable Simulations",
"Statistical Physics",
"Machine Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=YDNzrQRsu | https://openreview.net/forum?id=YDNzrQRsu | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"ERLfUjL_80"
],
"note_type": [
"decision"
],
"note_created": [
1582774556595
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
98ZntVFFrn | Encoder-decoder neural network for solving the nonlinear Fokker-Planck-Landau collision operator in XGC | [
"Marco Andres Miller",
"Randy Michael Churchill",
"Choong-Seock Chang",
"Robert Hager"
] | An encoder-decoder neural network has been used to accelerate the numerical solving of a partial integro-differential equation, the Fokker-Planck-Landau collision operator. This is part of the governing equation in the massively parallel particle-in-cell code, XGC, which is used to study turbulence in fusion energy devices. The neural network emphasizes physics-inspired learning, where it is taught to respect physical conservation constraints of the collision operator by including them in the training loss, along with the L2 loss. The run time for the current Picard iterative solver of the operator is $O(n^2)$, where n is the number of plasma species. As the XGC1 code begins to attack problems including a larger number of species, the collision operator will become computationally expensive, making the neural network solver even more important, especially since the training only scales as $O(n)$. A wide enough range of collisionality has been considered in the training data to ensure the full domain of collision physics is captured. Future work will include expanding the network to handle multiple plasma species. | [
"neural network",
"collision operator",
"xgc",
"nonlinear",
"equation",
"numerical solving",
"partial",
"part",
"massively parallel"
] | Accept (Poster) | https://openreview.net/pdf?id=98ZntVFFrn | https://openreview.net/forum?id=98ZntVFFrn | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"xYaKWIiJQk"
],
"note_type": [
"decision"
],
"note_created": [
1582774579176
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
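The Fokker-Planck-Landau record above describes training with conservation constraints added to the L2 loss. One plausible form of such an objective penalizes the density, parallel-momentum, and energy moments of the predicted collision term (a Coulomb collision operator conserves all three); the velocity-grid weights and moment definitions below are illustrative placeholders, not the XGC discretization.

```python
import torch

def conservation_loss(pred_df, target_df, v_par, v_perp, weights, lam=1.0):
    """L2 loss plus conservation penalties for a collision-operator surrogate.

    pred_df/target_df: predicted and reference collision terms on a velocity
    grid; v_par, v_perp: grid coordinates; weights: quadrature weights.
    """
    l2 = torch.mean((pred_df - target_df) ** 2)
    density = torch.sum(weights * pred_df)                     # particle number
    momentum = torch.sum(weights * v_par * pred_df)            # parallel momentum
    energy = torch.sum(weights * (v_par**2 + v_perp**2) * pred_df)
    penalty = density**2 + momentum**2 + energy**2             # should all be ~0
    return l2 + lam * penalty
```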
pmvEzAbl7M | Generative ODE Modeling with Known Unknowns | [
"Ori Linial",
"Uri Shalit"
] | In several crucial applications, domain knowledge is encoded by a system of ordinary differential equations (ODE). A motivating example is intensive care unit patients: The dynamics of some vital physiological variables such as heart rate, blood pressure and arterial compliance can be approximately described by a known system of ODEs. Typically, some of the ODE variables are directly observed while some are unobserved, and in addition many other variables are observed but not modeled by the ODE, for example body temperature. Importantly, the unobserved ODE variables are “known-unknowns”: We know they exist and their functional dynamics, but cannot measure them directly, nor do we know the function tying them to all observed measurements. Estimating these known-unknowns is often highly valuable to physicians. Under this scenario we wish to: (i) learn the static parameters of the ODE generating each observed time-series, (ii) infer the dynamic sequence of all ODE variables including the known-unknowns, and (iii) extrapolate the future of the ODE variables and the observations of the time-series. We address this task with a variational autoencoder incorporating the known ODE function, called GOKU-net for Generative ODE modeling with Known Unknowns. | [
"generative models",
"variatonal autoencoder",
"physical system",
"ordinary differential equation",
"healthcare"
] | Accept (Poster) | https://openreview.net/pdf?id=pmvEzAbl7M | https://openreview.net/forum?id=pmvEzAbl7M | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"VeU6n0pE2Q"
],
"note_type": [
"decision"
],
"note_created": [
1582774592231
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
P42AmLB1-4 | Generating Control Policies for Autonomous Vehicles Using Neural ODEs | [
"Houston Lucas",
"Richard Kelley"
] | The problem of robot control often requires solving a system of
ordinary differential equations (ODEs). Traditionally this has been
accomplished by using iterative ODE solvers. These solvers start with
an initial guess, which is iteratively improved to converge to a
correct solution. However, traditional solvers can be slow and do not
combine well with other systems since they are not differentiable. In
response, some researchers have proposed using neural networks in an
end-to-end system that directly maps perceptual inputs to control
actions. Because of their differentiability, end-to-end approaches can
be composed with other modules more readily than traditional ODE
solvers. However, the end-to-end approach no longer carries the
guarantee that the solution obeys the required dynamics.
We propose a framework for using Neural ODE to
combine the flexibility of the end-to-end approach with the guarantees
of traditional solvers. In our approach a neural network is used to
provide the initial guess to a differentiable ODE solver. The ODE
solver then yields a solution trajectory. We use this trajectory to
improve the guesses of the neural network. This
framework allows the neural network to learn initial guesses that are
close to the correct solution, improving overall system performance
while ensuring that dynamics constraints are always satisfied. We
demonstrate the utility of this framework in the case of robot
control, where we use it to solve a family of boundary value problems
that are essential for steering an autonomous vehicle to a goal state. | [
"robotics",
"ODE",
"BVP"
] | Accept (Poster) | https://openreview.net/pdf?id=P42AmLB1-4 | https://openreview.net/forum?id=P42AmLB1-4 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"0v4LdTdvG"
],
"note_type": [
"decision"
],
"note_created": [
1582774617432
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
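The control-policy record above describes a shooting setup: a network proposes the unknown initial conditions, a differentiable integrator rolls the dynamics out, and the boundary mismatch trains the network. A minimal sketch under those assumptions follows; the dynamics `f`, the problem-spec encoding, and the state split are illustrative, not the paper's system.

```python
import torch

def integrate(f, y0, t0, t1, steps=50):
    """Differentiable fixed-step RK4 rollout of dy/dt = f(t, y)."""
    y, h = y0, (t1 - t0) / steps
    for k in range(steps):
        t = t0 + k * h
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def bvp_loss(guess_net, f, spec, y_start, y_goal, t1):
    free_init = guess_net(spec)            # network's guess for the unknown
    y0 = torch.cat([y_start, free_init])   # part of the initial state
    yT = integrate(f, y0, 0.0, t1)
    # Train on the boundary mismatch; the rollout itself always obeys f.
    return torch.sum((yT[: y_goal.numel()] - y_goal) ** 2)
```

The design point the abstract makes survives in the sketch: the dynamics constraint is satisfied by construction (the trajectory always comes from integrating f), while learning only moves the initial guess.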
C4ydiXrYw | Stochasticity in Neural ODEs: An Empirical Study | [
"Alexandra Volokhova",
"Viktor Oganesyan",
"Dmitry Vetrov"
] | Stochastic regularization of neural networks (e.g. dropout) is a wide-spread technique in deep learning that allows for better generalization. Despite its success, continuous-time models, such as neural ordinary differential equation (ODE), usually rely on a completely deterministic feed-forward operation. This work provides an empirical study of stochastically regularized neural ODE on several image-classification tasks (CIFAR-10, CIFAR-100, TinyImageNet). Building upon the formalism of stochastic differential equations (SDEs), we demonstrate that neural SDE is able to outperform its deterministic counterpart. Further, we show that data augmentation during the training improves the performance of both deterministic and stochastic versions of the same model. However, the improvements obtained by the data augmentation completely eliminate the empirical gains of the stochastic regularization, making the difference in the performance of neural ODE and neural SDE negligible. | [
"neural ODE",
"neural ordinary differential equations",
"continuous models",
"neural SDE",
"neural stochastic differential equations",
"stochasticity"
] | Accept (Poster) | https://openreview.net/pdf?id=C4ydiXrYw | https://openreview.net/forum?id=C4ydiXrYw | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"0EjIUGRtOG"
],
"note_type": [
"decision"
],
"note_created": [
1582774629051
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
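The stochastic Neural ODE variant studied in the record above can be simulated with Euler-Maruyama. A minimal sketch, assuming separate drift and diffusion networks; the multiplicative-noise form here is one simple choice, not necessarily the paper's regularization scheme.

```python
import torch

def neural_sde_forward(drift, diffusion, x, t0=0.0, t1=1.0, steps=40):
    """Euler-Maruyama rollout of dx = drift(t, x) dt + diffusion(t, x) dW."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        noise = torch.randn_like(x)                       # dW ~ N(0, h I)
        x = x + drift(t, x) * h + diffusion(t, x) * (h ** 0.5) * noise
        t += h
    return x
```

Setting the diffusion network to zero recovers the deterministic neural ODE forward pass, which is the comparison the abstract draws.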
8WKd467B8H | Neural Ordinary Differential Equation Value Networks for Parametrized Action Spaces | [
"Stefano Massaroli",
"Michael Poli",
"Sanzhar Bakhtiyarov",
"Atsushi Yamashita",
"Hajime Asama",
"Jinkyoo Park"
] | Action spaces equipped with parameter sets are a common occurrence in reinforcement learning applications. Solutions to problems of this class have been developed under different frameworks, such as parametrized action Markov decision processes (PAMDP) or hierarchical reinforcement learning (HRL). These approaches often require extensions or modifications to standard existing algorithms developed on standard MDPs. For this reason, they can be unwieldy and, particularly in the case of HRL, computationally inefficient. We propose adopting a different parametrization scheme for state-action value networks based on neural ordinary differential equations (NODEs) as a scalable, plug-and-play approach for parametrized action spaces. NODE value networks do not require extensive modification to existing algorithms nor the adoption of HRL methods. Our solution can directly be integrated into existing training algorithms and opens up new opportunities in single-agent and multi-agent settings with tight precision constraints on the action parameters, such as robotics. | [
"neural ODE",
"reinforcement learning",
"parametrized action"
] | Accept (Poster) | https://openreview.net/pdf?id=8WKd467B8H | https://openreview.net/forum?id=8WKd467B8H | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"1rZHEoxF3Y"
],
"note_type": [
"decision"
],
"note_created": [
1582774640127
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
knjWFNx6CN | Dissipative SymODEN: Encoding Hamiltonian Dynamics with Dissipation and Control into Deep Learning | [
"Yaofeng Desmond Zhong",
"Biswadip Dey",
"Amit Chakraborty"
] | In this work, we introduce Dissipative SymODEN, a deep learning architecture which can infer the dynamics of a physical system with dissipation from observed state trajectories. To improve prediction accuracy while reducing network size, Dissipative SymODEN encodes the port-Hamiltonian dynamics with energy dissipation and external input into the design of its computation graph and learns the dynamics in a structured way. The learned model, by revealing key aspects of the system, such as the inertia, dissipation, and potential energy, paves the way for energy-based controllers. | [
"Deep Learning",
"Physics-induced Prior",
"Transparency"
] | Accept (Poster) | https://openreview.net/pdf?id=knjWFNx6CN | https://openreview.net/forum?id=knjWFNx6CN | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"RrwugoR05L"
],
"note_type": [
"decision"
],
"note_created": [
1582774653179
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
ObkQpUsR-x | A Free-Energy Principle for Representation Learning | [
"Yansong Gao",
"Pratik Chaudhari"
] | We employ a formal connection of machine learning with thermodynamics to characterize the quality of learnt representations for transfer learning. We discuss how information-theoretic functionals such as rate, distortion and classification loss of a model lie on a convex, so-called equilibrium surface. We prescribe dynamical processes to traverse this surface under constraints, e.g., an iso-classification process that trades off rate and distortion to keep the classification loss unchanged. We demonstrate how this process can be used for transferring representations from a source dataset to a target dataset while keeping the classification loss constant. Experimental validation of the theoretical results is provided on standard image-classification datasets.
| [
"information theory",
"thermodynamics",
"rate-distortion theory",
"transfer learning",
"information bottleneck"
] | Accept (Poster) | https://openreview.net/pdf?id=ObkQpUsR-x | https://openreview.net/forum?id=ObkQpUsR-x | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"NmItt0upF8"
],
"note_type": [
"decision"
],
"note_created": [
1582774665218
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
VY1hqB5Z7V | Urban air pollution forecasts generated from latent space representation | [
"Cesar Quilodran Casas",
"Rossella Arcucci",
"Yike Guo"
] | This paper presents an approach to replicate computational fluid dynamics simulations of air pollution using deep learning. The study area is in London, where a tracer aims to replicate a busy traffic junction. Our method, which integrates Principal Components Analysis (PCA) and autoencoders (AE), is a computationally cheaper way to generate a latent space representation of the original unstructured mesh model. Once PCA is applied to the original model solution, a Fully-Connected AE is trained on the full-rank PCs. This yields a compression of the original data by $10^{6}$. The number of trainable parameters is also reduced using this method. An LSTM-based approach is used on the latent space to produce faster forecasts of the air pollution tracer. | [
"latent space representation",
"pca",
"air pollution",
"deep learning",
"study area",
"london"
] | Accept (Poster) | https://openreview.net/pdf?id=VY1hqB5Z7V | https://openreview.net/forum?id=VY1hqB5Z7V | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"khzQPXlh_p"
],
"note_type": [
"decision"
],
"note_created": [
1582774674852
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
cfKpOiUzF | Solving ODE with Universal Flows: Approximation Theory for Flow-Based Models | [
"Chin-Wei Huang",
"Laurent Dinh",
"Aaron Courville"
] | Normalizing flows are powerful invertible probabilistic models that can be used to translate between two probability distributions, in a way that allows us to efficiently track the change of probability density. However, to trade for computational efficiency in sampling and in evaluating the log-density, special parameterization designs have been proposed at the cost of representational expressiveness. In this work, we propose to use ODEs as a framework to establish universal approximation theory for certain families of flow-based models. | [
"Normalizing flows",
"ODE",
"neural ODE",
"universal approximation"
] | Accept (Poster) | https://openreview.net/pdf?id=cfKpOiUzF | https://openreview.net/forum?id=cfKpOiUzF | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"jWUERXmeWC"
],
"note_type": [
"decision"
],
"note_created": [
1582774687959
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
xi_8IzW9hG | Nano-Material Configuration Design with Deep Surrogate Langevin Dynamics | [
"Thanh V. Nguyen",
"Youssef Mroueh",
"Samuel Hoffman",
"Payel Das",
"Pierre Dognin",
"Giuseppe Romano",
"Chinmay Hegde"
] | We consider the problem of optimizing by sampling under multiple black-box constraints in nano-material design. We leverage the posterior regularization framework and show that the constraint satisfaction problem can be formulated as sampling from a Gibbs distribution. The main challenges come from the black-box nature of the constraints obtained by solving complex and expensive PDEs. To circumvent these issues, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling. We devise two approaches for learning surrogate gradients of the black-box functions: first, by using zero-order gradients approximations; and second, by approximating the Langevin gradients with deep neural networks. We prove the convergence of both approaches when the target distribution is $\log$-concave and smooth. We also show the effectiveness of our approaches over Bayesian optimization in designing optimal nano-porous material configurations that achieve low thermal conductivity and reasonable mechanical stability. | [
"Langevin dynamics",
"differential equations",
"black-box optimization",
"surrogate methods"
] | Accept (Poster) | https://openreview.net/pdf?id=xi_8IzW9hG | https://openreview.net/forum?id=xi_8IzW9hG | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"tT4APLkg-"
],
"note_type": [
"decision"
],
"note_created": [
1582774698649
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
nEPNoiGsU3 | Bringing PDEs to JAX with forward and reverse modes automatic differentiation | [
"Ivan Yashchuk"
] | Partial differential equations (PDEs) are used to describe a variety of physical phenomena.
Often these equations do not have analytical solutions and numerical approximations are used instead. One of the common methods to solve PDEs is the finite element method.
Computing derivative information of the solution with respect to the input parameters is important in many tasks in scientific computing.
We extend the JAX automatic differentiation library with an interface to the Firedrake finite element library.
The high-level symbolic representation of PDEs allows bypassing differentiation through the possibly many low-level iterations of the underlying nonlinear solvers.
Differentiating through Firedrake solvers is done using tangent-linear and adjoint equations.
This enables the efficient composition of finite element solvers with arbitrary differentiable programs. | [
"partial differential equations",
"adjoint",
"tanget linear",
"automatic differentiation"
] | Accept (Poster) | https://openreview.net/pdf?id=nEPNoiGsU3 | https://openreview.net/forum?id=nEPNoiGsU3 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"R0-xH80VV0"
],
"note_type": [
"decision"
],
"note_created": [
1582774708524
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
Jxv0mWsPc | Fast Convergence for Langevin with Matrix Manifold Structure | [
"Ankur Moitra",
"Andrej Risteski"
] |
In this paper, we study the problem of sampling from distributions of the form p(x) \propto e^{-\beta f(x)} for some function f whose values and gradients we can query. This mode of access to f is natural in the scenarios in which such problems arise, for instance sampling from posteriors in parametric Bayesian models. Classical results show that a natural random walk, Langevin diffusion, mixes rapidly when f is convex. Unfortunately, even in simple examples, the applications listed above will entail working with functions f that are nonconvex -- for which sampling from p may in general require an exponential number of queries.
In this paper, we study one aspect of nonconvexity relevant for modern machine learning applications: the existence of invariances (symmetries) in the function f, as a result of which the distribution p will have manifolds of points with equal probability. We give a recipe for proving mixing time bounds of Langevin dynamics in order to sample from manifolds of local optima of the function f in settings where the distribution is well-concentrated around them. We specialize our arguments to classic matrix factorization-like Bayesian inference problems where we get noisy measurements A(XX^T), X \in R^{d \times k} of a low-rank matrix, i.e. f(X) = \|A(XX^T) - b\|^2_2, X \in R^{d \times k}, and \beta is the inverse of the variance of the noise. Such functions f are invariant under orthogonal transformations, and include problems like matrix factorization, sensing, and completion. Beyond sampling, Langevin dynamics is a popular toy model for studying stochastic gradient descent. Along these lines, we believe that our work is an important first step towards understanding how SGD behaves when there is a high degree of symmetry in the space of parameters that produce the same output. | [
"Langevin",
"diffusion",
"Ricci Curvature",
"Poincare inequality"
] | Accept (Poster) | https://openreview.net/pdf?id=Jxv0mWsPc | https://openreview.net/forum?id=Jxv0mWsPc | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"QJ-B-nN26c"
],
"note_type": [
"decision"
],
"note_created": [
1582774719496
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
0XCli3H-8F | Amortized Finite Element Analysis for Fast PDE-Constrained Optimization | [
"Tianju Xue",
"Alex Beatson",
"Sigrid Adriaenssens",
"Ryan P. Adams"
] | Optimizing the parameters of partial differential equations (PDEs), i.e., PDE-constrained optimization (PDE-CO), allows us to model natural systems from observations or perform rational design of structures with complicated mechanical, thermal, or electromagnetic properties. However, PDE-CO is often computationally prohibitive due to the need to solve the PDE—typically via finite element analysis (FEA)—at each step of the optimization procedure. In this paper we propose amortized finite element analysis (AmorFEA), in which a neural network learns to produce accurate PDE solutions, while preserving many of the advantages of traditional finite element methods. As FEA is a variational procedure, AmorFEA is a direct analogue to popular amortized inference approaches in latent variable models, with the finite element basis acting as the variational family. AmorFEA can perform PDE-CO without the need to repeatedly solve the associated PDE, accelerating optimization when compared to a traditional workflow using FEA and the adjoint method. | [
"Amortized FEA",
"PDE-Constrained Optimization"
] | Accept (Poster) | https://openreview.net/pdf?id=0XCli3H-8F | https://openreview.net/forum?id=0XCli3H-8F | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"76RO82xwWV"
],
"note_type": [
"decision"
],
"note_created": [
1582774730390
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
vO4cDCAVFp | Progressive Growing of Neural ODEs | [
"Hammad A. Ayyubi",
"Yi Yao",
"Ajay Divakaran"
] | Neural Ordinary Differential Equations (NODEs) have proven to be a powerful modeling tool for approximating (interpolation) and forecasting (extrapolation) irregularly sampled time series data. However, their performance degrades substantially when applied to real-world data, especially long-term data with complex behaviors (e.g., long-term trend across years, mid-term seasonality across months, and short-term local variation across days). To address the modeling of such complex data with different behaviors at different frequencies (time spans), we propose a novel progressive learning paradigm of NODEs for long-term time series forecasting. Specifically, following the principle of curriculum learning, we gradually increase the complexity of data and network capacity as training progresses. Our experiments with both synthetic data and real traffic data (PeMS Bay Area traffic data) show that our training methodology consistently improves the performance of vanilla NODEs by over 64%. | [
"Neural ODEs",
"Curriculum Learning",
"Progressive Growing",
"Time Series"
] | Accept (Poster) | https://openreview.net/pdf?id=vO4cDCAVFp | https://openreview.net/forum?id=vO4cDCAVFp | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"f7V9gHN0Z"
],
"note_type": [
"decision"
],
"note_created": [
1582774741574
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
Rsmqn9R2Mg | Neural Dynamical Systems | [
"Viraj Mehta",
"Ian Char",
"Willie Neiswanger",
"Youngseog Chung",
"Andrew Oakleigh Nelson",
"Mark D Boyer",
"Egemen Kolemen",
"Jeff Schneider"
] | We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models which incorporates prior knowledge in the form of systems of ordinary differential equations. NDS uses neural models to estimate free parameters of the system, predicts residual terms, and numerically integrates over time to predict future states. It also natively handles irregularly sampled data and implicitly learns values of interpretable system parameters. We find that NDS learns dynamics with higher accuracy and fewer samples than a variety of deep learning methods that do not incorporate the prior knowledge. We demonstrate these advantages first on synthetic dynamical systems and then on real data captured from deuterium shots from a nuclear fusion reactor. | [
"nuclear fusion",
"physics",
"differential equations",
"dynamical systems"
] | Accept (Poster) | https://openreview.net/pdf?id=Rsmqn9R2Mg | https://openreview.net/forum?id=Rsmqn9R2Mg | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"-dBKJZ4qR"
],
"note_type": [
"decision"
],
"note_created": [
1582774753830
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
pxlqJa21C | Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View. | [
"Yiping Lu*",
"Zhuohan Li*",
"Di He",
"Zhiqing Sun",
"Bin Dong",
"Tao Qin",
"Liwei Wang",
"Tie-yan Liu"
] | The Transformer architecture is widely used in natural language processing. Despite its success, the design principle of the Transformer remains elusive. In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a \emph{numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system}. In particular, how words in a sentence are abstracted into contexts by passing through the layers of the Transformer can be interpreted as approximating multiple particles' movement in space using the Lie-Trotter splitting scheme and Euler's method. Inspired by such a relationship, we propose to replace the Lie-Trotter splitting scheme with the more accurate Strang-Marchuk splitting scheme and design a new network architecture called Macaron Net. Through extensive experiments, we show that the Macaron Net is superior to the Transformer on both supervised and unsupervised learning tasks. | [
"Transformer",
"Ordinary Differential Equation",
"Multi-Particle Dynamic System",
"Natural Language Processing"
] | Accept (Poster) | https://openreview.net/pdf?id=pxlqJa21C | https://openreview.net/forum?id=pxlqJa21C | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"zscuQLNpGQ"
],
"note_type": [
"decision"
],
"note_created": [
1582774764321
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
gzcnMUReFv | Neural Differential Equations for Single Image Super-Resolution | [
"Teven Le Scao"
] | Although Neural Differential Equations have shown promise on toy problems such as MNIST, they have yet to be successfully applied to more challenging tasks. Inspired by variational methods for image restoration relying on partial differential equations, we choose to benchmark several forms of Neural DEs and backpropagation methods on single image super-resolution. The adjoint method previously proposed for gradient estimation has no theoretical stability guarantees; we find a practical case where this makes it unusable, and show that discrete sensitivity analysis has better stability. In our experiments, differential models match the performance of a state-of-the-art super-resolution model. | [
"Neural Differential Equations",
"Image super-resolution",
"Total variation methods",
"partial differential equations"
] | Accept (Poster) | https://openreview.net/pdf?id=gzcnMUReFv | https://openreview.net/forum?id=gzcnMUReFv | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"SyxFmtdn2"
],
"note_type": [
"decision"
],
"note_created": [
1582774775299
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
C2nr-4elBV | Can auto-encoders help with filling missing data? | [
"Marek Śmieja",
"Maciej Kołomycki",
"Łukasz Struski",
"Mateusz Juda",
"Mário A. T. Figueiredo"
] | This paper introduces an approach to filling in missing data based on deep auto-encoder models, suited to high-dimensional data exhibiting complex dependencies, such as images. The method exploits the properties of auto-encoders' vector fields, which allow us to approximate the gradient of the log-density from the reconstruction error, based on which we propose a projected gradient ascent algorithm to obtain the conditionally most probable estimate of the missing values. Experiments performed on benchmark datasets show that imputations produced by our model are sharp and realistic. | [
"missing data",
"auto-encoders",
"dynamical systems",
"generative models",
"imputation"
] | Accept (Poster) | https://openreview.net/pdf?id=C2nr-4elBV | https://openreview.net/forum?id=C2nr-4elBV | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"2qT4MriRuH"
],
"note_type": [
"decision"
],
"note_created": [
1582774795981
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
MLXkfw5y79 | Wavelet-Powered Neural Networks for Turbulence | [
"Arvind T. Mohan",
"Daniel Livescu",
"Michael Chertkov"
] | One of the fundamental driving phenomena for applications in engineering, earth sciences and climate is fluid turbulence. Modeling these flows and explaining their associated spatio-temporal phenomena are notoriously difficult tasks. Navier-Stokes (NS) equations describe all the details of the fluid motions, but require accounting for unfeasibly many degrees of freedom in the regime of developed turbulence. Model reduction and surrogate modeling of turbulence is a general methodology aiming to circumvent this curse of dimensionality. Originally driven by phenomenological considerations, multiple attempts to model-reduce NS equations got a new boost recently with Deep Learning (DL), trained on ground truth data, e.g. extracted from high-fidelity Direct Numerical Simulations (DNS). However, early attempts at building NNs to model turbulence have also revealed a lack of interpretability as the most significant shortcoming. In this paper we address the key challenge of devising a reduced but, at least partially, interpretable model. We take advantage of the balance between strong mathematical foundations and the physical interpretability of wavelet theory to build a spatio-temporally reduced dynamical map which fuses wavelet-based spatial decomposition with spatio-temporal modeling based on a Convolutional Long Short Term Memory (C-LSTM) architecture. It is shown that the wavelet-based NN makes progress in scaling to large flows by reducing computational costs and GPU memory requirements. | [
"Wavelets",
"Convolutional LSTM",
"turbulence"
] | Accept (Poster) | https://openreview.net/pdf?id=MLXkfw5y79 | https://openreview.net/forum?id=MLXkfw5y79 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"t4vwm34bKm"
],
"note_type": [
"decision"
],
"note_created": [
1582774806653
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
IaXBtMNFaa | Embedding Hard Physical Constraints in Convolutional Neural Networks for 3D Turbulence | [
"Arvind T. Mohan",
"Nicholas Lubbers",
"Daniel Livescu",
"Michael Chertkov"
] | Deep learning approaches have shown much promise for physical sciences, especially in dimensionality reduction and compression of large datasets. A major issue in deep learning of large-scale phenomena, like fluid turbulence, is the lack of physical guarantees. In this work, we propose a general framework to directly embed the notion of incompressible fluids into Convolutional Neural Networks, for coarse-graining of turbulence. These \textbf{physics-embedded neural networks} leverage interpretable strategies from numerical methods and computational fluid dynamics to enforce physical laws and boundary conditions by taking advantage of the mathematical properties of the underlying equations. We demonstrate results on 3D fully-developed turbulence, showing that the \textit{physics-aware inductive bias} drastically improves local conservation of mass, without sacrificing performance according to several other metrics characterizing the fluid flow. | [
"Physics Conservation Laws",
"Numerical Methods",
"Convolutional Neural Networks"
] | Accept (Poster) | https://openreview.net/pdf?id=IaXBtMNFaa | https://openreview.net/forum?id=IaXBtMNFaa | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"0rhL2inlcK"
],
"note_type": [
"decision"
],
"note_created": [
1582774821829
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
2QLwS_ZVWG | Learning-Based Strong Solutions to Forward and Inverse Problems in PDEs | [
"Leah Bar",
"Nir Sochen"
] | We introduce a novel neural network-based partial differential equation solver for forward and inverse problems. The solver is grid-free, mesh-free and shape-free, and the solution is approximated by a neural network.
We employ an unsupervised approach such that the input to the network is a point set in an arbitrary domain, and the output is the set of the corresponding function values. The network is trained to minimize deviations of the learned function from the strong PDE solution and to satisfy the boundary conditions.
The resulting solution in turn is an explicit, smooth, differentiable function with a known analytical form.
Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore enables the solution of high-order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems, where the optimized loss function consists of a few elements: fidelity terms of $L_2$ and $L_\infty$ norms that, unlike previous methods, promote a strong solution. Robust boundary condition constraints and additional regularizers are included as well. This setting is flexible in the sense that regularizers can be tailored to specific problems. We demonstrate our method on several free-shape 2D second-order systems with application to Electrical Impedance Tomography (EIT). | [
"PDEs",
"forward problems",
"inverse problems",
"unsupervised learning",
"deep networks",
"EIT"
] | Accept (Poster) | https://openreview.net/pdf?id=2QLwS_ZVWG | https://openreview.net/forum?id=2QLwS_ZVWG | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"5ZOfXmS18P"
],
"note_type": [
"decision"
],
"note_created": [
1582774835472
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
SOeY2AQR9 | How Chaotic Are Recurrent Neural Networks? | [
"Pourya Vakilipourtakalou",
"Lili Mou"
] | Recurrent neural networks (RNNs) are non-linear dynamic systems. Previous work suggests that RNNs may suffer from the phenomenon of chaos, where the system is sensitive to initial states and unpredictable in the long run. In this paper, however, we perform a systematic empirical analysis, showing that a vanilla or long short-term memory (LSTM) RNN does not exhibit chaotic behavior during training in real applications such as text generation. Our findings suggest that future work in this direction should address the other side of non-linear dynamics for RNNs. | [
"RNN",
"Chaos",
"NLP",
"Text Generation"
] | Accept (Poster) | https://openreview.net/pdf?id=SOeY2AQR9 | https://openreview.net/forum?id=SOeY2AQR9 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"BOXRk7yHS0"
],
"note_type": [
"decision"
],
"note_created": [
1582774846271
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
U03vS6mgX6 | A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth | [
"Yiping Lu",
"Chao Ma",
"Yulong Lu",
"Jianfeng Lu",
"Lexing Ying"
] | Training deep neural networks with stochastic gradient descent (SGD) can often achieve zero training loss on real-world tasks although the optimization landscape is known to be highly non-convex. To understand the success of SGD for training deep neural networks, this work presents a mean-field analysis of deep residual networks, based on a line of works that interpret the continuum limit of the deep residual network as an ordinary differential equation when the network capacity tends to infinity. Specifically, we propose a \textbf{new continuum limit} of deep residual networks, which enjoys a good landscape in the sense that \textbf{every local minimizer is global}.
This characterization enables us to derive the first global convergence result for multilayer neural networks in the mean-field regime. Furthermore, without assuming the convexity of the loss landscape, our proof relies on a zero-loss assumption at the global minimizer that can be achieved when the model shares a universal approximation property. Key to our result is the observation that a deep residual network resembles a shallow network ensemble~\cite{veit2016residual}, \emph{i.e.} a two-layer network. We bound the difference between the shallow network and our ResNet model via the adjoint sensitivity method, which enables us to apply existing mean-field analyses of two-layer networks to deep networks. Furthermore, we propose several novel training schemes based on the new continuous model, including one training procedure that switches the order of the residual blocks and results in strong empirical performance on the benchmark datasets. | [
"Optimization",
"Mean-field Analysis"
] | Accept (Poster) | https://openreview.net/pdf?id=U03vS6mgX6 | https://openreview.net/forum?id=U03vS6mgX6 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"htUj9NiVoI"
],
"note_type": [
"decision"
],
"note_created": [
1582774857093
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
SDD5n1888 | Learning To Solve Differential Equations Across Initial Conditions | [
"Shehryar Malik",
"Usman Anwar",
"Ali Ahmed",
"Alireza Aghasi"
] | Recently, there has been a lot of interest in using neural networks for solving partial differential equations. A number of neural network-based partial differential equation solvers have been formulated which provide performance equivalent, and in some cases even superior, to classical solvers. However, these neural solvers, in general, need to be retrained each time the initial conditions or the domain of the partial differential equation changes. In this work, we posit the problem of approximating the solution of a fixed partial differential equation for any arbitrary initial conditions as learning a conditional probability distribution. We demonstrate the utility of our method on Burgers' equation. | [
"initial conditions",
"differential equations",
"lot",
"interest",
"neural networks",
"partial differential equations",
"number",
"neural",
"performances equivalent"
] | Accept (Poster) | https://openreview.net/pdf?id=SDD5n1888 | https://openreview.net/forum?id=SDD5n1888 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"QjRjNVjWP-"
],
"note_type": [
"decision"
],
"note_created": [
1582774873476
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
HzYwpRgUDy | Time Dependence in Non-Autonomous Neural ODEs | [
"Jared Quincy Davis",
"Krzysztof Choromanski",
"Vikas Sindhwani",
"Jake Varley",
"Honglak Lee",
"Jean-Jacques Slotine",
"Valerii Likhosterov",
"Adrian Weller",
"Ameesh Makadia"
] | Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks where continuous time can replace the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant memory backpropagation. Neural ODEs are universal approximators only when they are non-autonomous, that is, the dynamics depends explicitly on time. We propose a novel family of Neural ODEs with time-varying weights, where time-dependence is non-parametric, and the smoothness of weight trajectories can be explicitly controlled to allow a tradeoff between expressiveness and efficiency. Using this enhanced expressiveness, we outperform previous Neural ODE variants in both speed and representational capacity, ultimately outperforming standard ResNet and CNN models on select image classification and video prediction tasks. | [
"neural odes",
"time dependence",
"odes",
"elegant reinterpretations",
"deep networks",
"continuous time",
"discrete notion",
"depth"
] | Accept (Poster) | https://openreview.net/pdf?id=HzYwpRgUDy | https://openreview.net/forum?id=HzYwpRgUDy | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"DxH4E0chRm"
],
"note_type": [
"decision"
],
"note_created": [
1582774884912
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
qXJAqIHbXW | Comparing recurrent and convolutional neural networks for predicting wave propagation | [
"Stathi Fotiadis",
"Eduardo Pignatelli",
"Mario Lino Valencia",
"Chris Cantwell",
"Amos Storkey",
"Anil A. Bharath"
] | Dynamical systems can be modelled by partial differential equations, and numerical computations are used everywhere in science and engineering. In this work, we investigate the performance of recurrent and convolutional deep neural network architectures at predicting surface waves. The system is governed by the Saint-Venant equations. We improve on the long-term predictions of previous methods while keeping the inference time at a fraction of that of numerical simulations. We also show that convolutional networks perform at least as well as recurrent networks in this task. Finally, we assess the generalisation capability of each network by extrapolating to longer time-frames and to different physical settings. | [
"partial differential equations",
"spatiotemporal prediction",
"physical system",
"dynamical system",
"deep representation learning",
"convolutional networks",
"recurrent networks",
"lstms",
"predrnn"
] | Accept (Poster) | https://openreview.net/pdf?id=qXJAqIHbXW | https://openreview.net/forum?id=qXJAqIHbXW | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"UXcqIUDKA"
],
"note_type": [
"decision"
],
"note_created": [
1582774903062
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
p-SG2KFY2 | Differentiable Physics Simulation | [
"Junbang Liang",
"Ming C. Lin"
] | Differentiable physics simulation is a powerful family of new techniques that applies gradient-based methods to learning and control of physical systems. It enables optimization for control, and can also be integrated into neural network frameworks for performing complex tasks. We believe that differentiable physics simulation should be a key component for neural networks to bridge the gap between training performance and the generality to previously unseen real-world inputs. However, realizing a practical differentiable simulation is still challenging because of its high dimensionality and fragmented computation flow. In this paper, we motivate the importance of differentiable physics simulation, describe its current challenges, introduce state-of-the-art approaches, and discuss potential improvements and future directions. | [
"Differentiable simulation",
"implicit differentiation"
] | Accept (Poster) | https://openreview.net/pdf?id=p-SG2KFY2 | https://openreview.net/forum?id=p-SG2KFY2 | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"4a6Gv7MxWG"
],
"note_type": [
"decision"
],
"note_created": [
1582774921208
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
_uPd3skTsj | Differential Equations as a Model Prior for Deep Learning and its Applications in Robotics | [
"Michael Lutter",
"Jan Peters"
] | For many decades, much of the scientific knowledge of physics and engineering has been expressed via differential equations. These differential equations describe the underlying phenomena and the relations between different interpretable quantities. Therefore, differential equations are a promising approach to incorporate prior knowledge in machine learning models to obtain robust and interpretable models. In this paper, we summarize a straightforward approach to incorporate deep networks in differential equations to solve first-order non-linear differential equations by minimising the residual end-to-end. We describe the deep differential network that computes the functional value and smooth Jacobians in closed form. Afterwards, we demonstrate that the deep network Jacobians approximate the symbolic Jacobian and apply the proposed approach to two robotics applications. These applications use differential equations as a model prior for deep networks to learn physically plausible models and optimal feedback control. | [
"Deep Learning",
"Differential Equations",
"Physics Prior",
"Robotics"
] | Accept (Poster) | https://openreview.net/pdf?id=_uPd3skTsj | https://openreview.net/forum?id=_uPd3skTsj | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"Td00cTuXOX"
],
"note_type": [
"decision"
],
"note_created": [
1582774931838
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
f0hUCzC_y | Deep Ritz revisited | [
"Johannes Müller",
"Marius Zeinhofer"
] | Recently, progress has been made in the application of neural networks to the numerical analysis of stationary and instationary partial differential equations. For example, one can use the variational formulation of the Dirichlet problem in order to obtain an objective function – a penalised Dirichlet energy – for the optimization of the parameters of neural networks with a fixed architecture. Although this approach yields promising empirical results especially in high dimensions it is lacking any convergence guarantees. We use the notion of $\Gamma$-convergence to show that ReLU networks of growing architecture that are trained with respect to suitably penalised Dirichlet energies converge to the solution of the Dirichlet problem. We discuss how our findings generalise to arbitrary variational problems under certain universality assumptions on the neural networks that are used. We see that this covers nonlinear stationary PDEs like the $p$-Laplace. | [
"Deep Ritz method",
"$\\Gamma$-convergence"
] | Accept (Poster) | https://openreview.net/pdf?id=f0hUCzC_y | https://openreview.net/forum?id=f0hUCzC_y | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"mQ5uIYNRRm"
],
"note_type": [
"decision"
],
"note_created": [
1582774944141
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
q2noHUqMkK | Enforcing Physical Constraints in CNNs through Differentiable PDE Layer | [
"Chiyu \"Max\" Jiang",
"Karthik Kashinath",
"Prabhat",
"Philip Marcus"
] | Recent studies at the intersection of physics and deep learning have illustrated successes in the application of deep neural networks to partially or fully replace costly physics simulations. Enforcing physical constraints on solutions generated by neural networks remains a challenge, yet it is essential to the accuracy and trustworthiness of such model predictions. Many systems in the physical sciences are governed by Partial Differential Equations (PDEs). Enforcing these as hard constraints is, we show, inefficient in conventional frameworks due to the high dimensionality of the generated fields. To this end, we propose a novel differentiable spectral projection layer for neural networks that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, allowing for its use as a layer within Convolutional Neural Networks (CNNs) during end-to-end training. We show that its computational cost is lower than that of a single convolution layer. We apply it to an important class of physical systems - incompressible turbulent flows, where the divergence-free PDE constraint is required. We train a 3D Conditional Generative Adversarial Network (CGAN) for turbulent flow superresolution efficiently, while guaranteeing the spatial PDE constraint of zero divergence. Furthermore, our empirical results show that the model produces realistic flow statistics when trained with hard constraints imposed via the proposed differentiable spectral projection layer, as compared to soft-constrained and unconstrained counterparts. | [
"PDE",
"linear constraints",
"CNNs",
"turbulence",
"fluid dynamics"
] | Accept (Poster) | https://openreview.net/pdf?id=q2noHUqMkK | https://openreview.net/forum?id=q2noHUqMkK | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"X10oa5xyGk"
],
"note_type": [
"decision"
],
"note_created": [
1582774955021
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
dYH-JNPNGI | On the space-time expressivity of ResNets | [
"Johannes Müller"
] | Residual networks (ResNets) are a deep learning architecture that substantially improved state-of-the-art performance in certain supervised learning tasks. Since then, they have received continuously growing attention. ResNets have a recursive structure $x_{k+1} = x_k + R_k(x_k)$ where $R_k$ is a neural network called a residual block. This structure can be seen as the Euler discretisation of an associated ordinary differential equation (ODE) which is called a neural ODE. Recently, ResNets were proposed as the space-time approximation of ODEs which are not of this neural type. To elaborate on this connection, we show that by increasing the number of residual blocks as well as their expressivity, the solution of an arbitrary ODE can be approximated in space and time simultaneously by deep ReLU ResNets. Further, we derive estimates on the complexity of the residual blocks required to obtain a prescribed accuracy under certain regularity assumptions. | [
"Residual networks",
"Universal approximation",
"Differential equations"
] | Accept (Poster) | https://openreview.net/pdf?id=dYH-JNPNGI | https://openreview.net/forum?id=dYH-JNPNGI | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"zUPFqfn4sw"
],
"note_type": [
"decision"
],
"note_created": [
1582774969764
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
XqOseg0L9Q | Nonlinear Differential Equations with external forcing | [
"Paul Pukite"
] | Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This is in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small-angle approximation. The solutions to the partial differential equations are highly non-linear, related to Navier-Stokes, and only search approaches can be used to fit the data. | [
"partial differential equations",
"navier-stokes"
] | Accept (Poster) | https://openreview.net/pdf?id=XqOseg0L9Q | https://openreview.net/forum?id=XqOseg0L9Q | ICLR.cc/2020/Workshop/DeepDiffEq | 2020 | {
"note_id": [
"fyCmBx6xoG"
],
"note_type": [
"decision"
],
"note_created": [
1582774981553
],
"note_signatures": [
[
"ICLR.cc/2020/Workshop/DeepDiffEq/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}"
]
} |
trPMYEn1FCX | GENERATIVE MODEL-ENHANCED HUMAN MOTION PREDICTION | [
"Anthony Bourached",
"Ryan-Rhys Griffiths",
"Robert Gray",
"Ashwani Jha",
"Parashkev Nachev"
] | The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts as far as out-of-distribution (OoD). Here we formulate a new OoD benchmark based on the Human3.6M and CMU motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures to OoD failure by augmenting them with a generative model. When applied to current state-of-the-art discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in-distribution performance, and can theoretically facilitate model interpretability. We suggest human motion predictors ought to be constructed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures to extreme distributional shift. | [
"ood",
"generative",
"human motion prediction",
"task",
"human motion",
"natural heterogeneity",
"compositionality",
"actions",
"robustness"
] | Reject | https://openreview.net/pdf?id=trPMYEn1FCX | https://openreview.net/forum?id=trPMYEn1FCX | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"YidG6-w3-Y",
"UaNKPz8sASK",
"humwfu3bgwS",
"XDz_nfKRCMr",
"obHLqSSHhs5",
"sK5JWeWRW_C",
"blQNVIlMJGk",
"k6EK7t1UUkU",
"6KUeTkR9rOk"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040431839,
1606155835852,
1606155153639,
1606154808935,
1606154426349,
1603983910871,
1603974066584,
1603924732695,
1603899183409
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3828/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3828/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3828/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3828/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3828/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3828/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3828/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3828/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": [\"This paper proposes a method for out-of-distribution modeling and evaluation in the human motion prediction task. Paper was reviewed by four expert reviewers who identified the following pros and cons.\", \"> Pros:\", \"New benchmark for testing out of distribution performance [R1]\", \"Compelling performance with respect to the baselines [R1,R4]\", \"Paper is well written and easy to follow [R2]\", \"Generative model in the context of out-of-distribution modeling of human motion is novel [R1,R2,R4]\", \"> Cons:\", \"Lack of support for interpretability claim [R1]\", \"Validity and usefulness of the metric [R1]\", \"Lack of \\\"effectiveness\\\" of the proposed approach [R2,R4]\", \"Technical contributions are not significant [R3,R4]\", \"Experimental validation lacks comparisons to other state-of-the-art in motion prediction methods [R3]\", \"Lack of evaluation on additional datasets and for the main task [R4]\", \"Authors tried to address the comments in the rebuttal, but largely unconvincingly to the reviewers. On balance, reviewers felt that negatives outweighed the positives and unanimously suggest rejection. AC concurs and sees no reason to overturn this consensus.\"]}",
"{\"title\": \"We are grateful for the reviewer\\u2019s comments, which we address in detail below.\", \"comment\": \"-- OoD Benchmark--\\nWe disagree. Any good benchmark should seek to replicate the circumstances and demands of the real-world\\ntask while being maximally sensitive to the performance differences between competing models. The critical point\\nwe make at the outset of our paper is that the first property is very difficult to satisfy owing to the extreme\\nheterogeneity and complex compositionality of human actions, and interacts with the more important second\\nproperty. Any modest selection of classes is likely to contain a blend of degrees of similarity, so that the extent to\\nwhich any one class is out-of-sample with respect to the set will vary a great deal. When a set of models are\\nevaluated within a leave-one-class-out framework, performance will then be brittle, dependent on the accidental\\nhomology between the test class and one or more of the training classes. Crucially, performance in such\\ncircumstances will be less sensitive to the generalising power of the test model than when evaluated within the\\nconverse framework\\u2014testing on all after training on one\\u2014for generalisation will be easier in being informed by a\\ngreater diversity of training data. The latter framework helpfully accentuates the difference between the training\\nand testing data, and broadens the range of contrasts being evaluated. Equally, given that an action of a given\\ndistinctive morphology might be relatively rare, data efficiency is an important concern here, and ought to be\\nstress-tested by varying the quantity of training data as well as its composition.\\n\\nWe agree that the AMASS dataset would be a good candidate for further benchmarks for OoD motion, but a\\ncomparison across different sets of instances of supposedly the same action is not the critical contrast, as we\\nhave already argued. What we need is a comparison across explicitly labelled actions, for the fundamentals of\\nreal-world action render all performance subject to unquantifiable and likely substantial distributional shifts. This\\nis what our benchmark sets out to achieve.\\n\\n-- Quantifying the OoD--\", \"the_protocol_is_not_the_same\": \"we modify it so as to quantify the difference between in- and out-of-distribution\\nperformance for two models that differ only in one of them being enhanced with a generative model. The same\\napproach can be taken for other models in other contexts.\", \"on_the_reviewers_questions\": \"- On comparison to Askan et al.\\nThe cardinal characteristics of human action we draw attention to in the introduction suggest deep generative\\narchitectures are likely to provide the best means of modelling it. But the focus of this paper is the felicity of the\\nsimpler, often computationally more economical, approach of enhancing a conventional discriminative model with\\ngenerative machinery. We call it a framework because it is transferable across an array of discriminative\\narchitectures, at least when implemented with neural networks of sufficient flexibility. 
That others may achieve\\nhardening to OoD problems via kindred mechanisms\\u2014in Aksan et al.\\u2019s work the reviewer cites via explicit\\nprobabilistic modelling of joint dependence\\u2014seems to us to reinforce the general point we are making here, and\\nto justify the introduction of the OoD benchmark I think we are all agreed is overdue.\\n\\n- On how can sequential deterministic models be addressed:\\nWhile our focus is on the current SOTA model, which happens to be a GCN model, the approach is potentially\\napplicable to any discriminative model flexibly implemented as a neural network. Weight sharing with an auxiliary,\\nsubservient generative model will always be possible, and would encourage the discriminative model's sequence\\nto contain a richer description of the input. Additionally, an auxiliary generative model that generates a sequence\\nin tandem with, but separate from, the discriminative model could share its uncertainty measures and alternative\\nfutures at each point in the generation of the sequence. Whereas in our case we encourage the data flowing\\nthrough a discriminative model to double as the representation in a latent variable model, in a sequential\\ndeterministic model, a fully or partially generated sequence could similarly serve a dual role as the seed for a\\ngenerative sequential model\\u2019s sequence.\\n\\nHow these tricks are used very much depends on the experimental setup, and while it is natural to consider the\\nanalogous approaches that may be applicable to sequential models, the considerations and metrics relevant to\\nthe juxtaposition of feedforward and sequential models are not. We believe that to include this investigation would\\nincrease the complexity of this paper to the point of obfuscating its main contribution.\\n\\n-- Additional comments --\\nAmended as requested and may now be viewed in the pdf. Note we cite Martinez et al., already, but have added others: many thanks.\"}",
"{\"title\": \"We are grateful for the reviewer\\u2019s comments, which we address in detail below.\", \"comment\": \"On the contribution: The judgment of publication significance is a subjective, editorial matter. But the reviewer states our contribution only partially. The value of our work is less in the specific implementation of a generatively-enhanced model of\\nmotion\\u2014though the implementation is novel\\u2014than in the demonstration of the general value of the approach in\\nthe domain of action modelling. We do this for reasons the reviewer agrees with\\u2014the complex compositionality\\nand constitutional indeterminacy of human motion\\u2014which have not been adequately discussed in the literature,\\nand have a bearing on all models of human action. The lack of sufficient awareness of the problem is reflected in\\nthe absence of an established OoD benchmark, which we provide here: another aspect of our contribution the\\nreviewer describes as highly valuable.\", \"on_the_experiments\": \"Our objective is to demonstrate the value of enhancing a discriminative model with a generative one: the correct comparison is therefore with the same model, but without the generative machinery. We have chosen the state-\", \"of_the_art_model_at_the_time_of_submission\": \"space precludes evaluation of a range of models, but demonstrating\\nan effect on less successful models would naturally be weaker, for they leave more room for improvement. In\\nreporting model performance we follow established practice in the literature, but have now conducted additional experiments to provide confidence intervals for the h3.6m dataset results. Which may now be viewed in the pdf.\"}",
"{\"title\": \"We are grateful for the reviewer\\u2019s comments, which we address in detail below.\", \"comment\": \"The results show that our approach matches or exceeds the current state-of-the-art in individual tasks, and\\nexceeds the state-of-the-art overall. A choice between the two architectures of similar size and training\\ncharacteristics would naturally find in favour of ours, for that is what the evaluation data compel. But our aim here\\nis less to build a state-of-the-art model of motion prediction than to show that the use of a relatively simple\\ngenerative model to enhance a discriminative model of a radically different architecture improves OoD\\nperformance even when the discriminative architecture is already heavily optimised. It is the felicity of that simple\\ngeneral architectural move that we wish to highlight here, for it has implications for the design of other models in\\nthe field, indeed any model deployed on the same task. We have amended the text to make this clear.\\n\\nWe have also conducted additional experiments to provide confidence intervals for the h3.6m dataset results. Which may now be viewed in the pdf.\"}",
"{\"title\": \"We are grateful for the reviewer\\u2019s comments, which we address in detail below.\", \"comment\": \"Interpretability: Our point is that the introduction of a succinct, surveyable latent space can facilitate the identification of\\ncharacteristic patterns of motion variability by rendering them intuitively apprehensible. This is a theoretical claim\\nimplied by the fundamentals of the approach, and here we merely draw attention to an additional benefit our\\napproach could potentially bring. Limited space precludes detailed empirical exploration of its value. We have\\namended the text to clarify this point.\", \"evaluations\": \"Our objective is to demonstrate that a state-of-the-art discriminative predictive model of motion can be hardened\\nto OOD challenge by the addition of a relatively simple generative network. The primary measure of performance\\nmust therefore be the fidelity of motion prediction\\u2014not action synthesis or action recognition\\u2014for that is our task.\\nA generative model that excels in synthetic fidelity and disentanglement of the constituent actions is theoretically\\nlikely to perform well, but since the device is here deployed subserviently to a discriminative model, the\\nappropriate metric is that of the target model it serves to enhance. Indeed, Myronenko\\u2019s work suggests a\\nrelatively crude generative model of limited synthetic power can nonetheless promote a basic discriminative\\narchitecture to state-of-the-art. What seems to us striking here is that the addition to discriminative architectures\\nof fairly simple generative machinery can be remarkably useful, achieving state-of-the-art performance without\\nthe architectural complexities and training demands an accomplished generative model involves. We have\\namended the text to clarify this point.\\n\\nDifferences with Kipf & Welling, 2016: Our generative model is a variational autoencoder (Kingma & Welling) with graph convolutional layers in place of dense layers, except for immediately around the (dense) layers that produce a sample from q(z|x). Kipf & Welling's application is a link prediction task in citation networks and thus it is desired to model only connectivity in the latent space. Here we model connectivity, position, and temporal frequency. We have amended the text to clarify this.\", \"thank_you\": \"we hope our replies are satisfactory.\"}",
"{\"title\": \"Review\", \"review\": \"Summary:\\nThis paper proposes a method and benchmark for out-of-distribution modeling and evaluation of human motion. They evaluate against state-of-the-art human motion methods, and show favorable performance against them.\\n\\n\\nPros\\n+ Generative model formulation for human motion prediction\\n+ Benchmark for testing out of distribution performance in Human 3.6M and CMU-Mocap\\n+ Proposed generative model outperforms baselines\\n\\n\\nComments / Suggestions:\\n- Interpretability claim:\\nThe authors talk about facilitating interpretability in the abstract, however, I fail to find any clear experiments suggesting this. For example, I cannot find analysis of the different dimensions in the learned latent space or anything of that nature. I see section B in the supplementary material discusses interpretability, but I fail to find any clear cut results about this. Can the authors clarify how this claim is reflected in the paper?\\n\\n- Evaluations:\\nThe evaluations provided in this paper are based on euclidean distance measured with respect to the ground truth. While this metric is reasonable, it may also not be enough to evaluate a generative model of motion (e.g., there are multiple plausible futures given a single past). Given that there are clearly defined actions in the used datasets, I would suggest using a metric that measures the generated sequences as a whole. For example, one can train a motion recognition network which given a motion tells us what type of motion we are observing. The authors could train this type of network and test it on their generated motion to see if the predicted / generated motion is recognized as the right category. Another similar evaluation would be FID, where the authors can see if the predicted / generated motion distribution in feature space is close to the ground truth distribution.\\n\\n- Differences with Kipf & Welling, 2016:\\nThe authors mention that they adopt VGAE from Kipf and Welling, but I fail to find where the authors mention what are the specific differences of their method in comparison to Kipf & Welling, 2016. Can the authors clarify this or point out where the specific differences are mentioned?\", \"conclusion\": \"The proposed benchmark is interesting and useful for out-of-distribution evaluations, however, some evaluations may be missing to make this more comprehensive. The differences between the method used by the authors and the related work need to be clarified. I am willing to change my score if the authors successfully address the issues mentioned above.\\n\\n###########################\\n Post Rebuttal Comments\\n###########################\\n\\nAfter reading the rebuttal, I am keeping my original score. For my first concern, they authors mention space as being a limitation for not providing analysis on the \\\"surveyable\\\" latent space, but as far as I know, additional experiments addressing my concern could have been added to the supplementary material. For my second concern, the authors talk about excelling in synthetic fidelity, however, there are fidelity measures for generative models that were not used in this submission. MSE is not a fidelity metric. I suggest that the authors address the concerns raised by the reviewers in future submissions, and it's highly likely that the work will be more solid.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The proposed approach is reasonalbe for dealing with Out-of-Distribution (OoD) problem of human motion prediction. But the experimental results are unconvinced.\", \"review\": \"This paper presents a generative model to solve the OoD problem in human motion prediction. It extends the GCN and attention-GCN works with VGAE for predicting human motions that are different from ones used in training. Experiments are performed on H36M and CMU benchmarks for illustrating the efficacy of the proposed approach.\", \"pros\": \"1. The paper is good in writting and easy to follow the idea\\n2. The perspective of using generative model to deal with OoD problem in human motion is novel.\\n\\nCons.\\n1. My major concern is the effectiveness of the proposed approach. From the results shown in Table 3 to Table 5, we could find that the proposed approach fail to solve the OoD problem for some actions when comparing with the baseline attention-GCN. For example, in Table 5, the proposed generative model achieve poor performance than attention-GCN, such as Discussion, Posing, Purchases, Walking Dog and Walking Together (5 out of 14 acitions). These experiments could not provide convinced results to depict the efficacy of the proposed approach. I think the authors should provide more explanations on this which should make this paper stronger.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of \\\"GENERATIVE MODEL-ENHANCED HUMAN MOTION PREDICTION\\\"\", \"review\": \"The paper presents, firstly, a new benchmark (based on Human3.6M and CMU datasets) for human activity and motion with a high degree of out-of-distribution examples, and secondly a hybrid framework for human motion prediction which is more robust to out-of-distribution samples.\\n\\nOn a positive note, the presented view of human activity as highly compositional and without a clear ontology of actions and sub-actions is highly relevant, and the observed issues with the state of the art methods are completely correctly characterized. The proposed behchmark is also highly valuable to the community.\\n\\nHowever, the paper suffer from two flaws that renders it unfit for publication in ICLR in its current form. \\n* Firstly, the contribution - to combine GCN with the approach of (Myronenko 2018) to regularize the training with a generative model that takes unlabeled samples into accound, and to replace their VAE framework with a corresponding VGAE one - is not on its own significant enough to serve as a basis for an ICLR paper. \\n* Secondly, the experiments are not adequate in that the method is only compared to a GCN without the generative model - and not with any of the other state-of-the-art in motion prediction. Moreover, results are presented without standard deviations which makes it hard to determine if the improvements are significant.\\n\\nThese two flaws leads to a Reject recommendation, but the authors are highly encouraged to expand the experiments to empirically verify that the proposed contribution is significant enough to warrant publication, and resubmit to a later conference. While addressing the second flaw, it would also help convince the reader of the significance of the method contribution.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review - Generative Model-Enhanced Human Motion Predicition\", \"review\": \"## Summary:\\n---\\nThis paper raises and studies concerns about the generalization of 3D human motion prediction approaches across unseen motion categories. The authors address this problem by augmenting existing architectures with a VAE framework. More precisely, an encoder network that is responsible for summarizing the seed sequence is shared by two decoders for the reconstruction of the seed motion and prediction of the future motion. Hence, the encoder is trained by using both the ELBO of a VAE and the objective of the original motion prediction task. \\n\\n## Pros:\\n--- \\nThe paper has a novel and interesting direction as robustness to distribution shifts has not been studied before in 3D human motion modeling. It is implemented around one of the SoTA models based on Graph Convolutional Networks (GCN) using discrete cosine transformation (DCT) features extracted from the motion sequence. To simulate the out-of-distribution scenario, the baselines and the proposed extension are trained on a single action category such as walking and tested on the remaining actions such as eating and sitting. Experiment results on the H3.6M and CMU datasets show that the proposed approach is useful on out-of-distribution (OoD) test cases. \\n\\n## Cons:\\n--- \\nI have two main concerns on the proposed benchmark and the models. \\n \\n-- OoD Benchmark--\\n- It looks like there is a significant underfitting problem. The performance of GCN on the walking category is 0.56 at 400 ms while the in-distribution (ID) performance with OoD training is 0.66 (Table 1). The training split for the OoD setup proposed by the authors is possibly too small. I also can not grasp the motivation for selecting a training set \\u201cas small in quantity, and narrow in domain as possible\\u201d (Section 3). While there is not enough or barely enough training samples, the comparisons might be misleading. We do not know how the proposed extension behaves on the standard task. The authors should compare their models on the main task as well.\\n \\n- Motion samples from different categories (i.e., walking, eating, etc.) can still be useful for the models in learning the 3D human motion prior. In fact, it has been shown by Martinez et al. (2017) [4] that training motion models with _all_ available actions improves the performance significantly compared to a single-action models as done in this paper. While the proposed approach outperforms the baselines in average performance, it is not always or substantially better on the fine-grained actions. \\n \\n- It is a tedious setup, but a leave-one-action-out strategy can be more reliable. In my opinion a better option would be training on one dataset and testing on another one. This would allow for an evaluation of the existing (and even pre-trained) models directly where the proposed extension would remain as the only factor for evaluation. In the context of H3.6M and CMU datasets, this might not be straightforward due to different skeletal configurations. Yet there exists a much larger benchmark for 3D motion prediction: AMASS [1]. This would be a suitable candidate for this task as it is a collection of several diverse mocap datasets with different motion categories. It would be very easy to train on a subset of datasets and test on the remaining ones as all the datasets follow a unified skeletal configuration. 
Note that this is only a suggestion to improve the current work and I am not asking for running experiments on AMASS for the rebuttal as it would drastically change the submission.\\n \\n-- Quantifying the OoD--\\n- The existing architectures are augmented with a VAE latent space and a decoder, which is not technically novel. However, regularization of the representation space and reconstruction of the inputs as auxiliary tasks seem like helpful to the motion prediction task and a good contribution under the \\u201climited\\u201d training/evaluation protocol. At the same, time the proposed evaluation protocol is not orthogonal to the task but instead it just follows the existing task protocol. In other words, it is not an independent metric/method/framework for assessing the existing models\\u2019 OoD performance. \\n\\nI am asking the following questions as the proposed approach is presented as a \\u201cframework\\u201d:\\n\\n- The authors hypothesize that motion prediction in generative modeling frameworks can alleviate the OoD problems. I find it too broad as generative modelling can be applied in various frameworks. Although it is conceptually very different, [1] uses an auto-regressive model, which is a generative model by design, and trains the model by predicting both the seed (i.e., loosely reconstruction) and the future frames similar to the proposed approach. Can we say that they also deal with the OoD problems implicitly? How do the authors position their \\u201cframework\\u201d compared to this line of work? \\n\\n- The authors only focus on the Seq2seq-based methods for motion prediction and choose a baseline with an implicit temporal model (i.e., using DCT to encode the motion sequences). Hence, the proposed approach seems to be limited to this GCN-based architecture. How can the sequential deterministic models [1, 3, 4] be addressed? \\n \\n-- Additional comments --\\n- Figures should be improved. Especially the text is hardly readable. \\n\\n- I find it very hard to follow Section 3. It was clear only after I read the section A in the appendix. It would be clearer if some of the findings are discussed in Section 3 already. \\n\\n- The losses in the tables are too high compared to the actual task. I am not sure if there is a qualitative difference between the models as the authors did not present any qualitative results. \\n\\n- Missing related work on 3D motion modelling. I list a s,all collection of SoA representatives below:\\n \\n[1] Aksan, Emre, Manuel Kaufmann, and Otmar Hilliges. \\\"Structured prediction helps 3d human motion modelling.\\\" Proceedings of the IEEE International Conference on Computer Vision. 2019.\\n[2] Gui, L. Y., Wang, Y. X., Liang, X., & Moura, J. M. (2018). Adversarial geometry-aware human motion prediction. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 786-803).\\n[3] Pavllo, D., Grangier, D., & Auli, M. (2018). Quaternet: A quaternion-based recurrent model for human motion. arXiv preprint arXiv:1805.06485.\\n[4] Martinez, J., Black, M. J., & Romero, J. (2017). On human motion prediction using recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2891-2900).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
73WTGs96kho | Net-DNF: Effective Deep Modeling of Tabular Data | [
"Liran Katzir",
"Gal Elidan",
"Ran El-Yaniv"
] | A challenging open question in deep learning is how to handle tabular data. Unlike domains such as image and natural language processing, where deep architectures prevail, there is still no widely accepted neural architecture that dominates tabular data. As a step toward bridging this gap, we present Net-DNF, a novel generic architecture whose inductive bias elicits models whose structure corresponds to logical Boolean formulas in disjunctive normal form (DNF) over affine soft-threshold decision terms. Net-DNFs also promote localized decisions that are taken over small subsets of the features. We present extensive experiments showing that Net-DNFs significantly and consistently outperform fully connected networks over tabular data. With relatively few hyperparameters, Net-DNFs open the door to practical end-to-end handling of tabular data using neural networks. We present ablation studies, which justify the design choices of Net-DNF, including the inductive bias elements, namely, Boolean formulation, locality, and feature selection. | [
"Neural Networks",
"Architectures",
"Tabular Data",
"Predictive Modeling"
] | Accept (Poster) | https://openreview.net/pdf?id=73WTGs96kho | https://openreview.net/forum?id=73WTGs96kho | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"9lyHx3L63R",
"aJ8mkn4lHlO",
"S_JZaOaMxgk",
"q1_rl8j34jF",
"pNnNSLtmbgZ",
"9pDA55ZUfu1",
"LiMjsRKMUUI"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040526872,
1606160113879,
1606159408877,
1606157998330,
1604600661682,
1603689245162,
1603611337230
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2021/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2021/Conference/Paper1574/Authors"
],
[
"ICLR.cc/2021/Conference/Paper1574/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper1574/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper1574/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes an end-to-end architecture, Net-DNF, for handling tabular data. This is a novel approach in the relatively under-explored domain of application of neural networks; the paper also presents justification of the design choices via ablation studies. The paper is clearly written, and empirical results are convincing.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"__*However, their empirical evaluations do not place the performance of Net-DNF in the context of existing work. It is good that the authors have compared Net-DNF against the obvious baselines of XGBoost and FCN, but they have neglected to demonstrate where Net-DNF stands in relation to other tabular-inspired approaches (mentioned in the related work). For example, why did the authors not compare against Popov et al. (2019; whose code is open-sourced) and Shavitt & Segal (2018)? Nazabal, Alfredo, et al. \\\"Handling incomplete heterogeneous data using vaes.\\\" Pattern Recognition (2020)*__\\n\\nEarly experiments with the method of Shavitt & Segal (using their own code) showed results that were substantially inferior and we decided to not pursue deeper exploration of their method. Popov\\u2019s approach is similar to FCNs according to their reported performance and we felt that it is reasonable to omit this additional comparison.\\n\\n__*I feel that the authors have left out a line of work that uses a deep learning approach for tabular data. The reference below is a representative publication. (NB: The VAE approach can be evaluated on prediction tasks like Net-DNF, and it works on multi-modal data that Net-DNF purports to handle.) Could the authors comment on where Net-DNF stands relative to this line of work (and include it in their related work section)?*__\\n\\nWhile the work cited by the reviewer is certainly relevant and we should have cited it, none of the experiments considered in the work are of a similar scale to ours. In particular, the categorical/tabular variables in each of the dataset considered is extremely small: just 1 for 4 datasets and 4,6 for the two additional datasets. Thus, the setting considered is mostly non-tabular in the categorical sense of the word. Moreover, the one-hot encoding they use to cope with tabular data, while reasonable for a small number of such variables, is less so when the domain has many categorical variables.\\n\\n__*Page 1, para 2, \\\"Moreover,...multi-modal data,...is problematic\\\": How does Net-DNF handle multi-modal data (e.g., \\\"medical records and images\\\")? Would it simply encode an image as a vector of bits and stack it in input x?*__\\n\\nWe believe that our work is a step toward the ultimate goal of being able to fuse tabular data on which neural network perform poorly and other types of data (e.g. images) where NNs are the natural champions. One can envision a Net-DNF that parallels, for example a convolution network, where both representations are merged, perhaps fortified with mutual attention, and then classified/regressed using a couple of fully connected layers. The crucial point is that such a setting allows for joined, end-to-end optimization of both modalities together.\\n\\n__*Page 1, para 2, \\\"GBDTs...hard to scale\\\": How scalable is Net-DNF? What is its size complexity?*__\\n\\nGBDTs and similar approach scale badly because they need to essentially store the entire data in memory. Net-DNF nets are similar to standard neural networks in that effective optimization is carried via batches and methods like SGD. The effectiveness of this combination to learn strong models from huge datasets.\", \"from_the_paper\": \"\\\"Scaling up the gradient boosting models was addressed by several papers (Ye et al., 2009; Tyree et al., 2011; Fu et al., 2019; Vasiloudis et al., 2019). 
The most significant computational disadvantage of GBDTs is the need to store (almost) the entire dataset in memory.\\\"\\n\\n__*Page 6, Table 1 and 2. \\\"# formulas\\\" corresponds to the number of DNNFs in an Net-DNF, but what does it correspond to in an FCN? How many parameters are in each system? Wouldn't the number of parameters provide a fairer comparison of the capacities of the systems?*__\\n\\nNumber of formulas is indeed meaningless in FCNs. See parameters count in response to AnonReviewer2\\n\\n__*Page 8, Figure 2. The authors claim that space localization will improve the green line's performance (feature selection) on Syn4-6. Why don't the authors provide empirical results to show this? Seems like a straightforward data point to add.*__\\n\\nWe agree with the reviewer and, given the chance, will add this data point.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"__*In Sec 2.1, this soft approximation is a very critical part of the paper. Why do you decide to modify the constant from 1.0 to 1.5, you suggested in the Appendix that \\\"the AND gate will be closer to 1\\\", but the OR gate will be affected, right? Then OR(0,0,0,1) = tanh(0.5) = 0.4, which is pushed away from 1. I believe there are some trade-offs for this hyperparameter, how do you decide it should be 1.5 rather than other numbers, do you have supporting evidence for your decision?*__\\n\\nIt is indeed likely that the value of the bias term has certain tradeoffs. Early on in our study we reasoned that it makes sense to emphasize the case of AND(1,1,...,1) and (arbitrarily) fixed it to 1.5 without further optimization (we realized that it can potentially hurt cases like OR(1,0,0,...) and also AND(1,1,1,-1) = tanh(-5)). While we believe that this bias constant is not critical (because the first layer is trainable and can in principle compensate for any fixed bias term), in retrospect this was not a good choice due to symmetry arguments (and it justifiably raises doubts/questions like yours). At this stage it will be impossible for us to repeat the entire empirical study with bias=1 in reasonable time, but we can add a quick examination that shows that results aren\\u2019t too sensitive on average to this constant.\\n\\n__*In your feature selection part, the is a learnable parameter to control the sparsity of the selected feature. You propose to achieve this by using the elastic net regularization. Is the trainable mask shared among all the DNNF blocks or is it specific to a certain DNNF block?*__\\n\\n$w_t$ is specific to each block (we will clarify this).\\n\\n__*In your experiment table 1, some results from FCN are shown to cause OOM. I kind of get the idea that DNNF uses a more compact parameterization, but not entirely sure how it quantitatively compare with FCN. What is the parameter complexity for DNNF? Could you give a more formal explanation about why it's more compact?*__\\n\\nThe OOM was encountered when the DNF structure was replaced by a three hidden layer FCN. A fair comparison is obtained by defining the widths of the FCN layers in accordance with the widths in the Net-DNF with the given number of formulas (see Tables 1&2). To see the parameter complexity advantage of DNNF observe that in the pure DNNF block, only the first layer (which creates affine \\u201cliterals\\u201d or features) is trainable. If d is the input dimension and we consider m literals, the difference is O(dm) for DNNF vs O(dm + m^2) for the corresponding FCN.\\n\\n__*In table 3, some of the experimental results are better than XGBOOST and some are worse. I can understand that the current DNNF has not yet achieved the same performance as XGBOOST as XGBOOST has always been a go-to algorithm for tabular data. But could you explain why DNNF is much better than GAS Concentrations, is it because of some properties of this dataset? And do you have a more high-level suggestion as to \\\"under what circumstances is DNNF likely to out-perform XGBOOST\\\", what could be the reason for that? Is it because of the better generalization of the neural network?*__\\n\\nThis is an interesting open question for further research. While the inductive biases of XGBoost and Net-DNF both rely on a weighted sum of logical formulas, there are still substantial differences. For example, the formulas we use are over linear separators and XGBoost relies on decision stumps. 
Another major difference is the training process (SGD vs. decision tree learning and boosting). The family of generating distributions for tabular classification tasks is large and the labeling mechanisms can be diverse from very simple (e.g., labels defined by a very simple function of few input features) to very complex (e.g., parity-like). Thus, it is indeed likely that some datasets are better aligned with one representation or the other, but we believe that a-priori identification of what will work better is going to be quite elusive.\"}",
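A small sketch may help make the bias trade-off in the exchange above concrete. The exact Net-DNF gate parameterization is not reproduced in this thread, so the formulation below — soft gates over literals in [-1, 1] with a count-dependent offset — is our assumption, chosen only so that it reproduces the values quoted in the discussion (tanh(1.5) for a fully satisfied AND, tanh(0.5) for the reviewer's OR example); it is not necessarily the paper's Eq. (2):

```python
import numpy as np

def soft_and(x, bias=1.5):
    # Soft conjunction over literals x in [-1, 1]^d: fires (close to 1) only
    # when every literal is ~1. With bias=1.5, AND(1,1,1,1) = tanh(1.5) ~ 0.905,
    # closer to 1 than the tanh(1.0) ~ 0.762 obtained with bias=1.0.
    return np.tanh(np.sum(x) - len(x) + bias)

def soft_or(x, bias=1.5):
    # Soft disjunction: fires when at least one literal is ~1. The trade-off the
    # reviewer raises: OR(-1,-1,-1,1) = tanh(0.5) ~ 0.462, pushed away from 1.
    return np.tanh(np.sum(x) + len(x) - bias)

print(soft_and(np.ones(4)))                    # ~0.905
print(soft_or(np.array([-1., -1., -1., 1.])))  # ~0.462
```

Under this reading, raising the bias from 1.0 to 1.5 pushes a fully satisfied conjunction toward 1 at the cost of weakening barely satisfied disjunctions, which is exactly the trade-off debated above.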
"{\"title\": \"Response to Reviewer4\", \"comment\": \"__*Classical ML approaches are outperforming Net-DNF*__\\n\\nWe do not claim to consistently beat classical gradient boosting trees methods on tabular data. Instead, we are making a real step towards a unified framework where you can be competitive with these on tabular data, while also offering the benefits of neural networks when the data is different or not amenable to such methods because of its size.\\n\\n__*The experimental analyses are not convincing: The datasets are a bit limited and performance metrics are rather limited to log-loss which is difficult to interpret*__\\n\\nWe intentinally focused on large datasets where GBTs perform best, including two large historic Kaggle competitions whose sizes are 62k and 200K samples. As a result, the number of datasets is indeed limited because the computation we explored for the baselines, to ensure extra fairness. See details provided in the supplementary material (864 configurations for XGBoost, and 3300 configurations for FCNs). We agree with the reviewer that log-loss isn\\u2019t natural to interpret. But, it is the maximum likelihood function applied to test data and captures the extent to which the model fits the data. Importantly, this is the conventional/suggested metric on many of these tasks (check the *Kaggle* instructions for the competitions, e.g., the Otto competition https://www.kaggle.com/c/otto-group-product-classification-challenge/overview/evaluation).\\n\\n__*It is unclear whether adding the Net-DNF layers, in the ablation study, gives improvement due to simply an increase of model capacity or the specific architecture of Net-DNF.*__\\n\\nIn terms of model capacity/parameters, some of the FCNs we considered were substantially larger (see response to Reviewer 2), but still were far behind in terms of error performance. \\n\\n__*In Exp 7, why there is OOM despite removing additional layer ?*__\\n\\nSee response to Reviewer 2.\\n\\n__*The authors did not discuss a similar recent work: TabNet*__\\n\\nThanks for the comment. We were not aware of this paper. We will add the following paragraph: TabNet\\u2019s paper also tackles the same problem, of providing a deep-learning based predictive model for tabular data comparative or superior to gradient boosting technique. The differences are in the technique used to reach this goal. TabNet uses an attention mechanism to induce sparsity/feature selection, while Net-DNF uses DNF formulas and elastic net regularization for feature selection.\\n\\n__*The authors need to include additional datasets similar to TabNet experiments*__\\nWe will add TabNets experiments to the paper. We ran Tabnet on some of our datasets using the PyTorch implementation (https://github.com/dreamquark-ai/tabnet). Tabnet results were slightly inferior to the results we obtained for XGBoost. For example, for the Gas Concentrations we got 4.89 log-loss (vs 2.22 on XGBoost).\\n\\n__*In $R(m_t, m_s)$ I think there should not be a division by 2*__\\n\\nThis is how elastic net is derived and implemented. See, e.g. the sklearn implementation::https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html.\"}",
"{\"title\": \"Positioning w.r.t. TabNet is needed\", \"review\": \"The authors propose a end-to-end deep learning model called Net-DNF to handle tabular data. The architecture of Net-DNF has four layers: the first layer is a dense layer (learnable weights) with tanh activation eq(1). The second layer (DNNF) is formed by binary conjunctions over literals eq(2). The third layer is an embedding formed by n DNNF blocks eq(3). the last layer is a linear transformation of the embedding with a sigmoid activation eq(4). The authors also propose a feature selection method based on a trainable binarized selection with a modified L1 and L2 regularization. In the experimental analysis, Net-DNF outperforms fully connected networks.\", \"pros\": [\"A novel approach to handle tabular data using deep learning with an integrated feature selection\", \"The VC dimension bound gives theoretical motivation for expressing decision trees as DNF formulas.\"], \"cons\": [\"Classical ML approaches are outperforming Net-DNF\", \"The experimental analyses are not convincing: The datasets are a bit limited and performance metrics are rather limited to log-loss which is difficult to interpret\"], \"remarks\": [\"It is unclear whether adding the Net-DNF layers, in the ablation study, gives improvement due to simply an increase of model capacity or the specific architecture of Net-DNF.\", \"In Exp 7, why there is OOM despite removing additional layer ?\", \"The authors did not discuss a similar recent work: TabNet (https://arxiv.org/abs/1908.07442)\", \"The authors need to include additional datasets similar to TabNet experiments\", \"in $R(m_t, m_s)$ I think there should not be a division by 2\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper attempts to propose a neural network-based algorithm on tabular data, or non-distributed representation to address the unique challenges in tabular data, which are non-existing in the distributed data (images, language, etc). The primary motivation of this paper is the simulation of DNF, which simulates the Boolean formulas in decision making. The DNF form can better capture the non-distributional nature of tabular data and somehow simulates with the decision-tree algorithms in a more soft-threshold way. The general idea makes sense to me and sounds novel compared to the existing literature. However, I still have some doubts and questions about the paper:\\n1) In Sec 2.1, this soft approximation is a very critical part of the paper. Why do you decide to modify the constant from 1.0 to 1.5, you suggested in the Appendix that \\\"the AND gate will be closer to 1\\\", but the OR gate will be affected, right? Then OR(0,0,0,1) = tanh(0.5) = 0.4, which is pushed away from 1. I believe there are some trade-offs for this hyperparameter, how do you decide it should be 1.5 rather than other numbers, do you have supporting evidence for your decision? \\n2) In your feature selection part, the $w_t$ is a learnable parameter to control the sparsity of the selected feature. You propose to achieve this by using the elastic net regularization. Is the trainable mask $w_t$ shared among all the DNNF blocks or is it specific to a certain DNNF block? \\n3) In your experiment table 1, some results from FCN are shown to cause OOM. I kind of get the idea that DNNF uses a more compact parameterization, but not entirely sure how it quantitatively compare with FCN. What is the parameter complexity for DNNF? Could you give a more formal explanation about why it's more compact?\\n4) In table 3, some of the experimental results are better than XGBOOST and some are worse. I can understand that the current DNNF has not yet achieved the same performance as XGBOOST as XGBOOST has always been a go-to algorithm for tabular data. But could you explain why DNNF is much better than GAS Concentrations, is it because of some properties of this dataset? And do you have a more high-level suggestion as to \\\"under what circumstances is DNNF likely to out-perform XGBOOST\\\", what could be the reason for that? Is it because of the better generalization of the neural network?\", \"typo\": \"2.4 Spacial Localization -> Spatial Localization\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Novel neural architecture emulating decision trees/forest, but with gaps in empirical evaluation.\", \"review\": \"This paper proposes a neural architecture that emulates the characteristics of decision-tree variants, in the hope of mirroring their successes on tabular data. The architecture consists of three components: DNNF blocks, feature-selection masks, and spacial-localization weightings of DNNF blocks. I find these components and their coherent combination into the Net-DNF structure to be novel.\\n\\nHowever, their empirical evaluations do not place the performance of Net-DNF in the context of existing work. It is good that the authors have compared Net-DNF against the obvious baselines of XGBoost and FCN, but they have neglected to demonstrate where Net-DNF stands in relation to other tabular-inspired approaches (mentioned in the related work). For example, why did the authors not compare against Popov et al. (2019; whose code is open-sourced) and Shavitt & Segal (2018)?\\n\\nI feel that the authors have left out a line of work that uses a deep learning approach for tabular data. The reference below is a representative publication. (NB: The VAE approach can be evaluated on prediction tasks like Net-DNF, and it works on multi-modal data that Net-DNF purports to handle.) Could the authors comment on where Net-DNF stands relative to this line of work (and include it in their related work section)?\\n\\nNazabal, Alfredo, et al. \\\"Handling incomplete heterogeneous data using vaes.\\\" Pattern Recognition (2020)\\n\\nThe exposition of the paper is lucid throughout.\\n\\nQUESTIONS\\n\\n* Page 1, para 2, \\\"Moreover,...multi-modal data,...is problematic\\\":\\nHow does Net-DNF handle multi-modal data (e.g., \\\"medical records and images\\\")? Would it simply encode an image as a vector of bits and stack it in input x?\\n\\n* Page 1, para 2, \\\"GBDTs...hard to scale\\\":\\nHow scalable is Net-DNF? What is its size complexity?\\n\\n* Page 6, Table 1 and 2.\\n\\\"# formulas\\\" corresponds to the number of DNNFs in an Net-DNF, but what does it correspond to in an FCN? How many parameters are in each system? Wouldn't the number of parameters provide a fairer comparison of the capacities of the systems?\\n\\n* Page 8, Figure 2.\\nThe authors claim that space localization will improve the green line's performance (feature selection) on Syn4-6. Why don't the authors provide empirical results to show this? Seems like a straightforward data point to add.\\n\\nNITS\\n\\n* Page 2, first line: inherent GBDTs -> inherent in GBDTs\\n \\n* Page 6, In is evident -> it is evident\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
mNtmhaDkAr | Predicting Inductive Biases of Pre-Trained Models | [
"Charles Lovering",
"Rohan Jha",
"Tal Linzen",
"Ellie Pavlick"
] | Most current NLP systems are based on a pre-train-then-fine-tune paradigm, in which a large neural network is first trained in a self-supervised way designed to encourage the network to extract broadly-useful linguistic features, and then fine-tuned for a specific task of interest. Recent work attempts to understand why this recipe works and explain when it fails. Currently, such analyses have produced two sets of apparently-contradictory results. Work that analyzes the representations that result from pre-training (via "probing classifiers") finds evidence that rich features of linguistic structure can be decoded with high accuracy, but work that analyzes model behavior after fine-tuning (via "challenge sets") indicates that decisions are often not based on such structure but rather on spurious heuristics specific to the training set. In this work, we test the hypothesis that the extent to which a feature influences a model's decisions can be predicted using a combination of two factors: The feature's "extractability" after pre-training (measured using information-theoretic probing techniques), and the "evidence" available during fine-tuning (defined as the feature's co-occurrence rate with the label). In experiments with both synthetic and natural language data, we find strong evidence (statistically significant correlations) supporting this hypothesis. | [
"information-theoretical probing",
"probing",
"challenge sets",
"natural language processing"
] | Accept (Poster) | https://openreview.net/pdf?id=mNtmhaDkAr | https://openreview.net/forum?id=mNtmhaDkAr | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"yRJ4ibxazn2",
"AG-R-yVgLgZ",
"VZYXhT1h0mX",
"18rswl9jEWJ",
"f4O-QfyBJ0",
"-NuFfD4ESy9",
"OThGqa9L_Uw",
"mAuPTI8l369",
"alY7rJr7SLJ",
"coTE0yHcX4O",
"7wU0SXnwEO",
"VGTqco_38th",
"CsE0g3JfN-x",
"wjiGejIhgG",
"bzqPfJ0T1rN"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040415986,
1606201802409,
1606201769085,
1606201730590,
1606201693985,
1606201658140,
1606201588635,
1606201399464,
1605651026241,
1605647631226,
1605631329179,
1605631233723,
1603912616350,
1603820231092,
1603264215668
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3826/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3826/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3826/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper studies the features extracted by the pre-trained language model and how fine-tuning makes use of these features. The paper is well-motivated by two lines of research in the NLP area -- 1) probing approaches for understanding the features extracted in the pre-training model, 2) model behavior analysis that shows models take shortcuts for making predictions. The paper provides a comprehensive study to bridge the gaps between these two lines of discussion.\\n\\nAll the reviewers agree the paper has strong merits and concerns have been addressed.\"}",
"{\"title\": \"(2/7)\", \"comment\": \"### Reviewer #1\\n***The two sets of experiments are described as \\u201csynthetic\\u201d versus \\u201cnatural language\\u201d -- but if I'm understanding correctly, the natural language examples are generated synthetically. If this is correct, then the current framing of the distinction is misleading.***\\nWe appreciate the point and we will reframe this terminology for the camera-ready.\\n\\n***Figure 2 is difficult to interpret, and the placement of the legend is odd-looking and confusing. Fig 3 is also pretty difficult to extract information from -- generally presentation of information could be made clearer for the reader.***\\nWe moved the legend in Figure 2 so that the charts are fully visible. We also updated Figure 3 by giving each subplot its own row and adding the axes that were previously only detailed in the captions. Beyond this, we now use the same scale for each subplot, share the axis labels, and use log scales. We are happy to make additional changes here.\\n\\n***The wording on p3 can be taken to imply that MDL was introduced by Voita & Titov (2020). I would recommend rephrasing and/or also citing earlier MDL references.***\\nWe updated the MDL references to cite an earlier work and clarify the contribution of Voita and Titov.\"}",
"{\"title\": \"(3/7)\", \"comment\": \"### Reviewer #2\\n***Regarding the synthetic nature of the task***\", \"quoting_our_previous_comment\": \"There are important differences between our setup and naturally occurring data, but we believe it\\u2019s important to establish a relationship between extractability and downstream feature usage in a relatively simple setting as a precursor to exploring more complex hypotheses.\\n\\nStill, we appreciate your points on this (regarding the templatic sentences and the nature of our target feature), and we will include more discussion on the subject in the camera-ready. For example, we will explicitly call out some of our simplifying assumptions (we have started to add this discussion in the latest version).\\n\\n***The natural language examples are important as they go beyond synthetic data and closer to a naturalistic scenario. However, these are still templatic sentences and synthetic in a sense. I wouldn't call these naturalistic examples. Ideally, experiments on naturally occurring data would be more convincing. Or, at the very least a discussion of this issue should be made.***\\nWe appreciate the point. We will reframe our terminology for the camera-ready and add a discussion of this issue (see above note).\\n\\n***The assumption that the extent to which a model uses a feature can be measured by the spurious-only error rate (at some spurious-only occurrence rate) is questionable in my opinion. In a very clean setting like the synthetic data, I could maybe accept it. But, \\\"using\\\" is in fact a causal concept, while a causal mechanism has not been demonstrated. The paper alludes to this point in the discussion, but I think the discussion around this point should be expanded, and the strong claims should be rephrased or modulated.***\\nWe agree that \\u201cuse\\u201d is a causal concept, and that we\\u2019ve been using it informally. We will rephrase in camera ready to use more careful phrases such as \\u201cmakes predictions consistent with feature use\\u201d , and explicitly highlight the fact that we are not making a causal argument. We discuss our use of the spurious-only rate below.\\n\\n***The paper makes the assumption that the target feature t and the label are the same. I am not convinced about the \\\"without loss of generality\\\" claim. In practice, it is not easy to isolate a feature t that is identical with the label. How would this assumption affect the generalization of the approach to more realistic scenarios?***\\n\\n***The task is a binary classification task. The features holding is also binary, that is either a feature holds (1) or not (0). But, suppose the label is 0, then the t feature is also 0, meaning it does not hold. This seems contrary to what is meant. This could be a confusion on my part.***\\n\\nCorrect -- we make the simplifying assumption in this setting that the target feature is identical with the label. We agree that this has limitations and will include a discussion of this for camera-ready (see above note). One could extend our framework to consider more complex models of target and spurious features, but we leave this to future work.\\n\\nThe \\u201cwithout loss of generality\\u201d was intended to account for features that are exactly predictive of the label but not identical (i.e. the feature which is always the opposite of the label). In this case, we can use this feature to construct a target feature that is identical to the label (by taking the opposite of the feature). 
We hope this clears up any confusion, and we have updated the text with some language around the w.l.o.g. claim.\\n\\n***Why is MDL computed by training a classifier to distinguish s-only from neither, and not from some other part of S?***\\n\\nTo measure extractability of the spurious feature, we chose not to train a classifier to distinguish between s-only and t-only for consistency with the second set of experiments. In that setting, generating t-only examples is difficult (see note below).\\n\\nAdditionally, we found evidence that distinguishing between s-only and both would require sensitivity to the target feature, instead of the spurious feature (in the experiments in the Appendix that you reference).\"}",
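For context on the MDL measurements discussed throughout this thread, here is a minimal sketch of the prequential (online) codelength of Voita & Titov (2020), which the responses rely on; the `train` and `codelength_bits` callables are placeholder names of ours, not the authors' code:

```python
import numpy as np

def prequential_mdl(blocks, train, codelength_bits, n_classes=2):
    # blocks: list of (X, y) chunks in transmission order (increasing sizes).
    # train(chunks): fits a fresh probe on the concatenation of `chunks`.
    # codelength_bits(model, block): sum of -log2 p_model(y|x) over the block.
    _, y0 = blocks[0]
    mdl = len(y0) * np.log2(n_classes)        # first block: uniform code
    seen = [blocks[0]]
    for block in blocks[1:]:
        model = train(seen)                   # probe trained on data sent so far
        mdl += codelength_bits(model, block)  # cost of encoding the next block
        seen.append(block)
    return mdl
```

Note that nothing in this construction caps the per-block cost at the uniform baseline, which is why an overfit probe can yield codelengths above the uniform code, as debated around footnote 3 below.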
"{\"title\": \"(4/7)\", \"comment\": \"### Reviewer #2 (continued)\\n\\n***Footnote 3 is concerning - Aren't MDLs higher than a uniform code meaningless?***\\nWe include experiments in the (Appendix F, Fig 10) that indicate that these high MDL\\u2019s are indeed the result of overfitting by the model on the early blocks. As discussed in Footnote 3, the information-theoretic interpretation of these cases is that the model learns an encoding that is worse than the uniform baseline on these blocks. We appreciate your point here; if the MDL\\u2019s are higher than the uniform code, then they are no longer \\u201cminimum\\u201d in some sense. \\n\\nHowever, for the sake of comparison to other work, we follow the existing definition of the prequential code. Additionally, the model\\u2019s overfitting is a potentially useful signal to include in our definition of extractability, as it indicates that small amounts of data mislead the model. \\n\\n\\n***The both-error subplot in figure 2 shows a slight increase in error rate with large s-only rate. Does this mean that the model has (falsely) learned to reject the example when s is in it? That is, it has learned another spurious feature, just in the other direction, instead of learning to rely on t.***\\n\\n***A similar pattern is found in the t-only error subplot. There, even with high s-only rate, the models don't classify t-only examples correctly. I wonder why this plot is different from the s-only error plot, as this shows a directional behavior. Some discussion of this would be useful.***\\nIndeed, the model in the first set of experiments (Section 3) seems to adopt new incorrect heuristics before learning the target feature . As you point out, the model incorrectly classifies some \\u201cboth\\u201d examples, evidence that it is starting to learn the incorrect heuristic that the presence of the spurious feature implies a negative label.\\nWe also agree the behavior in the t-only plot is interesting. The high t-only error could indicate that the additional \\u201cs-only\\u201d examples help the model become sensitive to the target feature in some cases (when the spurious feature is present) but not others (when the spurious feature is not present). \\nFrom the perspective of the training data available to the model, both of these hypotheses are equally supported:\\n\\n1. Target \\u2194 1\\n2. (Target ^ Spurious) \\u2194 1\\n\\nThe second hypothesis (2) is wrong, but because we do not provide target-only examples, it is not disambiguated from (1). It is unclear when the model would (or should) choose (1) over (2). Assuming high performance on the other partitions, the target-only error indicates which of the two hypotheses the model uses (or is consistent with). Figure 2 shows that the target feature extractability orders the features\\u2019 target-only error. Our results suggest that the easier the target feature is to extract, the greater the extent a model will tend towards (1).\\nWe believe that both these takeaways point to the limitations of data augmentation, which is an interesting avenue for future work. While this is not the focus of this paper, we agree that it\\u2019s important to mention these patterns. We will call out the U-shaped curve in the \\u201cboth\\u201d plot (though we don\\u2019t find the curves to be substantial enough to make any claims here). 
And we will briefly discuss the disparity between the s-only and t-only graphs that you describe and potential implications for the effectiveness of data augmentation.\\n\\n***There seems to be a stark contrast between the contains-1 feature and the other three, both in terms of MDL and in figure 2. Is it possible to show a more gradual behavior between the two extremes?***\\nWe agree that greater variation in the first set of experiments (Section 3) would bolster our argument (we will acknowledge this in the camera ready as a direction for future work), but we are encouraged by the greater variation in the natural language experiments.\"}",
"{\"title\": \"(5/7)\", \"comment\": \"### Reviewer #2 (continued)\\n\\n***Why are the training sets so small? How does this affect MDL numbers and their validity? Apropos footnote 1.***\\nOur templates (grammars) and lexicons do not allow us to generate datasets much larger than what we use. A larger dataset would be better, although small datasets are not uncommon in the pre-train/finetune paradigm. \\n\\nMDL can be thought of as the cost of encoding a dataset, so a smaller dataset results in a lower MDL. However, we use the same dataset size for each of our experiments, accounting for this effect. \\n\\nAdditionally, in most cases, the models have good classification performance with access to the complete training set, so we hypothesize that the cost of encoding additional, larger blocks will be relatively low and will not significantly affect the MDL. We find evidence for the limited effect of larger blocks in experiments that we have added to the Appendix where we investigate the cost of encoding each block (for the features in Section 3, Figure 1,2).\\n \\n***Why exactly is it hard to generate t-only examples? The appendix is indeed helpful in making sure the MDL(t) calculation method is legit, but more clarification around this issue would be good.***\\nTarget features may be unavoidably linked to spurious ones. For example, for a Negative Polarity Item to be licensed (perhaps smoothing over some intricacies) the NPI (\\u201cany\\u201d, \\u201call\\u201d, etc) must be a downward entailing context. These downward entailing contexts are created by triggers, e.g., if a negative word like \\u201cno\\u201d or \\u201cnot\\u201d or a quantifier like \\u201csome\\u201d. Linguists who study the problem have assembled a list of such triggers (see [Hoeksema (2012)](http://www.let.rug.nl/hoeksema/NPI-types.pdf]). Arguably, one cannot write down a correct example of NLP licensing that doesn\\u2019t contain one of these memorizable triggers. Thus, we cannot train or test models on correct examples of NPI usage while simultaneously preventing it from having access to trigger-specific features.\\n\\nSimilar to the NPI example, it's not possible (to our knowledge) to construct target-only examples for filler-gap since construction requires a wh-word and syntactic gap; thus, we can\\u2019t create a positively labeled, grammatical sentence that exhibits a Filler Gap without these elements. Similar arguments hold for learning in general, not just NLP. E.g., [Carey (2009)](https://link.springer.com/article/10.1007/s11191-010-9307-2) makes the argument about how monkey\\u2019s learn eyetracking: \\u201c...every time eyes are pointed at an object, so are mouths and noses, yet...monkeys avoided the competitor that was looking at the grape, not whose mouth was pointing at the grape.\\u201d\\n\\nIn summary, target-only examples may add new spurious features (as with NPI), or be impossible to construct because the presence of the target feature implies the presence of the spurious feature (as with filler gaps). Still, our setup permits the MDL to be computed directly with target-only examples, and so, in cases where it is feasible to create target-only examples (e.g. the Subject-Verb Agreement templates), it would have bolstered our argument to do so. We plan on including this in-depth discussion in the camera-ready appendix.\"}",
"{\"title\": \"(6/7)\", \"comment\": \"### Reviewer #2 (continued)\\n\\n***The classifier is not so simple (LSTM + 1-layer MLP). Why is that? How does the identity of the classifier affect the results?***\\nThe features we use in the first set of experiments (Section 3) requires that our classifiers represent token order. A bag-of-words or n-gram model would, for example, not be able to learn the \\u201cfirst-last\\u201d pattern (whether the first token is equal to the last) and the \\u201cprefix-duplicate\\u201d pattern (whether the first two tokens are equal). The \\u201cfirst-last\\u201d pattern requires the model to track both the first and last token (and whether or not they match). A typical n-gram would not be able to access this information. Likewise, a count-based representation like a bag-of-words would not track what the first and last values. \\nAn n-gram model could have a high recall for the \\u201cprefix duplicate\\u201d patten, but would not be able to fully learn the feature: An n-gram could only detect if that there is a duplicate, but it could not disambiguate duplicates that occur in positions in the prefix verus anywhere else in the sentence.\\nThat all being said, we appreciate your point and acknowledge that it would be ideal if we tested more classifiers in the first set of experiments. We have added more models to our second set of experiments (GPT2 and RoBERTa) to further confirm that our results are not model-dependent. (For this second set of experiments we ran pilot studies with BOW and CNN-based models, and found that both were unable to solve the task. We make note of this in footnote 5.)\\n\\n***Here, s-only error is used as \\\"the use of the spurious feature\\\", but this is only one aspect in which a model may make use of s\\u2026 The discussion touches upon this point by acknowledging that the work does not establish a causal relationship between extractability and feature use. I'd go even further and say that \\\"feature use\\\" should be defined in causal terms.***\\nWe agree that feature-use ought to be defined in causal terms, and we\\u2019re interested in making this connection. That being said, in this paper we did not do this, nor did we mean to suggest that our work was establishing a causal relationship. A better term might be \\u201ccompatible\\u201d or \\u201cconsistent\\u201d. As mentioned above, we plan replace the phrase \\u201cfeature usage\\u201d with something more carefully framed, and will add this discussion to the camera ready.\\nWe also agree that s-only error only captures \\u201cone aspect in which a model may make use of s.\\u201d Describing the s-only error as \\u201cthe use of the spurious feature\\u201d was intended to develop the reader\\u2019s intuition of our findings and goals. However, for the camera-ready version, we will add discussion that s-only error does not in itself represent \\u201cuse of the spurious feature\\u201d and we will point to the results on the other partitions of the data in the Appendix.\\n\\n***What is the performance (F-score) for determining s-rate\\u2605? Is that on s-only examples, other examples? Why the shift to F-score now?***\\nWe set the threshold to be a 0.99 F-score across the test set, which includes spurious-only, neither, and both examples. This is written in the body of the text (Section 4.3), and we now include it in Figure 3 as well. We shift to F-score (rather than accuracy or spurious-only error) to ensure that the model is performing well on all available sections of the data. 
\\nFigure 2 asks if the relative MDL correlates with s-rate\\u2605. At an F-score of 0.99 the model is consistent with the target feature (on the available partitions of the dataset). Using instead the spurious-only error, for s-rate\\u2605, could be misleading, as the spurious-only error could be low, but all other partitions of the data could have high error. We were doubly concerned about this because s-rate\\u2605 is coarse, aggregating information across multiple runs. This makes it harder to spotcheck or visualize. Lastly, we use F-score not accuracy because the labels are not balanced on the test set (as we have 1000 each of both, neither, and spurious-only).\\nOn the other hand, we show the s-only error in the line plots (Figure 4) (question above) because it\\u2019s easier to interpret than an aggregated metric. Furthermore, the s-only error is a fair representation of the model performance because the models generally perform well on the other partitions (we show all results in the Appendix).\"}",
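The order-sensitivity argument in the response above can be checked directly: two sequences with the same bag of tokens can disagree on the "first-last" feature, so no function of the bag alone can compute it. A tiny self-contained illustration, using toy sequences of our own rather than the paper's data:

```python
from collections import Counter

def first_last(seq):
    # The "first-last" target feature: does the first token equal the last?
    return seq[0] == seq[-1]

a = [3, 7, 5, 3]  # first == last -> feature holds
b = [3, 3, 5, 7]  # same multiset of tokens, but first != last

assert Counter(a) == Counter(b)        # identical bag-of-words representation
assert first_last(a) != first_last(b)  # ...yet opposite feature values
```

Any bag-of-words classifier must assign `a` and `b` the same prediction, so it cannot separate them — which is why the experiments use order-aware models such as an LSTM.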
"{\"title\": \"(7/7)\", \"comment\": \"### Reviewer #2 (continued)\\n\\n***Discuss y-axis differences in Figure 3c. BERT needs much less evidence than (some cases of) GloVe and T5.***\\nAt face value, it seems that BERT requires much less data than T5 to capture our target features. However, we are wary about making such strong claims. Something to consider here (noted in the Appendix hyperparameters) is that for T5 we used a linear model on top of T5\\u2019s pretrained encodings, rather than formatting the task in text (which is how T5 is trained). We made this decision (1) because we had trouble training T5 in this purely textual manner, and (2) using a linear classification head over two classes is consistent with the other model architectures. \\nWe added further experiments with GPT2 and RoBERTa. They performed on par with BERT, so the difference between the performance of BERT and T5 may be due to how we trained T5. We added the GPT2 and RoBERTaresults to the main body of the paper, and added a brief discussion in the paper about this possible explanation for the performance of T5.\\n\\n***The term \\\"learning curves\\\" for figure 4 is confusing: those aren't results during training, right?***\\nCorrect and agreed -- we updated the terminology there.\"}",
"{\"title\": \"Additional Points\", \"comment\": [\"## (1/7)\", \"In this response (and the following) we respond to the additional points raised by the reviewers. We also itemize the corresponding changes in our paper.\", \"### Paper Changelist\", \"Incorporated some of the high-level responses into the paper\\u2019s discussion section.\", \"Added additional discussion on the assumptions of our setting in the final paper (re. real-world data).\", \"Updated paper with correct MDLs. In the second set of experiments (Section 4), they were scaled down by approximately the batch-size. This changed the correlations slightly, but did not change any patterns or conclusions.\", \"Updated Figure 2 to move the legend to the bottom.\", \"Updated Figure 3 to give each subplot its own row.\", \"Updated Figure 3: we now use the same scale for each subplot, share the axis labels, and use log scales.\", \"Updated footnote regarding the definition of a target feature.\", \"Supplemented the results with experiments on GPT2 and RoBERTa; added discussion of differences between T5 and other models.\", \"Updated the description of the \\u201clearning curve\\u201d.\", \"Added a figure showing the overfitting on early blocks for MDL on the first set of experiments (Section 3) to the Appendix.\", \"Fixed a typo that indicated we had 16 not 20 feature pairs in the Appendix.\"]}",
"{\"title\": \"Fixed IDs\", \"comment\": \"Fixed \\u2014 thanks!\"}",
"{\"title\": \"Refer to reviewers by their designated IDs\", \"comment\": \"Hi, it seems like you're numbering reviewers by the order in which the reviews appear, while it would be better to use the reviewer number assigned by OpenReview. So, the order is 1,2,3, but the assigned IDs are actually 1,3,2. This would help navigate the comments.\"}",
"{\"title\": \"We contribute to an active research direction in NLP\", \"comment\": \"This response focuses on the concerns of @R1/@R3.\\n\\nThere is active interest in the problems we explore in this submission. Concurrent work [1], which we became aware of after submission, investigates when the inductive biases of large pre-trained models (RoBERTa) shift from a surface-based feature (what we call spurious features) to a linguistic-based feature (what we call a target feature). In our work, we further show how to predict which of these two biases most characterize the model and the generalizations it acquires . Our approach is not analytic, but still, we are able to predict this inductive bias without evaluating on a downstream task . From a technical perspective, this work connects the recent wave of probing results with actual model behavior, which has previously been wanting. E.g., another recent work, [2] strives to close this loop by connecting structural analysis (of which probing is an example) to behavioral analysis (model predictions), by finding neurons that are causal with respect to the model predictions.\\n\\n@R3 \\u201cReal-world data may have various spurious features and it is possible that not one feature alone is playing a role in pushing the model to rely on spurious features.\\u201d \\n\\nWe agree that real-world data will not always have simple target and spurious features. However, our definition of a feature accommodates multiple spurious or target features. We could construct the spurious or target feature in our setup to be a combination of several features. In fact, some of our spurious features are already defined in this way: the lexical feature, for example, is defined as a combination of several individual-word features because it holds if one of a set of words is in the sentence. This type of spurious feature is common in real datasets: E.g., the hypothesis-only baseline in NLI is a disjunction of lexical features (with semantically unrelated words like \\u201cno\\u201d, \\u201csleep\\u201d, etc.) [3, 4]. \\n\\nThere are important differences between our setup and naturally occurring data, but we believe it\\u2019s important to establish a relationship between extractability and downstream feature usage in a relatively simple setting as a precursor to exploring more complex hypotheses. Still, we appreciate your point and will include discussion of the assumptions of our setting in the final paper.\\n\\n[1] Warstadt, Alex, et al. \\\"Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually).\\\" arXiv preprint arXiv:2010.05358 (2020).\\n[2] Vig, Jesse, et al. \\\"Causal mediation analysis for interpreting neural nlp: The case of gender bias.\\\" arXiv preprint arXiv:2004.12265 (2020).\\n[3] Gururangan, Suchin, et al. \\\"Annotation artifacts in natural language inference data.\\\" arXiv preprint arXiv:1803.02324 (2018).\\n[4] Poliak, Adam, et al. \\\"Hypothesis only baselines in natural language inference.\\\" arXiv preprint arXiv:1805.01042 (2018).\"}",
"{\"title\": \"Our findings are non-obvious and have important implications\", \"comment\": \"Thank you for your detailed reviews. In this (and the next) response we focus on the high-level concerns of R1/R3. We\\u2019ll address the itemized concerns of R2 in a separate response.\\n\\n@R1/@R3 ask if our hypothesis is too obvious a finding. We believe our hypothesis is intuitive, but it is not tautologically true. \\n\\n@R1: \\u201cThe experiments are seemingly demonstrating that the more readily a classifier is able to pick up on a given feature, the more readily another classifier will use that feature during learning.\\u201d \\n@R3: \\u201c...the findings are quite expected\\u2026 MDL is likely to look at the same things as the model is looking at since both of them are based on the same training data and input features.\\u201d\", \"we_find_evidence_for_the_hypothesis\": \"The more extractable a target feature is compared to a spurious feature, the less training evidence needed for a model to use the target feature (and generalize). An implication of this is that the harder feature can be obscured completely by a spurious one; i.e., there are settings in which the model just won't adopt the harder feature at all. This is different from the alternative interpretation to which the reviewers refer, in which the harder feature is learned later than the spurious one, but is still learned.\\n\\nIndeed, we find that BERT & T5 are both entirely capable of learning the target feature: both perfectly solve the classification task when the label can be predicted only from the target feature or only from the spurious feature (See footnote 1). But, even though the harder (target) features are present (and learnable), the models do not learn them when the target feature is relatively more difficult to extract than the spurious features.\\n\\nThus, @R1\\u2019s paraphrase that \\u201cthe more readily a classifier is able to pick up on a given feature, the more readily another classifier will use it\\u201d misses a crucial insight. Our results show that if one classifier does not pick up on a feature readily enough, another classifier (or, rather, the same classifier trained with different data) may not be sensitive to that feature at all. (And, equally importantly, if a feature is readily-enough detected, the classifier will be sensitive to it even without overt statistical incentive to prefer it. This is a type of inductive bias that is very desirable for many language learning problems.) This finding changes how we view fine-tuning, which is generally considered to be beneficial because it allows models to learn new, task-relevant features. Our findings suggest that if the needed feature is not already \\u201creadily enough available\\u201d after pre-training, fine-tuning will not have the desired effect. \\n\\nWe will include discussion on these points in the paper, highlighting the distinction between this work and previous findings.\\n\\n(Footnote 1): *BERT, T5, get >0.99 accuracy on all feature pairs when testing the target feature in isolation. This control study also illustrates why the glove-based LSTM failed: The GloVe model only solved the target task in isolation for only 60% (12/20) feature pairs.*\"}",
"{\"title\": \"Clearly defined hypothesis, but limited contribution; finding seems to support existing knowledge/assumptions\", \"review\": \"After reading author responses:\\n\\nThank you to the authors for your detailed responses. With regard to the highlighted implication that \\\"the harder feature can be obscured completely by a spurious one; i.e., there are settings in which the model just won't adopt the harder feature at all\\\" -- to clarify, while my phrasing may not have made this apparent, I was assuming this implication in my interpretation of the results. So my impression of the finding is not changed substantially by the author response. However, I do want to give appropriate acknowledgment of the value of explicitly testing/confirming intuitive explanations of model behaviors, and it is clear that other reviewers find value in the contribution, so I am bumping my score up a bit.\\n\\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\n\\nThis paper addresses a seeming contradiction between findings that indicate encoding of linguistic information in models' internal representations, and findings that show models not to use more sophisticated linguistic information during fine-tuning on downstream tasks. The paper hypothesizes that a model's use of a given feature can be explained as a function of extractability of the feature in combination\\u00a0with the amount of evidence for that feature's predictive reliability. The authors test on toy, non-language data as well as natural language data, and find support for their hypothesis.\\u00a0\\n\\nAll in all I think this is a reasonably clear and well-written paper, with a concrete and intuitive\\u00a0hypothesis. My main concern is that the motivating issue is a bit of a strawman, in that the posited explanation\\u00a0was fairly obvious as a means of reconciling the \\\"contradiction\\\" raised at the start of the paper. I can't speak for the rest of the community, and it may be that this is something that people have found puzzling -- but speaking for myself I can say that I haven't at any point considered the highlighted \\\"contradiction\\\" to be a contradiction, having simply assumed something like the explanation hypothesized in this paper. Now, there is of course value in providing concrete evidence supporting intuitive assumptions made by the community. However, as the authors point out, related\\u00a0intuitions have already been supported by, e.g., evidence that models will more readily pick up on \\\"easy\\\" examples over \\\"difficult\\\" examples. So it's not clear to me that the paper is making a sufficiently novel, surprising contribution at present.\\n\\nI think one way in which these findings would be more compelling would be if the measure of extractability were defined independently of empirical classifier sensitivity. As it is, the experiments are seemingly demonstrating\\u00a0that the more readily a classifier is able to pick up on a given feature, the more readily another classifier will use that feature during learning. I have to assume that this will strike most readers as obvious. However, if extractability/MDL were measured independently of classification performance, then we would presumably learn some interesting and valuable things about what determines extractability for these models.\", \"smaller_notes\": \"The two sets of experiments are described as \\\"synthetic\\\" versus \\\"natural language\\\" -- but if I'm understanding correctly, the natural language examples are generated synthetically. 
If this is correct, then the current framing of the distinction is misleading.\\n\\nFigure 2 is difficult to interpret, and the placement of the legend is odd-looking and confusing. Fig 3 is also pretty difficult to extract information from -- generally presentation of information could be made clearer for the reader.\\n\\nThe wording on p3 can be taken to imply that MDL was introduced by Voita & Titov (2020). I would recommend rephrasing and/or also citing earlier MDL references.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper aims to bridge the gap between model interpretation using probing and model's use of spurious features. They show that the findings of MDL with respect to a feature correlate with the extractability of the feature, given the evidence of representing the feature is available in the training data. The results are presented using both synthetic and natural language data.\", \"review\": \"The paper aims to bridge the gap between model interpretation using probing and model's use of spurious features. They show that the findings of MDL with respect to a feature correlate with the extractability of the feature, given the evidence of representing the feature is available in the training data. The results are presented using both synthetic and natural language data.\\n\\nI really like the premise of the paper, which is connecting the research on the linguistic learning of a model with the presence of important and spurious features in the data.\\n\\nOne issue I have with the work is the simplistic assumptions that are likely to be different in the real-world data. Real-world data may have various spurious features and it is possible that not one feature alone is playing a role in pushing the model to rely on spurious features. It can be a combination of spurious features plus the relative presence of important features. It is hard to imagine how this method will scale to real-world datasets. I would like the authors to comment on it.\\n\\nMoreover, the findings are quite expected. In general, the probing methods including MDL were mainly aimed at analyzing the linguistic learning of the representations. In that case, MDL is scoring the representations with respect to various linguistic properties. Here, the authors are using MDL to look at how input features are represented in the model. Statistically, MDL is likely to look at the same things as the model is looking at since both of them are based on the same training data and input features. Please comment on this, in case I misunderstood the point.\", \"minor_comments\": [\"what is the reason for low performance when using Glove?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Important work filling a gap in current NLP interpretability literature\", \"review\": \"# Summary\\nThis paper studies the relationship between extractability of features from pre-trained representations and how much a fine-tuned model uses that feature. The extractability of features is measured by the minimum description length of a probing classifier trained to detect the feature from the pre-trained representations (using the online code version of Voita and Titov). The degree to which a fine-tuned model uses the feature is measured by the amount of evidence required for a model to tease apart spurious from non-spurious features (called \\\"target\\\" features). Evidence here means examples where a spurious feature occurs but a non-spurious feature does not occur. When there are many such examples (high spurious-only rate), it is easier for a model to reject the spurious feature and learn to rely on the target feature. The \\\"degree to which a fine-tuned model uses a feature\\\" is defined as the minimal spurious-only rate at which the model can accomplish the task. \\n\\nThe paper has two kinds of experiments, on synthetic and more natural data. The synthetic data are sequences of symbols where the task is to identify simple properties like occurrence or repetition of symbols. The experiments are set up such that varying rates of spurious-only examples are presented during training, providing increasing amounts of evidence against the spurious feature (presence of the symbol 2) and in favor of the target feature. The target feature is identical to the label, that is, it is 1 when the example corresponds to the label and 0 otherwise. The paper reports extractability of the spurious and target features via the MDL of a probing classifier. The metric of interest is the relative MDL, where higher means the feature is more extractable. When the features are more extractable, less evidence is required for the model to reject spurious features. With less extractable features, more evidence is required. \\n\\nThe natural language examples are made with acceptability judgements of examples generated by grammars for three linguistic phenomena (subject-verb agreement, negative polarity items, and filler gap dependencies). Here again the setting is similar, modulu a tweak on how to calculate extractability. The main result here is high (negative) correlation between extractability and evidence required for rejecting the spurious feature. \\n\\n# Main comments\\n1. This paper fills an important gap in the NLP interpretability literature that has recently been a cause of concern in the community. On the one hand, probing classifiers tell us something about the existence (and more recently the extractability) of properties in pre-trained models' representations. But they do not tell us whether a model uses those properties. One the other hand, many challenge sets and test suites tell us whether a model can successfully perform a task requiring some linguistic property. The paper aims to connect these two aspects, and it does so quite convincingly, although I have some reservations below. \\n2. The experimental setup is well designed. The use of synthetic data allows a fairly clean setup where spurious and non-spurious features are distinct and simple. The experiments of training with increasing amount of spurious-only examples are instructive. \\n3. The natural language examples are important as they go beyond synthetic data and closer to a naturalistic scenario. 
However, these are still templatic sentences and synthetic in a sense. I wouldn't call these naturalistic examples. Ideally, experiments on naturally occurring data would be more convincing. Or, at the very least, a discussion of this issue should be made. \\n4. The paper makes use of recent advances in interpretability work, including information-theoretic probing, and draws connections to a broad range of related work. \\n5. The assumption that the extent to which a model uses a feature can be measured by the spurious-only error rate (at some spurious-only occurrence rate) is questionable in my opinion. In a very clean setting like the synthetic data, I could maybe accept it. But, \\\"using\\\" is in fact a causal concept, while a causal mechanism has not been demonstrated. The paper alludes to this point in the discussion, but I think the discussion around this point should be expanded, and the strong claims should be rephrased or modulated. \\n \\n\\n# Questions and other comments\\n1. The paper makes the assumption that the target feature t and the label are the same. I am not convinced about the \\\"without loss of generality\\\" claim. In practice, it is not easy to isolate a feature t that is identical with the label. How would this assumption affect the generalization of the approach to more realistic scenarios? \\n2. The task is a binary classification task. Whether a feature holds is also binary; that is, either a feature holds (1) or not (0). But, suppose the label is 0, then the t feature is also 0, meaning it does not hold. This seems contrary to what is meant. This could be a confusion on my part. \\n\\n## Synthetic data\\n3. Why is MDL computed by training a classifier to distinguish s-only from neither, and not from some other part of S? \\n4. Footnote 3 is concerning - Aren't MDLs higher than a uniform code meaningless? \\n5. The classifier is not so simple (LSTM + 1-layer MLP). Why is that? How does the identity of the classifier affect the results? \\n6. The both-error subplot in figure 2 shows a slight increase in error rate with large s-only rate. Does this mean that the model has (falsely) learned to reject the example when s is in it? That is, it has learned another spurious feature, just in the other direction, instead of learning to rely on t. \\n7. A similar pattern is found in the t-only error subplot. There, even with high s-only rate, the models don't classify t-only examples correctly. I wonder why this plot is different from the s-only error plot, as this shows a directional behavior. Some discussion of this would be useful. \\n8. There seems to be a stark contrast between the contains-1 feature and the other three, both in terms of MDL and in figure 2. Is it possible to show a more gradual behavior between the two extremes? \\n\\n## Natural language examples\\n9. Why are the training sets so small? How does this affect MDL numbers and their validity? Apropos footnote 1. \\n10. Why exactly is it hard to generate t-only examples? The appendix is indeed helpful in making sure the MDL(t) calculation method is legit, but more clarification around this issue would be good. \\n11. Here, s-only error is used as \\\"the use of the spurious feature\\\", but this is only one aspect in which a model may make use of s. It may be that a model makes more complicated use of s, when s is found in combination with t. The discussion touches upon this point by acknowledging that the work does not establish a causal relationship between extractability and feature use. 
I'd go even further and say that \\\"feature use\\\" should be defined in causal terms. \\n12. What is the performance (F-score) for determining s-rate*? Is that the performance on s-only examples? On other examples? Why the shift to F-score now?\\n13. Discuss y-axis differences in figure 3c. BERT needs much less evidence than (some cases of) GloVe and T5. How does that impact the analysis?\\n14. The term \\\"learning curves\\\" for figure 4 is confusing: those aren't results during training, right? They are results after training, each time with a different rate of s-only examples.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
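Since the record above leans heavily on the online-code (prequential) MDL estimate of Voita & Titov (2020), a schematic of that computation may help readers navigate the thread. It is our illustration under stated assumptions, not the authors' implementation: `blocks` is an ordered partition of the probing data, and `train_probe` is any probe-fitting routine exposing an sklearn-style `predict_proba`.

```python
import numpy as np

def online_mdl(blocks, train_probe, n_classes=2):
    """Prequential codelength in bits: encode the first block with a uniform
    code, then encode each subsequent block with a probe trained on all data
    seen so far. A lower codelength means the feature is easier to extract."""
    (x0, y0), *rest = blocks
    codelength = len(y0) * np.log2(n_classes)      # uniform cost of block 0
    seen_x, seen_y = x0, y0
    for x, y in rest:
        probe = train_probe(seen_x, seen_y)        # fit on everything seen so far
        p = probe.predict_proba(x)[np.arange(len(y)), y]
        codelength += float(-np.log2(p).sum())     # bits to transmit this block
        seen_x = np.concatenate([seen_x, x])
        seen_y = np.concatenate([seen_y, y])
    return codelength
```

The relative-MDL quantity discussed in the reviews then compares such codelengths for the target and spurious features.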
oev4KdikGjy | FMix: Enhancing Mixed Sample Data Augmentation | [
"Ethan Harris",
"Antonia Marcu",
"Matthew Painter",
"Mahesan Niranjan",
"Adam Prugel-Bennett",
"Jonathon Hare"
] | Mixed Sample Data Augmentation (MSDA) has received increasing attention in recent years, with many successful variants such as MixUp and CutMix. We analyse MSDA from an information theoretic perspective, characterising learned models in terms of how they impact the models’ perception of the data. Ultimately, our analyses allow us to decouple two complementary properties of augmentations that are useful for reasoning about MSDA. From insight on the efficacy of CutMix in particular, we subsequently propose FMix, an MSDA that uses binary masks obtained by applying a threshold to low frequency images sampled from Fourier space. FMix improves performance over MixUp and CutMix for a number of models across a range of data sets and problem settings, obtaining new state-of-the-art results on CIFAR-10 and Fashion-MNIST. | [
"msda",
"cutmix",
"models",
"fmix",
"mixup",
"attention",
"recent years",
"many successful variants"
] | Reject | https://openreview.net/pdf?id=oev4KdikGjy | https://openreview.net/forum?id=oev4KdikGjy | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"SzlYErOabIc",
"PU90baw-jx",
"4LqULTxDfzy",
"WVZWjaBqXq",
"E0PGOzov59J",
"LScbdKncqrb",
"eo0Js1bg_2v",
"dDAQqn0WkeE",
"ND9Xrp8upVc",
"OM9oOeLT16",
"3u8821TDeo1"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040511684,
1606209845047,
1606206137160,
1605638482531,
1605638279070,
1605638017534,
1605637835977,
1603935083285,
1603928436742,
1603880189684,
1603836246961
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3825/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3825/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3825/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3825/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3825/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3825/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3825/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3825/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3825/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3825/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper analyzes the space of mixed sample data augmentation approaches, and proposes a new variant, FMix, based on a new masking strategy. Reviewers point to the fact that FMix is only marginally better than previous approaches, that the experimental setup is unconvincing, and that the proposed analysis might not be grounded. This is a really borderline paper but I see the issues as more important than the benefits, so I recommend rejection.\"}",
"{\"title\": \"Thank you for your reply\", \"comment\": \"Thank you for your comments and for acknowledging the mistake in the initial review. We would like to take this opportunity to try to explain the story of how the paper came about and hopefully provide some insight into why we feel our work is of value.\\n\\nOur first motivations were to try to understand how mixup manages to work so well despite the mixed images not seeming to truly resemble the data. Furthermore, we were confused as to how cutmix can have the same effect despite the cutmixed images looking so different from the mixup images. This is what we feel our MI analysis starts to expand on. Actually, cutmix and mixup do very different things to the models trained with them but __both__ of those things can improve performance. Mixup prevents the model from learning about specific features (hence the lower information between the input and latent space) whereas cutmix simulates learning from the real data whilst preventing memorisation (more like a traditional augmentation such as random flipping would).\\n\\n__How FMix can solve the problems in theory or in conceptually?__\\n\\nGiven this new understanding, we wondered whether there was some way to improve cutmix to make the augmented images resemble the real data even more. That is the problem FMix tries to solve by removing the horizontal and vertical edge artefacts from cutmix. Our belief is that cutmix biases models towards these edges as they are a guaranteed feature of the data and learning about them would reduce the loss since these edges can tell you how much of each source image is present in the input (a key part of the objective). In contrast, the edges in FMix are so inconsistent that learning about them is a much harder task and so the network is forced to learn about the actual data, rather than the augmentation.\\n\\n__Performance__\\n\\nRegarding performance our above understanding tells us that each of the MSDAs will be advantageous in different settings. This is because whether you would rather avoid specific features or avoid memorisation will depend heavily on the data set, model etc. Although FMix does not always provide a significant performance boost over the alternatives it is an important option that is sometimes the difference (as with the Bengali competition) between mediocre performance and prize winning performance. Ultimitely, we have struggled with comments regarding performance as we have tried to be as unbiased as possible in our presentation of the results by reporting all of the experiments we performed and not aggresively tuning our method over the others. If we had omitted the cases where our method didn't win and devoted a lot of resources to finding the best FMix parameters in every setting we no doubt would have seen more impressive numbers but at the unacceptable (to us) cost of no longer giving an honest reflection of the real world performance of FMix.\"}",
"{\"title\": \"Thanks for your answers\", \"comment\": \"I thank the authors for answering my questions.\\nAfter carefully reading the rebuttal and other reviews, I think my main concern still remains.\\n\\nFirst, I'm still confusing how MI / adversarial analyses and the proposed random mask from Fourier space (btw, thanks for correcting my mistake). What does the MI analysis reveal? How the proposed method can solve or improve the proposed analyses? \\n\\nIn the rebuttal, the authors claimed:\\n\\n> However, we do not claim this at any point in the paper. We state: \\u201cThe results show that MixUp consistently reduces the amount of information that is learned about the original data. In contrast, CutMix manages to induce greater mutual information with the data than is obtained from just training on the un-augmented data\\u201d.\\n\\nTo me, it is still a confusing argument. How FMix can solve the problems in theory or in conceptually? If the authors can answer this question before the final deadline (sorry for my late), it will be very delightful to me.\\n\\n\\nFurthermore, as my first review, \\\"I wonder what is the advantage to use FMix comparing to CutMix / Mixup if FMix shows worse performance than CutMix / Mixup.\\\"\\n\\nI don't think performance is everything but in this case, I'd expect one of strong theoretical support, conceptual inspiration, or significant performance gap.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for their comments. Our responses are as follows:\\n\\n__\\u201cwhat then explains the improved generalization accuracy MixUp showcases in their original paper [given that we claim MixUp reduces the amount of information that is learned about the original data]\\u201d__\\nWe take a reduction in the amount of information to indicate more compressed representations, which, from an information theoretic point of view explains the improvement in generalization.\\n\\n__\\u201c[the adversarial attack experiment] does not answer the original question of whether MixUp gives rise to practical differences other than just improved generalisation.\\u201d__\\nThe worst-case analysis only serves to summarize the results from the individual attacks. The individual results do illuminate differences between the trained models and particularly that MixUp trained models are the most different from the baseline (e.g. in robustness to DeepFool). That said, we will tone down the language here to make this clearer. We can move the robustness experiments to the appendix if the reviewer feels it would help. We would like to thank the reviewer for helping us create a more concise and stronger argument for our approach. \\n\\n__\\u201cThe finding that MixUp yields greater ImageNet-A robustness (presented later in the paper) also contradicts this early claim.\\u201d__\\nSince the ImageNet-A data is generated for a particular model (ImageNet trained ResNet-50 w/o MSDA) the \\u2018increased ImageNet-A robustness\\u2019 is perhaps better described as the MixUp model making mistakes in a different way to a baseline model. To determine robustness, we would need to re-generate the ImageNet-A set from each model (in which case we would expect to see all of the model's performance reduced to near zero). We will improve the writing here to make this more clear. \\n\\n__\\u201c[the paper] describes an experiment in which combining FMix+MixUp gives the best results (presumably because their representations of data are different and therefore combining them would yield the best of both worlds). This seems to contradict the previous adversarial analysis in which MixUp was found to not yield significantly more robustness.\\u201d__\\nThe point of the experiment where we combine masking and interpolation is simply to show that the differences between interpolation and masking can be jointly exploited. This does not contradict the robustness experiments since they make no claim about the generalization performance of models trained with the different methods. \\n\\n__\\u201cFurther, the combination experiment has the two leading combination methods (FMix+MixUp and CutMix+MixUp) yield very similar results (within the margin of error), which opens the question of whether FMix meaningfully improves over CutMix.\\u201d__\\nWe would like to stress that we did not experiment with multiple ways of combining MSDA that may or may not lead to different results. As such, although this is a valid observation, many more experiments would be needed to properly assess the various possible combinations of different MSDAs and we are reluctant to make any strong claims regarding the results from this section.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for their comments. Our responses are as follows:\\n__[Contradictory results between mutual information and performances]__\\n\\nWe would like to clarify that the mutual information was not intended or expected to directly correlate with generalization performance. Our intention was merely to explore how learned representations are influenced by the different forms of MSDA. We make no claim that increasing or decreasing the mutual information measure will have a strong impact on performance. Instead, we contend that MixUp works by forcing the model to ignore sample specific features (thus learning compressed representations \\u2013 the reason for discussing the information bottleneck theory) and that CutMix works by mimicking the real data whilst preventing example memorization. The result for FMix only serves to validate that FMix does a good job of mimicking the data, not as indicator of performance. \\n\\nOur intention with the adversarial robustness experiments was to show how these differences between learned functions (evidenced with our MI experiments) correspond to practical differences in how the models are impacted by out of distribution data (adversarial examples).\\n\\n__[Weak logical connection between the motivation and the method]__\\nUnfortunately, we believe the reviewer misunderstood both our approach and our motivation. The masks in FMix are not sampled by a low-pass filter of a particular image. Instead, we sample masks __randomly__ from Fourier space. Figure 1 in the paper shows examples of the masks we use. \\n\\nRegarding the motivation, we would understand the reviewer\\u2019s concerns had we indeed claimed that our purpose was \\u201cenhancing mutual information between input and augmented images\\u201d. However, we do not claim this at any point in the paper. We state: \\u201cThe results show that MixUp consistently reduces the amount of information that is learned about the original data. In contrast, CutMix manages to induce greater mutual information with the data than is obtained from just training on the un-augmented data\\u201d. Our analysis, as explained in the introduction of Section 5, concerns the learned representations and not the augmented images. We will reiterate this in the last part of this section to avoid future confusions. We hope this clarifies our approach and the connection between the motivation and the method.\\n\\n__[Related works]__\\nA comparison to masks generated using additional models (and, thus, significant additional computation) does not seem fair to us. The purpose of our proposed augmentation is to increase performance with little to no additional impairments and we choose to compare to other methods that do so.\\n\\n__[Too small performance gap]__\\nWe appreciate that not all results show dramatic improvement over the alternatives, however, this could also be said for both CutMix and MixUp. Whilst FMix may not improve performance across the board, we believe it is better to have the option rather than not. Additionally, we believe that significance is best reflected in the impact on the community. FMix has been the starting point for further publications (e.g. FMixCutMatch [1]) and was also used by the second place team in the BengaliAI Kaggle competition. Please note that all of this was done independently of us. \\n\\nPuzzleMix doesn\\u2019t report the batch size used and performs other modifications to the training procedure. 
such as resizing images differently depending on the epoch and learning-rate jumps beyond cosine annealing with a warm start. These methods were explicitly chosen [https://arxiv.org/pdf/2001.03994.pdf] to speed up ImageNet training, which, whilst useful, is not the same as reporting performance on the base task. Furthermore, the resulting improvement over CutMix is under 0.5% despite supervised optimization of masks...\\n\\n__[Potential issues in VAE analysis]__\\nWe disagree with the statement that \\u201cIf the VAE is not optimized well, the analysis will not be convincing enough.\\u201d For any VAE, our method allows a comparison to be made. Although the values will likely change with different architectures, the ordering should not - it would be wrong to assert that a more optimized VAE would necessarily be more informative. One interesting potential direction would be to vary the architecture to understand how these values change with weaker / stronger models. That said, we will include VAE-generated samples and performance metrics in an appendix.\\n\\n__[Minor Comments]__\\nCutMix is explicitly defined as a 2D method. Sentiment analysis augmentation is a 1D task.\\n\\n[1] Wei, X., Wei, X., Kong, X., Lu, S., Xing, W., & Lu, W. (2020). FMixCutMatch for semi-supervised deep learning. Neural Networks.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for their comments. Our responses are as follows (numbered in order of appearance):\\n1. Both suggested approaches would likely provide some improvement. However, the purpose of our proposed augmentation is to increase performance with little to no additional resources or computation (the approach from context encoders would require additional loading and manipulation of mask images). As such, we only compare to other methods for which this is true. That said, these are valid suggestions, we will add them to the future work.\\n2. It is correct to say that our approximation of the MI is in fact an upper bound. However, the divergence between the marginal $p_{Z_A}$ and the prior is implicitly minimized during training. Furthermore, there is no reason to think that this value should deviate greatly between independent runs since this part of the objective can be satisfied arbitrarily during training without hurting any other (it is always possible to translate / scale the conditional distributions such that the marginal more closely fits the prior without losing any information about the input). We did consider an alternative which would use an unbiased sample from the marginal obtained by independently sampling an additional data point and passing that through the model (an approach used by InfoVAE). Please let us know if you would find this to be more convincing.\\n3. Some of our early experiments did use MINE, however, we found that it was prone to instability issues and getting reliable results that were consistent across runs was virtually impossible. We will add some discussion of these early investigations to the paper.\\n4. Regarding the point about generator quality, it is true that a more powerful VAE would retain more information about the data. However, our primary concern was with assessing the learned representations rather than the images themselves. For any fixed model architecture, independent of its performance we can obtain values which permit relative comparisons. One interesting potential direction would be to vary the architecture to understand how these values change with weaker / stronger models.\\n5. This should say \\u201cpost-processing cannot increase information about the input\\u201d. Non-deterministic post-processing can increase entropy but not the mutual information about the input (e.g. given an X, no function X -> Y can yield a value which has more information about X than X does).\\n6. As mentioned in the beginning of the phrase, CutMix is limited to using square masks only. As evidenced in our paper, using a larger variety of masks can improve performance.\\n7. We do not plan to run additional experiments with such a resource-intensive setting as it is unclear what the scientific gain is in doing so (we would argue that ImageNet is a bad test of an augmentation approach in many ways since it is already pre-augmented with $\\\\approx 1.2$ million examples). We focused on providing experiments for a wide range of applications and data sets and we believe this is more valuable than having a limited pool of experiments with even larger batch sizes and more epochs.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"We thank the reviewer for their comments. Our responses to the stated weaknesses are as follows:\\n1. We would like to clarify that the mutual information was not intended or expected to directly correlate with generalization performance. Our intention was merely to explore how learned representations are influenced by the different forms of MSDA. We make no claim that increasing or decreasing the mutual information measure will have a strong impact on performance. Instead, we contend that MixUp works by forcing the model to ignore sample specific features and that CutMix works by mimicking the real data whilst preventing example memorization. The result for FMix only serves to validate that FMix does a good job of mimicking the data, not as indicator of performance. We would welcome any suggestions on how this could be made clearer in our work.\\n2. We appreciate that not all results show dramatic improvement over the alternatives, however, this could also be said for both CutMix and MixUp. Whilst FMix may not improve performance across the board, we believe it is better to have the option rather than not. Additionally, we believe that significance is best reflected in the impact on the community. FMix has been the starting point for further publications (e.g. FMixCutMatch [1]) and was also used by the second place team in the BengaliAI Kaggle competition. Please note that all of this was done independently of us.\\n3. As mentioned in the paper, ImageNet performance is heavily dependent on hyperparameter choices. The results we provide were obtained with different hyperparameters than those used in the MixUp / CutMix papers due to computational restrictions. Importantly, our results __do not__ contradict the results from the respective works \\u2013 it is simply the case that training with different hyperparameters will yield different results. The only alternative would be to not include the ImageNet results. However, we feel it is important to report all results obtained regardless of their potential for controversy.\\n4. We will reference saliency-based augmentation methods in our related work section. However, we reserve the discussion and comparison to other mixed sample augmentations that are similar in both approach and computational requirements (PuzzleMix requires costly additional computation that limits its value in practice).\\n\\n[1] Wei, X., Wei, X., Kong, X., Lu, S., Xing, W., & Lu, W. (2020). FMixCutMatch for semi-supervised deep learning. Neural Networks.\"}",
"{\"title\": \"A new variant of cutmix but the improvement is marginal\", \"review\": \"In this work, authors provide an analysis of mutual information for MSDA and the develop a new variant of mixup. The effectiveness is demonstrated by experiments.\\n\\nStrength\\n1.\\tAuthors study the difference between masking MSDA and interpolative MSDA, which is helpful for understanding the power of mixup and its variants.\\n2.\\tThey develop a new augmentation method and improve the performance of masking MSDA.\\n\\nWeakness\\n1.\\tThe proposed measurement is not helpful for designing new methods. Note that the mutual information in mixup is lower than baseline while mixup still outperforms baseline.\\n2.\\tCompared to mixup and cutmix, the improvement reported in Table 2 is marginal.\\n3.\\tThe experiments on ImageNet is unconvincing. Both of mixup and cutmix are worse than baseline, which contradicts the existing results.\\n4.\\tThere lacks the discussion for the saliency based mixup methods, e.g., Puzzle Mix [1]. It is closely related to fmix but equipped with a learnable strategy to obtain patches for mixing.\\n\\n[1] J-H Kim, et al. Puzzle mix: Exploiting saliency and local statistics for optimal mixup\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review: FMix: Enhancing Mixed Sample Data Augmentation\", \"review\": \"This paper introduces a new mixup method that builds masks by first sampling a grey-scale mask from fourier space, which is subsequently transformed into a binary mask. This improves results against several baselines and achieves state-of-the-art on a few important vision benchmarks.\\n\\nMy first remark is about the masking. The procedure seems fine, but why not compare to the way masks are sampled in context encoders [1]. This seems like an important baseline masking method to compare to. In addition, one could try sampling masks from a standard segmentation model, e.g., R-CNN.\\n\\nMy second remark is about the MI bounds. On page 14 in the Appendix, you state that the MI between Z_A and X_hat is approximately equal to the KL divergence between the posterior and the normal distribution, but in general this wont be true as in training the Gaussian mixture p_Za wont match the normal distribution, so you have an upper bound. So you have a lower bound of an upper bound to the MI, not a lower bound.\\n\\nI'm curious though why not just use one of the recent neural estimators, e.g., found in [2] or [3]. In general, using VAEs for MI estimators depends heavily on the quality of the generator, so these neural estimators might be better suited.\", \"other_comments\": \"\", \"p1\": [\"\\\"'post-processing cannot increase information'\\\" if such processing is deterministic, no?\"], \"p2\": \"* \\\"CutMix imposes an unnecessary limitation\\\": what limitation? Could you clarify?\\n\\nFinally, do you plan to have updated results that compare to the 1024 batch size / 300 epoch settings?\\n\\n[1] Pathak, Deepak, et al. \\\"Context encoders: Feature learning by inpainting.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\\n[2] Belghazi, Mohamed Ishmael, et al. \\\"Mine: mutual information neural estimation.\\\" arXiv preprint arXiv:1801.04062 (2018).\\n[3] Poole, Ben, et al. \\\"On variational bounds of mutual information.\\\" arXiv preprint arXiv:1905.06922 (2019).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Weak logical connection between the motivation and the method, too small performance gap\", \"review\": \"This paper proposes an advanced masking strategy for CutMix augmentation based on the low-pass filter. The authors provide an interesting mutual information analysis for different augmentation strategies to describe their motivation. The experiments include many vision tasks (CIFAR-10, CIFAR-100, Fashion-MNIST, Tiny-ImageNet, ImageNet, Bengali datasets) and language tasks (Toxic, IMDb, Yelp).\\n\\n**Pros**\\n\\n\\\\+ The mutual information analysis provides us a new perspective to understand different data augmentations.\\n\\n\\\\+ Various experiments.\\n\\n**Cons**\\n\\n**[Contradictory results between mutual information and performances]**\\nIf we believe the VAE experiments in section 3, we have another paradox: the mutual information measurement and the real performance are not related.\\nTable 1 shows that in terms of mutual information, MixUp < Baseline < CutMix (and < FMix with a very small gap).\\nHowever, many experiments in this paper show that baseline < mixup < cutmix in terms of the performances.\\nThis paper cites information bottleneck theory to justify the deceases shown by Mixup, but it is still contradictory to the performances.\\nIt makes me confused to understand the meaning of mutual information. What is good for an augmentation method if we have high or low mutual information? It is still unclear to me.\\n\\nA similar comment also can be applicable to the \\\"adversarial robustness\\\" experiments. Aside from that mixup is hard to say \\\"adversarial training\\\" (what is the threat model in this scenario?), I feel that this result is irrelevant to FMix motivation.\\n\\n\\n**[Weak logical connection between the motivation and the method]**\\nIn my opinion, the connection between the analysis in the motivation and the proposed method is too weak. This paper proposes a CutMix variant where the mask is sampled by a low-pass filter. Why the low-pass filter approach can solve the motivation, i.e., enhancing mutual information between input and augmented images? There could be other possible variants as discussed in my \\\"related works\\\" comment\\n\\n\\n**[Related works]**\\nThere are a few CutMix variants that employ a non-random masking strategy. Especially, I believe these two variants, which have similar motivation, should be compared:\\n\\n- Walawalkar, Devesh, et al. \\\"Attentive Cutmix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification.\\\" ICASSP 2020\\n- Kim, Jang-Hyun, Wonho Choo, and Hyun Oh Song. \\\"Puzzle mix: Exploiting saliency and local statistics for optimal mixup.\\\" ICML 2020\\n\\nwhere Attentive CutMix uses CAM to extract masks, and PuzzleMix employs an optimization problem to optimize masks.\\nIf it is possible, please provide more comparison between these two papers.\\n\\n\\n**[Too small performance gap, less convincing experiments]**\\nIn Table 2, the performance gap between FMix and CutMix is too small, usually less than 0.3%. Note that the performance gaps are almost neglectable in these tasks.\\n\\nFurthermore, FMix is often worse than CutMix in many tasks (Table 2 TinyImageNet, Table 3 ImageNet-A, Table 4, CIFAR-10H Table 6). I wonder what is the advantage to use FMix comparing to CutMix if FMix shows worse performance than CutMix.\\n\\nEspecially, I believe Table 3 is problematic. This paper argues that \\\"Mixup uses 1024 batch size and CutMix uses 300 epochs\\\". 
However, in PuzzleMix Table 5, CutMix-trained ResNet50 (top1 err 22.92) outperforms baseline ResNet50 (top1 err 24.31) with only 100 epochs. Thus, to me, this table is not convincing enough.\\n\\n\\n**[Potential issues in VAE analysis]**\\nThe mutual information analysis relies heavily on the learned VAE model. I wonder about the quality of the images generated by the VAE, both qualitatively (please provide generated samples in the supplementary) and quantitatively (e.g., FID).\\nIf the VAE is not optimized well, the analysis will not be convincing enough.\\n\\n\\n**Minor comments**\\n- Why are CutMix experiments missing from Table 5?\\n- I suggest avoiding the words \\\"clear\\\" and \\\"clearly\\\".\\n\\n---\\n\\n**Post-rebuttal update**\\n\\nMy main concerns in the initial review were threefold:\\n- Potential flaws in the analyses based on the VAE and adversarial attacks\\n- Unclear connection between the MI analysis and the proposed method\\n- Small performance gap, and even sometimes worse performance, compared to the baseline methods (Mixup, CutMix)\\n\\nAfter having discussions with the authors, I will keep my initial score because:\\n- I am still confused about the MI-based analysis conclusion. The authors mentioned *\\\"We make no claim that increasing or decreasing the mutual information measure will have a strong impact on performance. Instead, we contend that MixUp works by forcing the model to ignore sample specific features (thus learning compressed representations \\u2013 the reason for discussing the information bottleneck theory) and that CutMix works by mimicking the real data whilst preventing example memorization.\\\"* in the rebuttal, but these two conclusions do not follow trivially (for me) from the MI analysis.\\n- Even if we ignore the first part, my second concern still remains. The authors mentioned *\\\"That is the problem FMix tries to solve by removing the horizontal and vertical edge artefacts from cutmix. Our belief is that cutmix biases models towards these edges as they are a guaranteed feature of the data and learning about them would reduce the loss since these edges can tell you how much of each source image is present in the input (a key part of the objective).\\\"*. But if this paper assumes that the rectangular masking strategy of CutMix introduces a bias, then I think other CutMix variants such as AttentiveCutMix or PuzzleMix should be considered as comparison methods. Hence, I disagree with this statement: *\\\"A comparison to masks generated using additional models (and, thus, significant additional computation) does not seem fair to us.\\\"*\\n- For my last concern, the small performance gap, the authors claimed that this method *\\\"was also used by the second place team in the BengaliAI Kaggle competition\\\"*. This is good evidence that FMix can sometimes benefit real-world applications, but I think more evidence is needed that FMix really solves the problems of previous MSDAs in a given scenario, e.g., the edge bias pointed out by the authors.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good paper, but concerns in the conclusions from the analyses\", \"review\": \"The paper presents an interesting analysis of CutMix and MixUp data augmentation techniques. It also presents an improvement to CutMix that removes the horizontal/vertical axis bias. The idea to use fourier noise to construct masks for a variant of CutMix is interesting and well-motivated.\\n\\nMy main concern is whether the conclusions drawn by the analyses are fully grounded. The paper performs an analysis of the effect of the augmented data on learned representations by training unsupervised models on the augmented or clean data and measuring their mutual information. This analysis has the undesirable property of not matching the supervised case in a number of ways, such as different learning objectives, model architectures, etc. Even ignoring this, if we take the result that \\u201cMixUp consistently reduces the amount of information that is learned about the original data\\u201d, what then explains the improved generalization accuracy MixUp showcases in their original paper?\\n\\nMoreover, after claiming that the analysis indicates that MixUp learns different representations, the paper asks \\u201cwhether these different representations learned from MixUp give rise to practical differences other than just improved generalisation.\\u201d The issue is that they do this via an adversarial attack analysis, rather than a more realistic non-worst-case robustness analysis. This leads to the conclusion \\u201cMixUp (...) does not correspond to a general increase in robustness.\\u201d But it does not answer the original question of whether \\u201cMixUp gives rise to practical differences other than just improved generalisation.\\u201d The finding that MixUp yields greater ImageNet-A robustness (presented later in the paper) also contradicts this early claim.\\n\\nThe finding that MixUp provides more compressed representations does not necessarily mean that masking augmentation methods are better than interpolation ones. The paper seems to acknowledge this as well, in the final paragraph of the introduction, where it describes an experiment in which combining FMix+MixUp gives the best results (presumably because their representations of data are different and therefore combining them would yield the best of both worlds). This seems to contradict the previous adversarial analysis in which MixUp was found to not yield significantly more robustness. Further, the combination experiment has the two leading combination methods (FMix+MixUp and CutMix+MixUp) yield very similar results (within the margin of error), which opens the question of whether FMix meaningfully improves over CutMix.\\n\\nOverall, I find the paper very easy to read and presenting some interesting ideas and even some exciting improvements in performance. I just wish the presentation and the claims made in the analysis of MSDA methods accounted for some of the inconsistencies described above.\", \"update_after_rebuttal\": \"I appreciate the authors' response and clarifications. I maintain my original score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
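For readers following the mask-shape debate in the record above: the sketch below illustrates the general FMix recipe the reviews refer to, thresholded low-frequency Fourier noise, as we understand it from the discussion. The function name, defaults, and the exact frequency envelope are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fmix_mask(shape=(32, 32), decay_power=3.0, prop=0.5, rng=None):
    # Sample complex Gaussian noise in frequency space, attenuate high
    # frequencies with a 1/f^decay_power envelope, invert the FFT, and
    # threshold the grey-scale result so a fraction `prop` of pixels is 1.
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]      # vertical frequency magnitudes
    fx = np.fft.rfftfreq(w)[None, :]     # horizontal frequencies (real FFT)
    freq = np.sqrt(fy ** 2 + fx ** 2)
    envelope = np.maximum(freq, 1.0 / max(h, w)) ** -decay_power
    spectrum = envelope * (rng.standard_normal(envelope.shape)
                           + 1j * rng.standard_normal(envelope.shape))
    grey = np.fft.irfft2(spectrum, s=shape)
    threshold = np.quantile(grey, 1.0 - prop)   # keep the top `prop` mass
    return (grey > threshold).astype(np.float32)

# Usage: the mask decides which source image each pixel comes from.
# mask = fmix_mask(x1.shape[:2])
# mixed = mask[..., None] * x1 + (1.0 - mask[..., None]) * x2
```

Because the mask boundary follows low-frequency noise rather than a rectangle, it carries no guaranteed horizontal or vertical edges, which is exactly the CutMix edge bias the authors and the reviewer argue about above.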
-aThAo4b1zn | A Theory of Self-Supervised Framework for Few-Shot Learning | [
"Zhong Cao",
"Jiang Lu",
"Jian Liang",
"Changshui Zhang"
] | Recently, self-supervised learning (SSL) algorithms have been applied to few-shot learning (FSL). FSL aims at distilling transferable knowledge on existing classes with large-scale labeled data to cope with novel classes for which only a few labeled data are available. Due to the limited number of novel classes, the initial embedding network becomes an essential component and can largely affect the performance in practice. But almost no one has theoretically analyzed why a pre-trained embedding network with self-supervised training can provide representations for downstream FSL tasks. In this paper, we first summarize the supervised FSL methods and explain why SSL is suitable for FSL. Then we further analyze the main difference between supervised training and self-supervised training on FSL and obtain a bound on the gap between the self-supervised loss and the supervised loss. Finally, we propose potential ways to improve the test accuracy under the setting of self-supervised FSL. | [
"fsl",
"theory",
"learning",
"training",
"framework",
"ssl",
"data",
"novel classes",
"algorithms",
"transferable knowledge"
] | Reject | https://openreview.net/pdf?id=-aThAo4b1zn | https://openreview.net/forum?id=-aThAo4b1zn | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"r4z7ByMW-oO",
"iL0I81vT27r",
"hqp53oRYSr",
"ffIrngqRY9m",
"_YqJ6ZJzPb",
"KRD0zkNQEmJ",
"raEVa7FTBh",
"Uvxbqhu7Tbg"
],
"note_type": [
"decision",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040511759,
1605606726606,
1605559180887,
1604642274129,
1604121016038,
1604088138743,
1604043351665,
1603868534676
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3823/Authors"
],
[
"~Nikunj_Saunshi1"
],
[
"ICLR.cc/2021/Conference/Paper3823/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3823/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3823/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3823/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3823/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposed to theoretically explain why a pre-trained embedding network with self-supervised training (SSL) can provide representation for downstream few-shot learning (FSL) tasks. The review process finds that the paper may over-claim the results and that the results seem unsatisfactory. Both Reviewer 4 and Reviewer 5 expressed concerns regarding the writing, organizing, and grammar errors of this paper. The paper needs a substantial revision to improve clarity and accessibility. As pointed out by Nikunj Saunshi\\u2019s public comment, this paper may benefit from discussing the differences from the previous works, including [1].\\n\\n[1] Arora et al., A Theoretical Analysis of Contrastive Unsupervised Representation Learning, ICML 2019\"}",
"{\"title\": \"Missing important citations\", \"comment\": \"You are right, I think we should read and discuss the differences.\"}",
"{\"title\": \"Missing important citations; very similar results and analysis to prior work\", \"comment\": \"The introduction of this submission states, \\u201cAlmost no one analyzes why a pre-trained embedded network with self-supervised training can provide a representation for downstream FSL tasks in theory.\\u201d and the related work section does not seem to have any citations to this effect. We would like to point out that the following works [1,2,4] have theoretically analyzed representations learned from contrastive learning on downstream tasks with few samples, while [3] analyzes the same for reconstruction based self-supervised learning. While [3,4] are quite recent, [1,2] have been online since at least 6 months before the deadline.\\n\\n\\nIn particular, the results and analysis in this submission bear strong resemblance to those from our work [1]. Particularly, theorems 1 and 2 from this submission look very similar to theorems 4.1 and 4.5 (also Theorem 6.1) from [1], with the definition of $\\\\mathcal{L}\\\\_{U}$, $\\\\mathcal{L}\\\\_{U}^{-}$ and $\\\\mathcal{L}\\\\_{sup}$ (from the proof) being very similar to $L\\\\_{un}$, $L_{un}^{\\\\neq}$ and $L_{sup}$ from [1].\\nFurthermore, the proofs of these results are primarily based on the use of Jensen\\u2019s inequality and handling of the \\u201cfalse negative data\\u201d using an intra-class deviation measure $s(f)$, both of which also appear in [1], as does the use of mean classifier in the supervised learning phase.\\nThe main difference seems to be the use of different representation functions $f_q$ and $f_k$ as opposed to the same function $f$ in [1]. This, however, is a straightforward extension since the proofs in [1] do not need the functions to be the same.\\n\\u00a0\\n\\nIf the authors benefited from looking at our results from [1], it should be cited as such, along with a discussion about the differences from [1].\\n\\n\\n[1] Arora et al., A Theoretical Analysis of Contrastive Unsupervised Representation Learning, ICML 2019\\n\\n[2] Tosh et al., Contrastive estimation reveals topic posterior information to linear models, 2020\\n\\n[3] Lee et al., Predicting What You Already Know Helps: Provable Self-Supervised Learning, 2020\\n\\n[4] Tosh et al., Contrastive learning, multi-view redundancy, and linear models, 2020\"}",
"{\"title\": \"Poor writing hampers an otherwise interesting study of a simple method\", \"review\": [\"#### Summary\", \"The authors analyze a self-supervised learning framework for downstream (supervised) few-shot classification. The self-supervised stage is a simplified version of MoCo (He et al. 2019) and relies on class-invariant augmentation of unlabeled data to produce samples for a contrastive loss. This produces two encoder networks that are used in the subsequent few-shot learning stage via a distance-based classification scheme similar to that used by Snell et al. (2017), [1], [2], and Chen et al. (2019).\", \"The authors show that the method minimizes an upper bound on an oracle supervised distance-based classification loss. They then further analyze the looseness by decomposing the self-supervised loss into contributions from false-negative and true-negative samples. They relate these quantities to key methodological considerations, such as the level of diversity in the meta-training/base data and the number of negative samples to use during contrastive learning.\", \"The authors assess this method on the Omniglot and miniImageNet few-shot datasets, following the setup proposed by Hsu et al. (2019) in which the meta-training (aka base) split is treated as unlabeled. The results are strong, though are curiously relegated entirely to the Appendix.\", \"#### Strengths\", \"The overall pipeline is to my knowledge novel, even though the authors are careful to state that the method is not a core contribution as it draws heavily from prior methods. Unlike previous works that consider unsupervised/self-supervised pre-training for few-shot learning, this work provides some theoretical justification for its method.\", \"Due to the judicious choice of considering contrastive learning and distance-based classification, the resulting analysis is relatively straightforward.\", \"#### Weaknesses\", \"This submission is overall poorly written. It was very difficult to parse due to a copious number of grammatical errors. In numerous instances, I can't quite discern what the authors mean. Aside from this, there are many vague statements unsupported by reference or argument.\", \"The organization leaves much to be desired. For example, results of an ablation take center stage in the main text, while key experimental exposition and benchmark results are left entirely to the Appendix.\", \"Comparison to CACTUs (Hsu et al., 2019) is not entirely fair as the method (like most modern contrastive learning methods) requires the specification of instance transformations that are class-invariant for test tasks. This should be noted. (Though comparison to UMTRA (Khodadadeh et al., 2019) is fair.)\", \"#### Recommendation\", \"I currently recommend rejection (3), as the submission's poor writing severely hampers clarity and thus prevents it from meeting publication standards. If the writing were fixed, I would probably rate it around a 6.\", \"#### References\", \"[1] Qi et al., Low-Shot Learning with Imprinted Weights, CVPR 2018\", \"[2] Gidaris et al., Dynamic Few-Shot Visual Learning without Forgetting, CVPR 2018\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice motivation and some good ideas. Need to improve writing and empirical validation.\", \"review\": \"This paper performs theoretical analysis of the relationship between supervised learning (SL) and self-supervised learning (SSL) in the context of few-shot learning (FSL). It aims to quantify the gap in training loss between SL and contrastive SSL on FSL tasks by casting SSL as an SL problem. Using this formulation, the authors show that the self-supervised training loss is an upper bound of the supervised metric loss function, implying that if you reduce the self-supervision loss to be small enough, you can control the model\\u2019s supervision loss on the training data, and thus improve results on the downstream FSL tasks. The theoretical formulation also provides guidelines for the optimal values for the queue size in contrastive SSL, which the authors evaluate on omniglot and miniImageNet datasets, showing that the test performance varies with queue size.\", \"strengths\": \"The motivation to perform theoretical analysis on the utility of SSL for few-shot learning is a good one. While I could not check the proofs thoroughly, they seem to provide a nice framework for explaining why SSL might provide good performance on few-shot learning.\", \"weaknesses_and_suggestions\": \"1. The paper is very difficult to follow. While the theory section (Section 4) is reasonably well-written, the rest of the paper needs a substantial rewrite to improve clarity and accessibility. Unfortunately the writing quality makes it difficult to make a strong case for the paper. 2. The experiments only touch upon one aspect of theory discussed in the paper -- the impact of N and M on test performance. A more thorough comparison with SL based few-shot learning and the impact of other factors like number of classes and class imbalance on test performance would make the paper stronger.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A theoretical justification for why self-supervised learning (SSL) helps few-shot learning (FSL). Make connection between SSL loss and supervised learning loss.\", \"review\": \"*** Key idea justification ***\\n\\nThis work shows that contrastive loss (for self-supervised learning) is an upper bound of cross-entropy loss (for supervised learning) and leads to a conclusion that this is the underlying reason why self-supervised learning can help supervised learning in FSL. This reasoning makes little sense with little logic. \\n\\nConcretely, there exist a number of to-be-answered questions before connecting the two things and making theoretical conclusion: \\n1) Why we need to know the upper bound of supervised learning loss given that we already have label data with the training data? \\n2) Decreasing SSL loss does not necessarily mean that supervised learning loss is also decreased, as it is just an upper bound. No guarantee there. \\n3) Assume SSL helps decrease the supervised learning loss, then why is this needed when we can simply use class labels to minimize it? Intuitively, the two are overlapping and SSL should be not useful. \\n\\nBesides, this paper only considers the case of contrastive loss which involves false negative samples. What if applying other SSL loss function, for example rotation? I do see the same analysis applies to that.\\n\\nIn conclusion, the proposed theory makes little sense and is also over-claimed. The whole study is neither theoretical nor logical. \\n\\n\\n*** Presentation clarity ***\\n\\n1) In general, the presentation of this paper is poor. One reason is using odd/strange terminologies and equation expressions. For example, contrastive loss (Eq 1) and cross-entropy loss (Eq 3) both are not given in their common expression. Other examples are \\\"Supervised Metric for Representations\\\" and \\\"Self-Supervised Metric (SSM) for Representations\\\", \\\"a metric loss\\\", etc. \\n\\n2) Quite a few equations are hard to read and understand. First, Eq (1) and (3) are not expressed in a standard way. How are they derived? \\n\\n3) What is the difference between a class-wise prototype pc and an episodic mean of support samples (At the end of Sec 3).\\n\\n4) What means by \\\"the class distribution \\u03c1 is uniform\\\" in the proof of Theorem 2?\\n\\n5) What is implied by the last sentence of Sec 4: Theoretically, if given an unsupervised set with infinite classes and data, the performance achieved by SSM can be very close to that by supervised training?\\n\\n\\n\\n*** Grammatical errors ***\\n1) a episodic -> an episodic\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A theoretical trial for understanding whether self-supervised learning helps solving FSL problem\", \"review\": \"The paper proposes to theoretically analyze whether self-supervised learning can help FSL.\\nUnder simplified assumptions (a simple mean classifier is used; training data is balanced; and a particular form of loss is used), the main result in Theorem 1 shows that self-supervised training loss is an upper bound of the supervised metric loss function. \\n\\nThe idea is interesting and inspiring. However, the analysis is less satisfactory. \\nThe main concern is that Theorem 1 and 2 are quite loose. \\nThey only apply for the so-called supervised metric loss function. Is it work for any fk and fq? Can you provide more strict error bound to quantify the difference? As said in the paper, \\\"\\u03b30, \\u03b4 are constants depending on the class distribution \\u03c1\\\", then how to estimate \\u03b30, \\u03b4? If they cannot be estimated, why we need this theory? How to link this theory to the success of self-supervised learning in solving FSL problem? Or can this theory be validated empirically?\\n\\nI think this paper indeed proposes an interesting direction to explore. But without answering the above questions, the current version is not complete enough to be published. \\n\\n===\\n\\nDuring discussion period, I noticed import missing references of this paper as written by Nikunj Saunshi. \\nBesides, the authors do not respond to any of the reviewers' questions. Hence I change my score to strong rejection.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A Theory of Self-Supervised Framework for Few-Shot Learning\", \"review\": \"The paper establishes a relationship between self-supervised learning (SSL) and supervised few-shot learning (FSL) method and shows that when both are equivalent. The whole analysis and proof are based upon the two main assumptions: mean classifier and balanced class training data. The paper shows that if we have a too large number of classes in the SSL, then it is equivalent to the supervised learning scenario and model enjoy the same generalization ability. Always supervised loss is the upper bound by the SSL loss.\", \"comment\": \"\", \"1\": \"The paper theoretically connects the SSL and FSL and shows when both will be equivalent. Theorem-1 shows that the supervised loss is upper bound by SSL loss by a linear relation (mostly scale+shift) when |C|-->infinity then both loss is equivalent. It seems that Theorem-1 is trivial since it is obvious that for the large class there will be very less chance of the negative pair is incorrect (i.e. false negative). If all the negative pair is correct, then it is same as we know the class label and we make the negative pair using the class information of all samples. I believe this theorem provides less useful information for a practical perspective.\", \"2\": \"Theorem 2 provides the underlying factor between the L_sup and L_U, and shows that L_sup loss is upper bound by the loss of the true-negative and the intraclass variance. For the small variance, we can reduce the gap between the supervised loss and SSL loss. Once a trivial solution is when |C|--> infinity. This theorem shows then when |C| is not large still we can still focus on reducing the intraclass variance and reduce the gap.\", \"3\": \"It is clear that if we have large number of class, we can reduce the gap between the supervised loss and self-supervised loss, but why the large batch size help in to get a practically better result? In this case, the probability of the false-negative samples is the same, and it does not depend on the batch size. Could you please explain that? It is written that \\\"We can increase N by increasing the total negative samples N_k\\\", is true but in the total negative samples the probability of the false-negative will be same, and it depends on the number of class only. Then how large batch size help?\", \"4\": \"In the N-way and M-shot, it is intuitive that when M increase the model performance will increase, but why with the increase of the N model performance will increase?\", \"5\": \"Omniglot dataset has 1623 classes, while in the paper it is written that \\\"Omniglot involves up to 4800 classes\\\" please check that.\", \"https\": \"//github.com/brendenlake/omniglot\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
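The bounds contested throughout this record share a generic shape, reconstructed here from the reviews and from Nikunj Saunshi's comment. The constants $\gamma_0, \delta, \alpha, \beta$ and the intra-class deviation $s(f)$ stand in for quantities the paper and Arora et al. (2019) define, so this is a schematic, not the papers' exact statements.

```latex
% Theorem-1-style bound: the self-supervised (contrastive) loss
% upper-bounds the supervised metric loss up to constants that
% depend on the class distribution \rho.
\mathcal{L}_{\mathrm{sup}}(f) \;\le\; \gamma_0 \bigl( \mathcal{L}_{U}(f) - \delta \bigr)

% Theorem-2-style refinement: splitting out the false negatives leaves
% the true-negative part of the loss plus an intra-class deviation term.
\mathcal{L}_{\mathrm{sup}}(f) \;\le\; \alpha \, \mathcal{L}_{U}^{-}(f) + \beta \, s(f)
```

This shape makes the reviewers' objections concrete: the inequality only runs one way, so driving $\mathcal{L}_{U}$ down need not minimize $\mathcal{L}_{\mathrm{sup}}$, and when the number of classes $|C|$ is small the false-negative slack can dominate.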
CBmJwzneppz | Optimism in Reinforcement Learning with Generalized Linear Function Approximation | [
"Yining Wang",
"Ruosong Wang",
"Simon Shaolei Du",
"Akshay Krishnamurthy"
] | We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation. We analyze the algorithm under a new expressivity assumption that we call ``optimistic closure,'' which is strictly weaker than assumptions from prior analyses for the linear setting. With optimistic closure, we prove that our algorithm enjoys a regret bound of $\widetilde{O}\left(H\sqrt{d^3 T}\right)$ where $H$ is the horizon, $d$ is the dimensionality of the state-action features and $T$ is the number of episodes. This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions. | [
"reinforcement learning",
"optimism",
"exploration",
"function approximation",
"theory",
"regret analysis",
"provable sample efficiency"
] | Accept (Poster) | https://openreview.net/pdf?id=CBmJwzneppz | https://openreview.net/forum?id=CBmJwzneppz | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"0PxM0kJ6Uj",
"XKjpLf9jftM",
"GsGrViTfIgp",
"UD-t3bg36lO",
"kwmWMzZwyPE",
"8Kxx8brpwm",
"HLWCfbtxYfY",
"h2bA2wE7iOX",
"-H7ANcZfYi",
"0CNPIETK54n",
"Z2GWArKY92Z",
"fwYSy-IrnS",
"NJpYxFkocwu"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040511825,
1606235676786,
1606229172077,
1606166425164,
1605571342267,
1605541255558,
1605541168438,
1605541061697,
1605540984688,
1604673477722,
1604568419821,
1604110300826,
1603772244739
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3820/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3820/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3820/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3820/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3820/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3820/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3820/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3820/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3820/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3820/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3820/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3820/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper analyzes a version of optimistic value iteration with generalized linear function approximation. Under an optimistic closure assumption, the algorithm is shown to enjoy sublinear regret. The paper also studies error propagation through backups that do not require closed-form characterization of dynamics and reward functions.\\n\\nOverall, this is a solid contribution and the consensus is to accept.\"}",
"{\"title\": \"Regarding Zanette et al., and bias-variance decomposition\", \"comment\": \"We highlighted the computational efficiency point above, because with current techniques it does not seem possible to get the bias-variance decomposition in a computationally tractable manner. It is not just an issue of the analysis, but rather the algorithm itself. Put another way, we do not think that the optimistic algorithm will be robust to approximation errors, due to subtleties regarding error propagation. On the other hand, we do believe that we can get a computationally _inefficient_ algorithm that is robust even in the GLM setting (which would generalize Zanette et al.), but it would look very different from our optimistic algorithm here.\\n\\nThe point is that these two issues are _not_ unrelated at least given current techniques. We must give up robustness to approximation error to enjoy computational efficiency, so you have to choose which you care about more. Perhaps this is a matter of taste, but we felt that computational efficiency is more important than handling approximation errors. We apologize if we did not make this clear in our earlier comment.\"}",
"{\"title\": \"I thank the authors for their clarifying remarks\", \"comment\": \"I thank the authors for their response, it helped in understandin the author's submission. Still, after considering the author\\u2019s response, I feel my major concerns are still valid and I see little reason to change my overall assessment\\n\\nOn points 1) and 2), I brought up the issues to give the authors a chance to make their submission easier to understand for other readers. For example, if the only way to understand that their approach is optimistic is by carefully examining line 10 of the algorithm then I believe there\\u2019s room for improvement in understandability. The authors seem to believe the paper is fine as is, and as understandability is very subjective we\\u2019ll have to agree to disagree on these points.\\n\\nOn the weakness of assumption 2, the authors respond that \\u201c we do not know of any weaker assumptions that permit computationally and statistically tractable RL.\\u201d This ends-justify-the-means argument does not advance science. Just because one wishes to prove something, and hasn\\u2019t come up with realistic assumptions by which to prove the thing doesn\\u2019t justify making unrealistic assumptions.\\n\\nIt therefore saddens me that the authors didn\\u2019t comment on whether the roadmap for weakening assumption 2 as laid out in Zanette would work for the author\\u2019s algorithm. While I agree that Zanette\\u2019s algorithm isn\\u2019t computationally tractable while the author\\u2019s proposed algorithm is, the analysis methods used there (splitting the error into an approximation and variance error term) could, in theory, be applied to the author\\u2019s optimistic algorithm. And that is exactly what I wrote in my comment, saying that the authors could perhaps better analyze their algorithm by using the strengths of Zanette\\u2019s theoretical approach. I would have hoped the authors address this question instead of addressing the unrelated issue of the intractability of Zanette\\u2019s algorithm.\\n\\nFinally, as a comment not on the paper itself but the author's response, I find the comments about what appeared when on arxiv unbecoming for three reasons:\\n\\n- I can\\u2019t verify the authors claim without also discovering the author\\u2019s identity\\n- By claiming that \\u201cZanette et al., cites this paper\\u201d the authors are giving hints about their true identity, which is unprofessional\\n- The intent of mentioning Zanette\\u2019s work was not to say the author\\u2019s work is unoriginal or isn\\u2019t novel, but as a hint on how the author\\u2019s can strengthen their work. Therefore, who published first is irrelevant in this context. I wish the authors had constructively commented on this instead of bringing up irrelevant who-published-first debates.\"}",
"{\"title\": \"Thanks for bringing up the minors\", \"comment\": \"Thanks for bringing up the minors again, here are some responses:\\n\\n1. There are two typos here and thank you for noticing them. In the middle term, we are integrating the derivative f'(s) instead of f(s); in the last term the integrating term should be $f'(<x_\\\\tau, s\\\\hat\\\\theta+(1-s)\\\\bar\\\\theta>)$. This chain of equalities actually holds by the fundamental theorem of calculus, instead of the mean value theorem.\\n\\n2. Yes you are right, there should be a factor of 2 here.\\n\\n3. Both are correct. We can mention this in the final version.\\n\\nWe'll make the appropriate changes in the final version. Thanks!\"}",
"{\"title\": \"Thanks the author response. Please address some of my questions in the \\\"Minor Comment\\\" section as well\", \"comment\": [\"I thank the authors for your response. I would like the authors to respond to some of the questions in the minor comments as well before I make my final recommendation. I put them as minor comments because I assume that these technical minors should not affect the main conclusions and theorem of the paper, but they might improve the paper and I would like to see how the authors will take these. For convenience, I will explicitly rewrite some least minor questions here that I would like to hear a response to.\", \"The equation between eq. (5) and eq. (6) on page 14 does not look very right. I think the correct one should be the one with the RHS replaced by $\\\\langle x_{\\\\tau}, \\\\hat{\\\\theta} - \\\\bar{\\\\theta} \\\\rangle f\\u2019(\\\\langle x_{\\\\tau}, s \\\\hat{\\\\theta} + (1-s) \\\\bar{\\\\theta} \\\\rangle)$ for some $s \\\\in [0,1]$ (according to the mean value theorem). If this is true, I am afraid the bounds of the difference between $D_{\\\\tau}$ (after Eq. (10)) might not be precise.\", \"In Corollary 4, shouldn\\u2019t it be **2** $\\\\gamma \\\\| \\\\phi(s,a) \\\\|$ instead of $\\\\gamma \\\\| \\\\phi(s,a) \\\\|_{\\u2026}$?\", \"On page 15, the paper says that E[ xi_tau^# | x_{1:tau}, xi_{1 : tau-1}^# ] = 0. Do we really need that martingale structure when we already consider a fixed g_{epsilon}? Given a fixed g_{\\\\epsilon}, we already have E[ xi_tau^# | x_tau ] = 0.\"]}",
"{\"title\": \"author response\", \"comment\": \"We thank the reviewer very much for his/her helpful suggestions. Below we respond to the main concerns/questions from the reviewer.\\n\\n1. Difference in regret definition with Jin et al: The objective in Jin et al., is *not* the value function estimate, but rather the true value function of the deployed policy \\\\pi_k. Due to the fact that we are taking an expectation (and we do not update the policy during the episode), the two objectives are actually the same. Note also that the actual collected rewards differ from this quantity by at most H\\\\sqrt{T} due to Azuma-Hoeffding.\\n\\n2. In the proof of Fact 1, why Q_H^\\\\star is in \\\\mathcal{G}? Indeed, it is required (as the base case of the inductive proof) that the Q_H^* for the last episode belong to the function class G. We will clarify this in the revised paper, as suggested by the reviewer by strengthening the Assumption 2 to make sure that Q_H^* is in \\\\mathcal{G}.\\n\\n3. Comparison to the results of Ruosong Wang et al. We would like to clarify that, the paper mentioned by the reviewer is actually a *follow-up* paper of our results. Indeed, our paper was arxived in December, 2019 and the paper of Ruosong Wang et al. was arxived in May, 2020, in which they clearly cited our paper as a starting point/prior literature.\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer very much for his/her appreciation of our paper and the helpful comments. We do not think that the Factored MDP model satisfies optimistic closure for any \\\"small\\\" Q-function class. Indeed, it is known that the optimal Q function for a factored MDP is in general extremely complicated (formally cannot be expressed as a polynomially sized circuit), and for this reason, all known provably efficient approaches for Factored MDPs are model based. We do not expect model-free methods to be sample-efficient in these environments.\\n\\nRegarding the LQG, we do know that the simpler LQR satisfies completeness using quadratic value functions, but unfortunately we do not believe it satisfies optimistic completeness. The reason is that with a quadratic value function at time h+1, the one-step optimal policy is linear, which results in a quadratic value function at time h. But the optimism bonus results in a quartic value function at time h+1, which does not admit a closed form optimal policy.\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer very much for his/her helpful suggestions.\\n\\nIt seems the main concern of this reviewer is regarding the contribution of this paper relative to the results from Jin et al. As remarked by the reviewer, Jin et al's proof goes through as is for linear functions, as they only back up functions in our class \\\\Gcal_{up}. However, as remarked, we feel this is a conceptual point worth emphasizing as the linear MDP does not naturally accommodates the GLM structure.\\n\\nOn the technical side, Our analysis has some differences with that of Jin et al., to address the GLM setting. For example, we use the constrained least squares objective, rather than the regularized objective. This manifests in lemma 6, where we also incorporate the required changes to address GLMs.\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the helpful suggestions. For the concerns raised by the reviewer, we respond as follows:\\n\\n1. Some intuitive explanation of notations: The motivation for G_up (not G_op, as the reviewer mistakenly copied) is to define a function class that covers all optimistic policies. The matrix A in the definition of G_up is part of the confidence interval, similar to the role of the sample covariance in the construction of confidence intervals for linear contextual bandit. The motivation of Lambda_{h,t} is the sample covariance matrix which will be used to construct confidence intervals.\\n\\n2. It's not clear that the LSVI-UCB algorithm is optimistic: We would argue that the optimistic nature of the LSVI-UCB algorithm is very clear, from line 10 of the algorithm that clearly appends a confidence interval term to the generalized linear estimates of the Q function. This kind of optimism term appears in many other settings, including (generalized) linear bandits, so there's no need to look into further details of the algorithm to see the optimistic structure.\\n\\n3. Assumption 2 is fairly strong and not realistic: This is true to some extent, but several points are worth emphasizing. First, Assumption 2 is strictly _weaker_ than the linear MDP assumption that has become quite popular in the theoretical analysis of RL (as we show). Second, it is unlikely that these kinds of optimistic algorithms provably succeed under much weaker assumptions, indeed very recent work shows that just assuming realizability of Q^\\\\star would be insufficient. Third, we do not know of any weaker assumptions that permit computationally and statistically tractable RL (to date, there is no computationally efficient method for the the low IBE setting of Zanette et al.). Thus our results represent the weakest tractable assumptions to-date and are close to what is information-theoretically possible.\\n\\n4. Bias-variance tradeoff: While our analysis indeed assumes there exists a perfect fit of the Q functions in generalized linear forms, we would like to clarify that reporting lower regret is NOT the focus or objective of this paper. The main objective/message of this paper is to show that the regret analysis and algorithms that are previously developed for purely linear Q approximation functions can be extended to the much more general model classes discussed in this paper, thereby making the analysis/algorithm more applicable to reinforcement learning questions.\\n\\nWe would also like to point out that Zanette et al. appeared on arxiv several months after this paper first appeared. Indeed Zanette et al., cites this paper!\", \"minor_issues\": \"Yes we can update both the H-dependence and the venues in the bibliography\"}",
"{\"title\": \"A promising line of research, but assumptions are not well motivated and paper isn't clearly written\", \"review\": \"Summary After Discussion Period:\\n-----------------------------------------------\\nAfter corresponding to the authors and reading other reviews, my assessment hasn't changed much, which is that the paper is a good line of research but still needs improvement readability and strictness of assumptions.\\n\\nThe authors and reviewers all point out that this work is a relaxation over some previous works, e.g. Jin et al. Yet [1] has assumptions which are relaxed further than in this paper, and show that at regret bounds are possible with weaker assumptions. \\n\\nThe author's correctly point out their algorithm is computational efficiency while [1]'s algorithm isn't, which is a point in favor of the author's algorithm. Unfortunately, the benefits in computational efficiency were not clear to me and none of the other reviewers highlighted computational efficiency as one of the algorithm's strengths. If indeed one of the author's algorithm's main advantage over other work wasn't clear to the reviewers, then the paper still has room for improvement in readability.\", \"summary\": \"--------\\nIn this paper, the authors propose a Q-learning method to solve episodic RL problems. Key to their method is assuming the Q-function takes the form of a generalized linear function plus an optimism term. Once this assumption has been made, they demonstrate that their algorithm, The LSVI-UCB algorithm provably finds a policy with bounded regret.\", \"pros\": \"-----\\nI like that the authors were able to show that using generalized linear functions for the Q function opens the doors to many theoretical analysis possibilities, and I like the modification they made to allow the Q-function to be optimistic. I thought a further strong point of the paper was the proof which shows that using general linearized q functions isn't a stricter assumption than the linear MDP assumption.\\n\\nAnd in general, I like the idea. I think it's a good line of research, and an idea which will yield important progress in the field of RL research.\", \"cons\": \"-----\\nI found this paper hard to follow. After multiple readings, I was still confused in multiple areas. This included \\n\\n - What is the motivation of the function set $G_\\\\text{op}$?\\n - What are some examples of common RL problems for which the Q-function is / is not a generalized linear function\\n - Where does the matrix $A$ come from in $G_\\\\text{op}$?\\n - What is the motivation for the $\\\\Lambda_{h,t}$ in the algorithm\\n - Links to previous work, for example using the generalized linear models as a Q function, it's unclear if this is a new idea or is already present in previous work.\\n - It would be good to point out links not just in the related work section, but also while introducing concepts.\\n\\nTo discover why the author's algorithm is optimistic, we need to look at the details of the LSVI-UCB algorithm, a clear explanation isn't given anywhere else.\\n\\n\\nThe other issue I had was with assumption 2. It is a fairly strong assumption, and although the author's show it holds for the linear MDP setting, it isn't nicely motivated why this assumption is realistic for other settings.\\n\\nIn any modeling setting, there's always a bias-variance tradeoff. As the model becomes more complex, it better captures the observations but is more prone to fitting the noise. 
By assuming the model can perfectly fits the true Q-function, the author's have assumed there's no \\\"bias\\\" in this bias variance tradeoff, and it is not surprising that they can then report lower regret as compared to other methods.\\n\\nI feel a better analysis should be more along the lines of [1], where they introduce the \\\"inherent bellman error,\\\" an error stating how far the true Q function is from the best estimate. One sees that this inherent bellman error then factors prominently in the regret bounds they show. There, they recognize that such a bellman error is generally non-zero, and prove their results by splitting the regret into an approximation and a variance term (like the bias variance trade-off).\", \"minor_concern\": \"---------------------\\nIn theorem 1, you state the regret is $O(H\\\\sqrt{d^3T})$ while in the abstract it's $O(\\\\sqrt{d^3T})$.\\nPlease keeps it consistent.\\n\\nIn the bibliography, many of the works have been published. It's nice to cite the published version (i.e. the ICML or NeurIPS version) instead of the Arxiv version.\", \"conclusion\": \"----------------\\nIf the authors could better address assumption 2 (ideally by doing an analysis akin to [1]), this\\nwould make the theory a good contribution to RL research. And if the authors could write their paper to tell a compelling story, where the different facts, assumptions, definitions and theorems nicely flow into one another, and one understands where things are coming from and where they are going, then this would be a good submission. But in it's current form, with an assumption which masks a large source of regret and a story which is hard to follow, I don't believe this paper is ready for submission.\\n\\n[1] Andrea Zanette, Alessandro Lazaric, Mykel Kochenderfer, and Emma Brunskill. Learning near\\noptimal policies with low inherent bellman error. arXiv preprint arXiv:2003.00153, 2020a\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Important generalization; questions on novelty of techniques.\", \"review\": \"The backdrop for this work is the linear MDP model. In linear MDPs, typically the transition function is assumed to be a low rank matrix in the span of d feature vectors (over S, A); such an assumption lends itself to regret bounds that only scale with d (and not explicitly with the size of the state space).\\n\\nThe first contribution here is to establish that it is enough to assume that the function approximation class (for Q functions) is closed under an optimistic (~inverse covariance bonus) version of Bellman update. Qualitatively, this is desirable because this is an assumption on the Q-function class and does not present an explicit assumption on dynamics, unlike linear MDPs. The paper establishes that this is strictly more general the linear MDP assumption, where the above-discussed closure holds for backups of all functions (and not just linear Q functions). It must be noted that Jin et al had already noted & observed that such an assumption is enough, and that their proofs accommodate this. \\n\\nThe second contribution is that the Q function class is generalized here to accommodate generalized (vs just) linear models.\", \"strengths\": [\"I think this is an important relaxation in assumptions to point out. Bellman closure of the policy class seems like a necessary precondition; optimistic variant is a bit further, yet more palatable than a factorization of the dynamics matrix.\", \"The GLM part of the extension could be significant in practice, given similar observations in supervised learning.\", \"The proof exposition (Appendix A) here is potentially cleaner that Jin et al.\"], \"comments\": [\"Regarding the first contribution, did the authors think it was necessary to modify any part of the proof from Jin et al? From my reading, since all concentration arguments were always made on backups, it seemed their proof did indeed go through.\", \"Regarding the second contribution, what changes did does this work introduce to handle GLMs? I understand part of the answer may be in Lemma 6.\", \"Typo: Page 5 > linear MPD?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting contributions to the line of research on episodic MDP learning with function approximation\", \"review\": \"The authors studies an episodic MDP learning problem, where they propose to study an Optimistic Closure assumption which allows the Q function to be expressed as a generalized linear function plus a positive semi-definite quadratic form. They motivate the assumption by showing that the assumption allows the tabular MDP case to be modeled, and that the Optimistic Closure is in fact a strictly weaker assumption than the linear MDP assumption made in previous related works. The authors then proceed to the design and analysis of the LSVI-UCB algorithm, which involves estimating the the parameter of the GLM model by a ridge estimator and adding an optimistic exploration bonus to the Q function. The authors propose a regret bound for the algorithm.\\n\\nThe proposed work is an interesting development to the line of research on RL with function approximation, and is large well written. I am in favor of acceptance, given that it provides a non-trivial extension to what is known and the Optimistic Closure assumption seems to me to be closer to the reality than the linear MDP assumption. One suggestion is to investigate if other large scaled but structured MDP models, such as the Factored MDP model by Osband and Van Roy 2014 : https://papers.nips.cc/paper/5445-near-optimal-reinforcement-learning-in-factored-mdps, and the LQG model, satisfy the Optimistic closure assumption with appropriate choices of $\\\\phi, f$.\", \"minor_comments\": \"In the abstract, brackets are missing for the d^3\\\\sqrt{T} regret bound.\\n\\nOn page 3, $\\\\Gamma$ should be replaced by $\\\\gamma$.\\n\\nOn page 5, MPD -> MDP\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A nice extenstion of analysis for LSVI-UCB with generalized linear function approximation\", \"review\": \"### Summary\\nThis paper analyses an existing algorithm (LSVI-UCB) with generalized linear function approximation instead of conventional linear function approximation. Under this generalized linear setting, they propose a so-called \\u201coptimistic closure\\u201d assumption which is shown to be strictly weaker than the expressivity assumption in the conventional linear setting. The paper then proves that LSVI-UCB still enjoys sub-linear regret in the generalized linear setting with strictly weaker assumptions. The paper also derives a general error propagation through steps that do not require a closed-form expression of the empirical dynamic and reward functions as in the linear case; this could be applicable to general function approximations. \\n### Strong points \\n-\\tNovelty: The generalized linear setting appears novel and generalizes the linear settings. \\n-\\tSignificance: The optimistic closure appears novel and is strictly weaker than the linear MDP assumption in the prior works. \\n-\\tCorrectness: A complete analysis that successfully retains a sublinear regret and honest comments on the limitations of the present work. \\n\\n### Weak points \\n-\\tThe work is almost merely about analysis of an existing algorithm with modest algorithmic contribution (which however is not a big problem). There are some parts of the proofs pointed out in the Minor comment section that potentially require some attention (but I believe these are minor points which could be fixed if there is any issue)\\n\\n### Minor comments \\n-\\tPeriod \\u2018.\\u2019 after the first sentence of the second paragraph of section 2.\\n-\\tFirst sentence of section 3: \\u2018MPD\\u2019 -> \\u2018MDP\\u2019\\n-\\tLemma 1: Should it be $\\\\pi_{h,t}$ instead of $\\\\pi_t$ there?\\n-\\tIn Appendix A: \\u201cWe believe these technical results will be useful in designing RL algorithms for general function classes\\u201d. It seems that an analysis of LSVI-UCB with general function classes has recent done in [1] (?) \\n-\\tIn Corollary 4, shouldn\\u2019t it be **2** $\\\\gamma \\\\| \\\\phi(s,a) \\\\|$ instead of $\\\\gamma \\\\| \\\\phi(s,a) \\\\|_{\\u2026}$?\\n-\\tAt the end of Page 12: \\u201cThe first term forms a martingale\\u201d -> shouldn\\u2019t it be a \\u201cdifference martingale\\u201d instead?\\n-\\tThe equation between eq. (5) and eq. (6) on page 14 does not look very right. I think the correct one should be the one with the RHS replaced by $\\\\langle x_{\\\\tau}, \\\\hat{\\\\theta} - \\\\bar{\\\\theta} \\\\rangle f\\u2019(\\\\langle x_{\\\\tau}, s \\\\hat{\\\\theta} + (1-s) \\\\bar{\\\\theta} \\\\rangle)$ for some $s \\\\in [0,1]$ (according to the mean value theorem). If this is true, I am afraid the bounds of the difference between $D_{\\\\tau}$ (after Eq. (10)) might not be precise. \\n-\\tThe second paragraph on page 12: \\u201cHence $y_{h, \\\\tau}$ is not measurable with\\nrespect to the filtration $F_{\\\\tau}$ , which prevents us from directly applying a self-normalized martingale\\nconcentration inequality\\u201d. Should it be $F_{\\\\tau-1}$ instead of $F_{\\\\tau}$?\\n\\n-\\tOn page 15, the paper says that E[ xi_tau^# | x_{1:tau}, xi_{1 : tau-1}^# ] = 0. Do we really need that martingale structure when we already consider a fixed g_{epsilon}? Given a fixed g_{\\\\epsilon}, we already have E[ xi_tau^# | x_tau ] = 0. 
\\n\\n\\n### Questions for the authors \\n-\\tIn Chi Jin et al. 2019, the regret is the difference between the optimal value function and the value function estimate while in the present paper, the regret is the difference between the optimal value function and the expected value of the cumulative rewards by the algorithm. What is the difference between these two notions of regret? Can it make the two results comparable? \\n-\\tIn the proof of \\u2018Fact 1\\u2019, why Q^*_H \\\\in \\\\mathcal{G}? For that to hold, it seems to require that the expected reward \\\\mathbb{E}[r_H] has a generalized linear form of \\\\mathcal{G}? If so, one way to fix it is maybe letting 1 <= h <= H (instead of 1 <= h < H) in Assumption 2? \\n-\\tIt seems that [1] already analyses LSVI-UCB with general function approximations which means that [1] is more general than the present work (?) If so, could the authors comment on the benefit of this work for a generalized linear function class given that an analysis for a general function class has been done? For example, does the present work give a tighter bound when considering generalized linear function as compared to the bound for a general function class in [1]?\\n\\n\\n### My initial recommendation \\nOverall, I vote for accepting. An extension from linear settings to generalized linear settings is novel and natural, and it must be done at some point. I think this work is nice for filling in that gap. \\n\\n### My final recommendation \\n\\nI remain my initial score after the discussion. \\n\\n### References\\n[1] Ruosong Wang et al. \\u201cReinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension\\u201d\\n\\n\\n### Additional comments about the correctness of the proof of Lemma 8\\n\\nI have recently checked their proof of Lemma 8 and noticed one thing that looks a bit strange to me. Since the discussion is over, I hope the authors will clarify/fix it in their final paper. That is, in the proof of Lemma 8 in the step where they applied Lemma 7 (Azuma's), they used $c_{\\\\tau'} = |q(u_{\\\\tau'}, \\\\phi')|$, but the Azuma's inequality requires that $c_{\\\\tau'}$ is a constant while here $|q(u_{\\\\tau'}, \\\\phi')|$ is a random variable (depending on the random variable $u_{\\\\tau'}$). How is this possible to apply Azuma's inequality here when $c_{\\\\tau'} = |q(u_{\\\\tau'}, \\\\phi')|$ is random?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
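Several exchanges above turn on "line 10 of the algorithm", the optimistic Q estimate built from a GLM fit plus an elliptical confidence bonus. A minimal sketch of that construction follows; the link function, the bonus scale gamma, and the clipping ceiling are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def optimistic_q(phi_sa, theta_hat, Lambda, f=np.tanh, gamma=1.0, v_max=10.0):
    # GLM point estimate f(<phi(s,a), theta_hat>) plus a UCB-style bonus
    # gamma * ||phi(s,a)||_{Lambda^{-1}}, where Lambda is the (regularized)
    # feature covariance accumulated over past episodes:
    #   Lambda = lambda_reg * I + sum_t phi_t phi_t^T
    point = f(phi_sa @ theta_hat)
    bonus = gamma * np.sqrt(phi_sa @ np.linalg.solve(Lambda, phi_sa))
    return min(point + bonus, v_max)  # optimistic value, clipped at a ceiling
```

The "optimistic closure" debated above is then the requirement that Bellman backups of exactly such bonus-augmented functions stay inside the function class, which is why the quadratic-form bonus term appears in the definition of G_up.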
xfOVXyO_cwJ | Empirical Frequentist Coverage of Deep Learning Uncertainty Quantification Procedures | [
"Benjamin Kompa",
"Jasper Snoek",
"Andrew Beam"
] | Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model's uncertainty is evaluated using point-prediction metrics such as negative log-likelihood or the Brier score on heldout data. In this study, we provide the first large scale evaluation of the empirical frequentist coverage properties of well known uncertainty quantification techniques on a suite of regression and classification tasks. We find that, in general, some methods do achieve desirable coverage properties on \emph{in distribution} samples, but that coverage is not maintained on out-of-distribution data. Our results demonstrate the failings of current uncertainty quantification techniques as dataset shift increases and establish coverage as an important metric in developing models for real-world applications. | [
"uncertainty quantification",
"coverage",
"dataset shift"
] | Reject | https://openreview.net/pdf?id=xfOVXyO_cwJ | https://openreview.net/forum?id=xfOVXyO_cwJ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"o0qKWjMsS8T",
"8aLdmZAhejq",
"LrvTfsG_3D",
"OPVeeq4Vax",
"Fe4jq_oQQY_",
"3pUPyzAUWp",
"g1QtYFTJ6s7",
"Mdt_15ElLcc",
"EexjCtRnul"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040511893,
1606180398118,
1606180361994,
1606180325921,
1606180289271,
1604208543030,
1603960881380,
1603764149235,
1602744019127
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3804/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3804/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3804/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3804/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3804/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3804/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3804/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3804/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Overall, the reviewers agree that there is definite value in the empirical evaluation you have provided. However, as you have acknowledged in your responses to the reviewers, the presentation could be significantly improved. A final point that was not touched upon by the reviewers--where possible (e.g. certainly not ImageNet, but for some of the smaller datasets in Table 1) it would be helpful to have a comparison to fully Bayesian methods (you have linear regression and GPs, but I don't see the implementation details; my suggestion is to implement these within an MCMC framework, specifying reasonable priors over the (hyper)parameters).\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Thank you for your time in reviewing our work. We think that actually we are perhaps more well-aligned then you may think, and are sorry to see that this alignment was potentially obfuscated due to the paper\\u2019s presentation. It was not our intention to claim that we were introducing coverage to the machine learning community since, as you highlight, the topic has been a rich source of research for many decades. Instead, it was our intent to evaluate the empirical coverage properties of this specific class of methods since 1) they are the subject of much attention in the deep learning literature and 2) as the other reviewers note, such an evaluation for these models had not yet been done. We should have included the references provided by the reviewer and will provide them in updated versions of the manuscript.\\n\\nWe would also like to push back against the claim that \\u201cthese results will not generalize in any meaningful sense\\u201d since the results were consistent across all of the regression datasets on which the GP was used. We understand and appreciate that there are conditions (e.g. the ones from Hadji & Szabo) under which it can fail, but to discard 9 evaluations on real datasets under realistic conditions seems to be entirely too strong and unsupported by our results.\"}",
"{\"title\": \"Confidence interval vs prediction interval\", \"comment\": \"Thank you for your review -- we found it helpful and appreciate the time you and all other reviews contributed. With regards to specific terminology like \\u201cconfidence levels\\u201d or \\u201cconfidence intervals\\u201d, note that we are not actually attempting to construct a confidence interval as the term is used in the statistical literature. A confidence interval is an interval that, under repeated sampling, will contain the true population parameter (e.g. the mean) of interest with probability at least 1 - $\\\\alpha$. Empirical evaluations of confidence intervals are only possible using simulations when the true value of the mean is known. Since such simulations would likely be unrealistically simple for deep learning scenarios, we sought here to evaluate prediction intervals which is a set that contains the observed values with probability 1 - $\\\\alpha$. We appreciate this distinction is confusing and will make it more clear in a revised version of the paper.\"}",
"{\"title\": \"Literature and experiments\", \"comment\": \"Thank you for your thorough and insightful review. We appreciate the references to the literature. While these references [R1, R2, R3] are related ideas, none of the work touch on the exact same concept we are trying to capture here. In [R1, R2], the authors develop techniques to create intervals that have provable coverage guarantees. While this is impressive work, here we aimed to quantify the empirical coverage of the built in uncertainty estimates of existing methods. To our knowledge, this has not been done before for the approximate Bayesian methods that we consider. However, we will include these citations in revisioned versions of the manuscript due to their relevance.\\n\\nWith regards to the Experiments section, we will consider your suggestions for future iterations of the paper. Comparing the ordering of coverage against other metrics such as Brier score is a great idea. Indeed, we believe that coverage will strengthen the evidence that current methods do not behave as we might hope they would in OOD situations. \\n\\nYour comments about hyperparameters is particularly relevant and we will include a larger discussion about HPs in future iterations of the work. Our current study was designed to assess the performance of common setups that an end-user might employ in practice, and we do believe that it has captured that scenario. We did a hyperparameter search for the final models trained in the paper and with this many methods and datasets, an exhaustive analysis of the effect of all possible HPs on coverage would likely be imfeasible. But this remark is important and we will consider how to best incorporate it in a future iteration of our work. \\n\\nYou are correct that there is an implicit assumption that the classification labels are unordered in our definition of coverage for classification. We will make that explicit in a future revision. However, for binary classification, is it true that labels are always ordered? In a dog vs. cat problem, for instance, there is not an inherent ordering.\"}",
"{\"title\": \"Figures are nontrivial\", \"comment\": \"Thank you for your thorough and helpful review. We will rework the visual representation of results for a future version of the paper. Like you said, it is indeed a hard issue to convey these results with so many axes of variation.\\n\\nWith regards to the specific definition of coverage under covariate shift, we will expand on this in future iterations. Right now, we consider \\u201ccoverage under covariate shift\\u201d to be quantified in the same way as without shift: calculating the fraction of the test prediction interval/sets that contain the true value/label, which is not the same as conditional coverage in Barber et al. 2020. Indeed, nobody is expecting coverage properties to hold under heavy distribution shift. However, this work aims to understand empirically how rapidly coverage deteriorates under shift in practice, thus we did not specify what specific distribution shifts are permitted as in real life, that is unknowable. \\n\\nWe will definitely consider an analysis of conditional coverage for future work. Thank you for the suggestion.\"}",
"{\"title\": \"A timely survey of coverage properties, but visual communication of results needs more work.\", \"review\": \"## Summary\\nThe authors compare empirical frequentist coverage of predictive intervals for several uncertainty quantification methods. The paper covers both classification and regression. The authors define an analogue of a confidence interval for classification. Coverage properties are also studied under covariate shift between training and test sets.\\n\\n## Pros\\nCoverage and width are a standard benchmarks for uncertainty quantification in statistics, and to my knowledge, this is the first work that undertakes a large-scale comparison for deep learning models. Some inspiration seems to have been drawn from Ovadia et al. 2019 in that the set of methods compared are similar and the same architectures are used. However, this work makes an important contribution in focusing on coverage / width, which I would agree are more interpretable metrics for practitioners. The set of methods spans several important strains of the literature: ensembling, Bayesian approximation, Dropout, GPs.\\n\\n## Cons\\nThe work is timely and of broad interest; however, I think the presentation of results still needs some refining. Given that this paper focuses on empirical results, I would suggest the authors spend more time developing effective visualizations to communicate their conclusions.\\n\\nTables 1 2 and 3 are perhaps necessary as a reference, but cry out for a visual aid. The authors state \\\"We see that higher coverage correlates with a higher average width.\\\" This is something that seems like it could be communicated more immediately with the right plot.\\n\\nFigure 4 conveys some visual trends, but also could be improved. The coverage plot contains mostly blank space. The dots are clustered together and impossible to differentiate. In the width plot, what is communicated is that all methods have wider intervals with more shift. However, it is hard to differentiate the methods: again, the dots are on top of each other, the colors blend such that they do not seem to actually correspond to the colors on the legend (perhaps alpha should not be used here?).\\n\\nFigure 5 has similar problems. The use of alpha means that colors blend together and cannot be looked up in the legend. As before, the dots in the legend are tiny, so it is hard to differentiate the shades even in the legend. What is visually communicated is the spread of performance, but conclusions about any particular method are nearly impossible.\", \"clearly_this_is_a_difficult_problem_to_solve\": \"there are 7 methods, and several axes of variation (coverage, width, shift). However, the plots at the moment do not convey much information aside from overall trends. Visual understanding method-specific results is not possible at the moment.\\n\\nI would also suggest that the authors devote more attention to the definition of coverage for predictive intervals, and how it relates to distribution shift. For example, the authors define coverage as equation (1) holding for any distribution P. It is not explicitly stated, but the implication here seems to be that the set $\\\\hat{C}_n(x_n)$ is determined from training data distributed as P, and coverage is measured from data drawn from the same distribution (i.e. this definition does not allow covariate shift). It would be useful for the authors to state in mathematical terms, what it means for coverage to hold under covariate shift. 
Is this equivalent to the notion of conditional coverage as defined in Barber et al. 2020? These are subtle enough concepts that I think they should be more precisely spelled out in the paper, even if some intuitive definition of covariate shift is widely understood. Clearly we cannot expect coverage to hold under a distribution shift that changes the conditional P(Y | X) between training and eval. What are the limits of what the authors allow?\\n\\nI would also suggest that the authors might include an explicit analysis of conditional coverage. For example, all methods seem to enjoy 95% coverage for in-distribution eval sets. However, it would be interesting to know if this coverage is uniform across classes or any other useful clustering of the data. \\n\\n### A few specifics:\\n* The authors' analogue of confidence interval for a classifier is novel to me, and is a convenient way to unify the presentation of results between classification and regression. If this is a novel definition, I would suggest the authors more explicitly point this out, as future literature may use it and should cite it.\\n* Figure 4 is out of order with figure 3 - this needs to be fixed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting but not very insightful analysis, also lacks a methodological contribution.\", \"review\": \"**Summary and key claims**\\n\\nThis paper provides a comprehensive evaluation of the empirical frequentist coverage properties of existing uncertainty quantification baselines on both regression and classification tasks. The paper focuses on frequentist coverage as a faithful metric for the quality of uncertainty estimates. The experimental evaluations in the paper imply that accurate out-of-distribution coverage is a hard target for most existing baselines; a problem exacerbated by settings were dataset shifts are prevalent. \\n\\n*The key contributions claimed by the paper are:*\\n- Introduces coverage and width as a natural and interpretable metrics for evaluating predictive uncertainty.\\n- Provides a comprehensive set of coverage evaluations for popular uncertainty quantification baselines.\\n- Examines how dataset shift affects these coverage properties.\\n \\n**Originality and Significance**\\n\\nFrequentist coverage is perhaps the most classical (and straightforward) measure of the quality of uncertainty estimates in statistics, so it's a bit odd that the authors claim the introduction of coverage and interval width as one of their key contributions. Despite not being as popular in the machine learning community, frequentist coverage has been considered in [R1] and [R2], and even coverage under dataset shifts was considered in [R3]. These existing papers not only consider frequentist coverage as a metric for uncertainty estimates, but they go as far as developing methods that provide theoretical guarantees on coverage. In fact, [R2] gives a more complete picture of uncertainty estimates by assessing both coverage and discriminative accuracy as both metrics do not necessarily correlate. \\n\\nI think that the key contribution of the paper is the experimental evaluations on many baselines and many datasets to analyze the performance of different methods with respect to coverage. While this analysis is interesting, it lacked insights into baselines' performances and the role of the evaluation metric used in assessing the comparative performances of baselines. For the most part, the experimental section was limited to reporting performance of all baselines on all datasets without providing insights into **why some methods perform better than others w.r.t this specific coverage metric** and **how the introduction of the coverage metric changes our perception on which methods are best**. I was expecting to see more evaluations that rank baselines w.r.t say calibration or Brier score, and then show that a ranking based on coverage would be significantly different, thereby motivating the usage of coverage in the uncertainty analysis toolbox. I would have also appreciated a breakdown of aleatoric and epistemic uncertainty, and how coverage may be a good metric for assessing either types of uncertainties, etc. Having read the experimental section---which is the key section in this paper---I was not exactly sure what to make of it. \\n\\nThe key take away of the experiments highlighted in the abstract and discussion is that uncertainty estimates do not generalize well to out-of-distribution samples. However, such finding is not new and has been discussed before in (Ovadia et al. (2019)). 
Also, it is not clear how the introduction of the coverage metric helps us arrive at this conclusion; it seems to me that the same conclusion could have been arrived at with calibration or AUC-ROC on out-of-distribution samples.\\n\\n**Technical comments**\", \"i_have_two_main_comments_on_the_technical_aspects_of_the_paper\": \"1) The authors found that GPs are clear winners when it comes to coverage. However, I am afraid that the frequentist coverage of Bayesian uncertainty (credible) intervals is extremely sensitive to the selection of the Bayesian prior (see the works of Szabo and van der Vaart in [R4] and references therein). Frequentist coverage is a specifically sensitive quantity in Bayesian analysis, as a very large or very small prior length-scale of a GP kernel may give us very good or very bad coverage. The same issues are relevant (in a more subtle way) in Dropout NNs and any Bayesian NN approximation. Since most baselines considered in your frequentist analysis are actually Bayesian models, it is very important to report how robust your findings are to different selections of the priors (in this case, priors will correspond to hyperparameters). I did not find any discussion of the impact of hyperparameters on the resulting quality of uncertainty intervals and their impact under dataset shifts, despite this being a central concern in Bayesian models. A different approach for tuning hyperparameters may make models other than GPs come out on top in your comparison. \\n\\n2) Frequentist coverage is a concept associated with **regression** problems: we want a **contiguous** coverage set $C$ to contain the **real-valued** prediction target $y$ with probability $1-\\\\alpha$. \\n\\nIn $K$-classification problems, the true real-valued target is the class probabilities $p_K$, and a confidence set in this case would comprise a $K$-simplex that covers the true class probability $1-\\\\alpha$ of the time. But the true class probabilities $p_k$ are never observed; we only observe discrete values for one out of $K$ classes. So calculating the empirical coverage of class probabilities is impossible in classification problems.\", \"the_authors_extend_the_notion_of_coverage_to_classification_in_a_different_way\": \"a coverage set $C$ is a discrete set of possible labels whose summed predicted probabilities add up to $1-\\\\alpha$, and coverage is achieved if the true label belongs to this set (Equation (2)). I find this definition incomplete because your coverage set $C$ is not contiguous anymore; it may contain labels 1 and $K$ and exclude 2,...,$K-1$. As you can see, in this scenario a coverage set wouldn't make sense unless the targets 1 to $K$ are unordered. So I think you have to say that this applies only to unordered categorical targets for it to make sense.\\n \\nAlso, I do not see how this definition would work for binary classification, which is always ordered? In the case of binary classification, it seems to me that calibration is actually a more expressive metric than coverage, as it accounts for class probability even when $K=2$.\\n\\n**References**\\n\\n[R1] Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani, \\\"Predictive inference with the jackknife+\\\", arXiv, 2019.\\n\\n[R2] Alaa, Ahmed M., and Mihaela van der Schaar. \\\"Discriminative jackknife: Quantifying uncertainty in deep learning via higher-order influence functions.\\\" ICML (2020).\\n\\n[R3] Tibshirani, R. J., Barber, R. F., Candes, E., & Ramdas, A. (2019). Conformal prediction under covariate shift.
In Advances in Neural Information Processing Systems (pp. 2530-2540).\\n\\n[R4] Botond Szab\\u00f3, A. W. van der Vaart, and J. H. van Zanten, Frequentist coverage of adaptive nonparametric Bayesian credible sets, Annals of statistics, 2015.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
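[Editor's note: to make the classification coverage set debated in this review concrete, the sketch below builds a discrete prediction set from a softmax output by collecting the highest-probability classes until at least 1 - alpha of the mass is reached, then checks coverage as membership of the true label. It assumes unordered labels, and this "smallest set reaching the mass threshold" construction is one common reading of Equation (2), not necessarily the paper's exact definition.]

```python
import numpy as np

def prediction_set(probs, alpha=0.05):
    """Smallest set of top classes whose predicted mass reaches 1 - alpha."""
    order = np.argsort(probs)[::-1]            # classes by decreasing probability
    cum_mass = np.cumsum(probs[order])
    k = int(np.searchsorted(cum_mass, 1 - alpha)) + 1
    return set(order[:k].tolist())

probs = np.array([0.55, 0.25, 0.12, 0.05, 0.03])   # hypothetical softmax output
S = prediction_set(probs, alpha=0.05)
print(sorted(S), "width =", len(S))   # [0, 1, 2, 3], width 4 (0.97 mass)
print("covered:", 1 in S)             # empirical coverage averages this over a test set
```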
"{\"title\": \"Good experimental evaluation of uncertainty quantification (UQ) in deep learning, but questionable conclusion\", \"review\": \"Paper provides an evaluation of the reliability of confidence levels of well known uncertainty quantification techniques in deep learning on classification and regression tasks. The question that the authors are trying to answer empirically is: when a model claims accuracy at a confidence level within a certain interval , how often does the actual accuracy fall within that interval? This is conceptually similar to the recent slew of papers seeking to empirically evaluate the softmax calibration of deep models where the question there is how often do predicted probabilities of the winning class reflect the true probability of the correct answer, but in this paper the focus is on confidence level and confidence intervals.\\n\\nStudies are conducted for both regression and classification. Confidence levels and intervals are evaluated using the notion of coverage probability and width. While these have a straightforward interpretation in the regression setting, for classification the authors use the top K probabilities that captures 95% of the prediction probability mass to evaluate coverage and width. Thus for classification, the width is the number of classes over which 95% of the probability is smeared. Ideally one would want a model that has a low width, and high coverage probability (i.e a model that is both reliable and accurate). The aim of the paper is not to produce this ideal model, but rather to empirically evaluate whether the predictive uncertainty of various methods proposed in the DL literature can be relied upon. Various UQ methods are tested for both regression and classification datasets, and for the latter case, also under dataset shift.\", \"pros\": [\"Paper is well written and ideas are, for the most part, presented well.\", \"Experiments test a variety of state-of-the-art UQ methods.\", \"There has not been work looking at this specific metric -- i.e., the reliability of prediction intervals. And with increasing usage of DL in high-risk applications, an evaluation of this kind might be useful.\", \"Cons\", \"The authors appear to be conspicuously avoiding much usage of the terms \\\"confidence levels\\\" and \\\"confidence intervals\\\", but it appears that this is really what the paper is about. Justify why you are taking this stance. The section on \\\"theoretical coverage guarantees\\\" is not sufficiently explanatory or convincing in this regard.\", \"A quantitative discussion on the mismatch between the coverage probability and the quality of softmax calibration is missing.\", \"My biggest concern is the conclusion of the paper: the authors state \\\"we conclude that the methods we evaluated for\", \"uncertainty quantification are likely insufficient for use in high-stakes, real-world applications where dataset shift is likely to occur.\\\" Yes, the models' coverage probabilities are indeed significantly below the reported confidence level when data is corrupted (both for CIFAR-10 and ImageNet), but the fact that the width increases should give us an attack vector into the problem. You say this is not sufficient, but I'm not convinced this is the case. 95% of the probability mass is now smeared over a much larger number of classes. 
In other words, an increasing width necessarily means the predictions have increased in entropy, and also that the probability mass in the winning class is now significantly lower under data corruption than what it was for the clean set. Both of these quantities (entropy and winning softmax) can be used to filter out predictions when the model is not confident (subject to a suitable confidence threshold), at least in the non-adversarial case. And in the real-world, this could be a practical approach to ascertain when a model's predictions should be trusted or discarded.\", \"In summary, while the authors have done a commendable job with experimental evaluations, the conclusion is too strong and -- in my opinion -- incorrect to justify acceptance.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
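[Editor's note: the filtering idea at the end of this review can be made concrete with a small sketch; the thresholds below are arbitrary illustrations, not values suggested by the paper or the reviewer.]

```python
import numpy as np

def trust_prediction(probs, max_entropy=0.5, min_winning=0.8):
    """Abstention rule from the review: trust a prediction only if the softmax
    has low entropy and the winning class holds most of the mass."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum()
    return entropy <= max_entropy and p.max() >= min_winning

print(trust_prediction([0.90, 0.05, 0.05]))  # True: sharp prediction, kept
print(trust_prediction([0.40, 0.30, 0.30]))  # False: mass smeared, discarded
```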
"{\"title\": \"Confused definition of coverage\", \"review\": \"In the submitted manuscript, \\\"Empirical Frequentist coverage of deep learning uncertainty quantification procedures\\\", the authors propose to investigate the Frequentist coverage properties of predictive intervals by numerical experiment for a number of machine learning models applied to benchmark datasets. I can't say that I find this a strong submission because:\\n1. the authors give a confused (mis-)definition of coverage; essentially they seem to have taken Barber et al.'s definition of \\\"marginal distribution free prediction intervals\\\", mangled it and then called it Frequentist coverage citing Wasserman\\n2. the authors claim one of the contributions of this manuscript to be \\\"introduce coverage and width as a natural and interpretable metrics for evaluation predictive uncertainty\\\" but in fact these aspects of predictive intervals from ML models has been studied for many years, as a simple google search will confirm\\n3. the results shown will not generalise in any meaningful sense: for example, GPs are found to have excellent coverage over the set of regression tasks shown, but in fact GPs are themselves a case study in the difficulties of achieving Frequentist style coverage in the domain of Bayesian non-parametrics (e.g. Hadji & Szabo 2019; Neiswanger & Ramdas 2020; Rousseau 2016 ; prior over-smoothing being the root of many problems ).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
UiLl8yjh57 | Deep Reinforcement Learning For Wireless Scheduling with Multiclass Services | [
"Apostolos Avranas",
"Marios Kountouris",
"Philippe Ciblat"
] | In this paper, we investigate the problem of scheduling and resource allocation over a time-varying set of clients with heterogeneous demands. This problem appears when service providers need to serve traffic generated by users with different classes of requirements. We thus have to allocate bandwidth resources over time to efficiently satisfy these demands within a limited time horizon. This is a highly intricate problem and solutions may involve tools stemming from diverse fields like combinatorics and optimization. Recent work has successfully proposed Deep Reinforcement Learning (DRL) solutions, although not yet for heterogeneous user traffic. We propose a deep deterministic policy gradient algorithm combining state-of-the-art techniques, namely Distributional RL and Deep Sets, to train a model for heterogeneous traffic scheduling. We test on a diverse number of scenarios with different time-dependence dynamics, users' requirements, and available resources, demonstrating consistent results. We evaluate the algorithm on a wireless communication setting and show significant gains against state-of-the-art conventional algorithms from combinatorics and optimization (e.g. Knapsack, Integer Linear Programming, Frank-Wolfe). | [
"wireless",
"problem",
"time",
"users",
"requirements",
"solutions",
"combinatorics",
"optimization",
"deep reinforcement"
] | Reject | https://openreview.net/pdf?id=UiLl8yjh57 | https://openreview.net/forum?id=UiLl8yjh57 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"AK579phXVG",
"URedVfXVhdU",
"97gfb7iocd9",
"L19-okRRs-_",
"t8fLi8HCjrn",
"G8kdm6z_0dJ",
"LYCmU7kg0VR",
"TYXWyLwuJj",
"phqAnt6FLhf",
"F6U_XD_jIjg",
"9uAeDXOu9LE"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040348261,
1606306337672,
1606305699517,
1606305636577,
1606305595913,
1606305565098,
1605264661841,
1604065859556,
1603901377856,
1603875918626,
1603746950289
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3801/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3801/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3801/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3801/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3801/Authors"
],
[
"~Rahif_Kassab1"
],
[
"ICLR.cc/2021/Conference/Paper3801/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3801/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3801/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3801/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The reviewers mostly agree that this paper presents a new deep reinforcement learning-based approach to solving a challenging problem in the communications domain -- wireless scheduling. However, the main concern, expressed almost unanimously, is about the novelty of the ideas in the paper beyond the assembly of existing deep RL techniques and the translation of the scheduling problem to the language of MDPs in a careful manner that respects modern communication systems standards such as 5G (e.g., URLLC and eMBB traffic demands). A secondary concern, also expressed during the author rebuttal discussion, is about adequate comparison to competing approaches motivated from the literature in wireless scheduling. In view of these issues, I suggest that the author(s) explore more appropriate avenues to submit this piece of valuable translational work, including venues that address the specific topic of wireless communication where a more comprehensive evaluation and comparison could be possible.\\n\\n(NOTE: The comments and evaluation above disregard the \\\"enhanced\\\" draft submitted by the author(s) during the rebuttal phase. I was informed that the submission was reverted to the original draft due to space constraints being exceeded in the enhanced version.)\"}",
"{\"title\": \"Answers to your concerns\", \"comment\": \"We would like to thank you for your questions. We knew the publications that you mentioned but none of them does a centralized scheduling with a traffic of users belonging to diverse classes so we believe that are not closely related. Your concern about combining only existing tools is already pointed out by the last reviewer where we reply.\\nFinally you can think the classes being users from URLLC (ultra reliable low latency) who need short packet but with the minimum possible delay and eMBB (enhanced Mobile BroadBand ) where high data rates are desired without the need for strict latency constraints.\"}",
"{\"title\": \"Justifying the applicability of our paper to ICLR\", \"comment\": \"We would like to thank the Reviewer for his/her nice remarks. Indeed, we believe that the code should be available (we passed it now to an anonymoys repository) in order to be able the reproduce the results.\\nWe are also pleased to know that you agree with our motivation on using/applying a DRL approach to this problem. \\n\\nFirst, we would first to point out that it is an actual challenge to build such an agent. We have recently came through similar open challenges/competitions, such as the one of NOKIA https://github.com/nokia/wireless-suite/. The first problem \\\"TimeFreqResourceAllocation-v0\\\" is very close to our set-up and our ideas could be adapted (in a future work) to their setting. Additionally a scheduling problem with multiple classes is very relevant in many aspects. You can think of computing resources of a company (e.g. Amazon, Microsoft) where they have plenty of users (and some of them prime) needing different type of service requirements and resources. Maybe some of them belong to the class of many CPUs and other of GPUs. How to allocate them is challenging. Also other scenarios coming from finance (stocks) and smart grid could easily show resemblance to our setup. Therefore we believe we can adjust and apply our algorithms to these fields, expecting to obtain promising performance.\\n\\nAs far as the novelty is concerned we believe that combining tools that have already been proposed in a novel way (for example using the dueling architecture to bring the shape of the distributions to the zero and more easily estimate them) is a novelty. The problem was very hard and a random combination of tools doesn't work. Also we would like to emphasize that the call for papers ICLR explicitly says that relevant topics are \\\"applications in audio, speech, robotics, neuroscience, computational biology, or any other field\\\" aare welcome and we believe that our paper is relevant and fits to this venue. \\n\\nPassing to the other mentioned weaknesses, we rewrote the paper with a dedicated subsection describing the MDP problem where we also explain also the randomness coming from the traffic and channel dynamics. Also we hope that the section now where the policy network is explained is more clear. As we also mention to the first reviewer \\\"we make a one analogy with teacher-students in order to just build intuition why it might be helpful. We are fully aware though that a simplifying analogy is not enough. This is why we provided Figure 2 (Figure 3 in the revised paper). We improved the presentation (also by making the Fig 3b&3c less dense) so now it is easier to read the plots. Seeing the figure 3a we show a curve [expected] which corresponds to disregarding the distributional perspective. By using Distr. RL and adding some tricks (Dueling & Reward Scaling) it outperforms the expected. To provide further explanation on why we opt for distributional RL we provide the Fig3b&3c which shows how expected and distributional are handling the different classes. In contrast to Expected, distributional RL understands quite fast the existence of two different classes of users and steadily improves both of them.\\\"\\n\\nFinally it was right your idea to add the normalization of rewards. But we emphasize that the reward scaling is doing some different operation. Please see the figure 1 where is nicely depicted. 
We show in our experiments (Figure 3a) that it helps to have both dueling and reward scaling.\"}",
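[Editor's note: since the response above hinges on the distributional critic, the sketch below shows the quantile-regression Huber loss commonly used to train such critics, in the style of QR-DQN (Dabney et al.); it is a generic illustration and not the authors' exact implementation.]

```python
import torch

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """Quantile regression loss between predicted quantiles and target samples.

    pred_quantiles: (batch, n_q) critic outputs, one per fixed quantile level.
    target_samples: (batch, n_t) samples (e.g. Bellman targets) of the return.
    """
    n_q = pred_quantiles.shape[1]
    taus = (torch.arange(n_q, dtype=torch.float32) + 0.5) / n_q   # quantile levels
    u = target_samples.unsqueeze(1) - pred_quantiles.unsqueeze(2) # (b, n_q, n_t)
    huber = torch.where(u.abs() <= kappa,
                        0.5 * u.pow(2),
                        kappa * (u.abs() - 0.5 * kappa))
    weight = (taus.view(1, -1, 1) - (u.detach() < 0).float()).abs()
    return (weight * huber / kappa).mean()

# Toy usage: 8 quantiles per state-action pair and hypothetical Bellman targets.
pred = torch.randn(4, 8, requires_grad=True)
target = torch.randn(4, 8)
loss = quantile_huber_loss(pred, target)
loss.backward()
```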
"{\"title\": \"We an environment using real-data and we compare also under total data rate metric.\", \"comment\": \"We are pleased to listen that you enjoyed our paper and also the interpretations that we gave.\\nDue to space limitation we could only briefly describe the conventional approaches in the main paper and we give more details in the appendixes. \\n\\nFurther we agree that we have to also compare the algorithms with metric being the total data rate. We added a new section (4.1) in which we built a similar environment that uses data coming from real world measurements. We also added a new powerful scheduling rule called exponential Rule against which we compare. In that environment we show the performance also using the total data rate. The fairness metric is actually a bit tricky. Firstly because every user has different needs so giving to all of them a fair amount of data is not actually desirable. Some of them they just don't need them. Also by construction we assume some of the classes are more important so as to account for users with more expensive contracts. At this point it actually desirable to be unfair and provide them better service.\"}",
"{\"title\": \"We thank the reviewer for giving us access to real data\", \"comment\": \"We would like to thank you for your positive review. Indeed the MDP problem was not clearly stated so we added the section 3.1.1. that now clarifies it explicitly describing the state, action and rewards.The deadline is incorporated because if a user is not satisfied within his latency constraint then it doesn't belong to set of active users $U_t^{active}$ and so the reward that could possibly the agent take from it is lost.\\nTo address the applicability issue and try to justify the possibility of such a solution being implemented in reality we used some of the traces that you provided, specifically the ones from Belgium where they used wireless networks (LTE/4G). We needed again to add some assumptions that we explain in section 4.1 but since we got high performance we are more optimistic for the real-world applicability.\"}",
"{\"title\": \"We improved the presentation and rewrote the paper with new experiments\", \"comment\": \"We agree that the problem we are tackling in this paper is challenging and hard to solve. Indeed, it was not easy to come up with an architecture and an algorithm that manages to converge to a point that exhibits very good performance. Moreover, we are targeting practically relevant scalable solutions. Our main problem was that scaling up to a large number of users (e.g., 100) substantially increases the stochasticity of the environment, resulting in various issues, including convergence problems. On top of that, we wanted to compare with and outperform strong baselines. We were finally able to come up with an architecture combining Deep Sets followed by a normalization that could show significantly better performance against the conventional methods. Finally using a distributional type of RL, we have observed consistency in training with a gradual improvement of the performance in the training.\", \"concerning_the_paper_weaknesses\": \"1)Indeed, the presentation of the paper was confusing in some points and as requested by other reviewers, several clarifications in the problem description were required. We created a subsection called \\\"MDP formulation and state representation\\\" where we briefly yet clearly descibe the problem considered here, the traffic generation model, its MDP formulation and what are the states-actions-rewards. That way, we passed information previously given in the appendix into the main text.\\n\\n2)For the DeepSets, in the paragraph where we explain them, we mention that not only they preserve the permutation equivariance which is an inherent property of our problem (i.e. swapping two users should lead to the corresponding swap of their allocations) but also brings a huge parameter reduction (thus, less prone to overfitting) since no parameter increase is required when scaling up the number of users.\\n\\nFor the Distributional RL, we make at first one analogy with teacher-students in order to build intuition why it might be helpful. We are fully aware though that a simplifying analogy is not enough. This is why we provided Figure 2 (Figure 3 in the revised paper). We improved the presentation (also by making the Fig 3b&3c less dense) so now it is easier to read the plots. In figure 3a we show a curve [expected] which corresponds to disregarding the distributional perspective. By using Distr. RL and adding some tricks (Dueling & Reward Scaling) it outperforms the expected. To provide further explanation on why we opt for distributional RL we provide the Fig3b&3c which shows how expected and distributional are handling the different classes. In contrast to Expected, distributional RL understands quite fast the existence of two different classes of users and steadily improves both of them.\\n\\n3)As mentioned previously we state them now clearly in a dedicated subsection.\\n\\n4)Now that the action space is described in a clearer manner and it is easier to see why it is impossible to apply DQN. The action space for full-CSI is $2^{Number Of Users}$ and for no-CSI it is continuous which, as we mention in the main text, prohibits the use of DQN. As suggested, we also implemented the TD3 version, tested on the environment and using real data (section 4.1). However, TD3 showed worse results than the Distributional RL approach and it has been omitted. 
Nonetheless, we would like to point out that our main scope is to show gains against conventional approaches, so we did not want to open up too much on the many DRL approaches existing in the literature. Finally, how time-efficient the algorithm is in terms of training is shown in Figure 3, where it can be seen that it needs around 3 million samples to go from zero to top performance. In terms of testing, we only used around one thousand parameters for the synthetic-data environment and around four thousand for the real-data experiments. In contrast, the other conventional approaches (except the exponential rule) need exponential running time.\\n\\n5) We significantly revised the paper, correcting grammatical errors and typos as well.\", \"detailed_comments\": \"We replied to most of them except for \\n1.6) It is one of our future goals to also try to increase the number of classes even more. But we mostly used this setup, driven by the direction of 5G, to add 3 classes, with one being the URLLC class (Ultra-Reliable Low-Latency Communications: short packets with stringent latency constraints and high reliability) and another being eMBB (enhanced Mobile BroadBand: the only goal is high data rates).\\n1.8) For wireless communication on the PHY-MAC layer, the only paper that we know of addressing the multi-class problem was the exponential rule, against which we compare in an environment using real measurements.\"}",
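[Editor's note: to illustrate the permutation equivariance and parameter sharing argued for in point 2) above, here is a minimal Deep Sets-style block in PyTorch: a shared per-user encoder, a pooled summary, and a shared per-user decoder, so swapping two users swaps their outputs. Layer sizes and names are illustrative assumptions, not the paper's architecture.]

```python
import torch
import torch.nn as nn

class EquivariantDeepSet(nn.Module):
    """Permutation-equivariant map from per-user features to per-user scores."""
    def __init__(self, d_in, d_hidden=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(2 * d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 1))

    def forward(self, x):                      # x: (batch, n_users, d_in)
        h = self.phi(x)                        # shared per-user encoder
        pooled = h.sum(dim=1, keepdim=True)    # permutation-invariant summary
        pooled = pooled.expand_as(h)
        return self.rho(torch.cat([h, pooled], dim=-1)).squeeze(-1)

# Parameter count is independent of the number of users, and permuting
# users permutes the outputs accordingly.
net = EquivariantDeepSet(d_in=4)
x = torch.randn(2, 10, 4)
perm = torch.randperm(10)
assert torch.allclose(net(x)[:, perm], net(x[:, perm]), atol=1e-6)
```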
"{\"title\": \"Wireless communications paper?\", \"comment\": \"The idea/Application of DRL for wireless scheduling has been investigated extensively in many papers in the wireless community. A simple search on google scholar shows lots papers doing so and some of them specifically using DDPG, MADDPG, DQN ...(for e.g., https://arxiv.org/pdf/2009.08346.pdf ; https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8896945; https://arxiv.org/pdf/1905.05914.pdf ; https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8757174 ...)\\n\\nThat being said, can the authors explain how is their technique novel from an \\\"ML\\\" point of view and not from a \\\"wireless communications\\\" point of view? For instance, what are the new ML features of your algorithm? \\n\\nOn another note, for what type of wireless services/classes does your model apply to ? \\nThanks.\"}",
"{\"title\": \"An interesting application of DRL, but the paper could be improved.\", \"review\": \"Paper Summary\\nThis paper investigated the problem of scheduling and resource allocation for a time-varying set of clients with heterogeneous traffic and QoS requirements in wireless networks. It proposed to solve this problem with distributional based DDPG with Deep Sets, and conducted experiments showing performance gains against conventional methods.\\n\\nPaper Strength\\n1.\\tThe paper considered a complex scheduling scenario, which is a hard problem by conventional optimization methods. The problem setting takes into account traffic model, geometry model, channel model, and rate model. Both full-CSI and partial-CSI scenarios are considered. \\n2.\\tThe paper adopted state-of-the-art techniques and works fine. Specifically, Distributional RL and Deep Sets for speeding up the convergence and reducing neural network parameters, respectively. \\n3.\\tThe proposed algorithm outperforms conventional combinatorial optimization methods.\\n\\nPaper Weakness\\n1.\\tThe presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. \\n2.\\tThe necessity of using techniques includes Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the illustration of Distributional RL lacks clarity.\\n3.\\tThe details of state representation are not explained clear. For an end-to-end method like DRL, it is crucial for state representation for training a good agent, as for network architecture.\\n4.\\tThe experiments are not comprehensive for validating that this algorithm works well in a wide range of scenarios. The efficiency, especially the time efficiency of the proposed algorithm, is not shown. Moreover, other DRL benchmarks, e.g., TD3 and DQN, should also be compared with. \\n5.\\tThere are typos and grammar errors.\\n\\nDetailed Comments\\n1.\\tSection 3.1, first paragraph, quotation mark error for \\\"importance\\\".\\n2.\\tAppendix A.2 does not illustrate the state space representation of the environment clearly.\\n3.\\tThe authors should state clearly as to why the complete state history is enough to reduce POMDP for the no-CSI case.\\n4.\\tSection 3.2.1: The first expression for $J(\\\\theta)$ is incorrect, which should be $Q(s_{t_0},\\\\pi_\\\\theta(s_{t_0}))$.\\n5.\\tThe paper did not explain Figure 2 clearly. In particular, what does the curve with the label \\\"Expected\\\" in Fig. 2(a) stand for? Not to mention there are multiple misleading curves in Fig. 2(b)&(c). The benefit of introducing distributional RL is not clearly explained. \\n6.\\tIn Table 1, only 4 classes of users are considered in the experiment sections, which might not be in accordance with practical situations, where there can be more classes of users in the real system and more user numbers.\\n7.\\tIn the experiment sections, the paper only showed the Satisfaction Probability of the proposed method is larger than conventional methods. The algorithm complexity, especially the time complexity of the proposed method in an ultra multi-user scenario, is not shown. \\n8.\\tThere is a large literature on wireless scheduling with latency guarantees from the networking community, e.g., Sigcomm, INFOCOM, Sigmetrics. Representative results there should also be discussed and compared with. \\n\\n======\", \"post_rebuttal\": \"My concern regarding the experiments remains. 
I will keep my score unchanged.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Promising Application of DRL to a classic wireless problem\", \"review\": \"This paper addresses the long standing problem of scheduling and resource allocation in wireless networks using modern Deep Reinforcement Learning techniques.\\nIt is clearly written and easy to follow but suffers from several minor typos.\\nThe methodology is well justified and thoroughly motivated.\\nExperimental evaluation seems thorough and provides convincing results.\", \"the_mdp_is_not_described_thoroughly_enough\": \"What is your reward, action space, state space, observations?\\nHow is the allocation deadline incorporated into the reward?\\nIt would be nice to have these details listed in a sub-section somewhere in Section 3.\\n\\nRegarding the evaluation, \\\"synthetic\\\" traffic patterns are used.\\nCan you use real world traces with a simulator for evaluation (similar to https://github.com/hongzimao/pensieve)?\\nAlso real world applicability is not addressed?\\nWill the inference times for the deep network lead to any significant overheads when measured at the time scale of wireless communications?\\nOverall, the evaluation setup seems preliminary to me and needs more work to provide assurance of real world usability.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper investigates the deep reinforcement learning method to schedule the traffics with heterogeneous requirements under dynamic channel environments. The proposed method is compared with its lower and upper bound methods, i.e., so called myopic Knapsack and oracle ILP. It is clear that the proposed method has the merits compared with the lower and upper bound methods in terms of performance and implementation.\", \"review\": \"Basically, it seems that the proposed method is interesting and meaningful. The scheduling problem in this paper is based on the analogy to a server having a water pitcher, and the deep reinforcement learning approach for the scheduling problem has been designed. However, the scheduling problem in wireless networks is a very famous issue. Of course, applying DRF to it is quite interesting. However, the authors need to describe the conventional well-known scheduling algorithms and compare them with the proposed scheme (now, the current paper only focuses on applying the DRF to the scheduling and evaluating its performance in aspects of an optimization problem.). Further, typically, in scheduling problems, efficiency (total data rate) and fairness are the key factors and it is needed to describe the relationship between these conventional performance metrics and the satisfaction probability.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting application but has clarity issues and perhaps not a good fit for ICLR\", \"review\": \"The authors propose a deep RL solution for the communication problem of user scheduling and resource allocation. The deep RL solution uses deterministic policy gradient + quantile regression + dueling + deep sets and the authors demonstrate it outperforms classical solutions on benchmark tasks.\", \"recommendation\": \"I quite frankly have no prior knowledge of the task this paper aims to solve. It didn't come across as a grandstanding AI challenge in any capacity (feel free to debate this) so I'm looking at this paper from the angle of significance to the deep RL community. In general, the questions I'm asking are: Does this paper introduce a novel deep RL algorithm? Is the knowledge produced by this paper generalizable to other problems or algorithms? Does this paper provide value to anyone who isn't concerned with the specific application? \\n\\nAs of now, I feel like the answer is no to all three and so I would recommend rejection to this conference. However, there are presumably venues which are a better fit and I hope the authors consider submitting elsewhere. \\n\\nThere were other issues with the paper. In general, clarity and organization were a big issue for me. The authors were very rigorous with their supplementary material, so I believe most of the information is there, but was not presented in an easily digestible manner.\", \"strengths\": [\"Thorough supplementary material and code is provided.\", \"The use of deep RL to tackle the problem felt well-motived and a good fit.\", \"The performance of the agent seems strong but I'm not clear on the significance of some of the results.\"], \"weaknesses\": [\"I think the problem set up was clear in the sense that I understood the overarching objective. However, the specifics of the problem, specifically in the context of RL was not. The paragraph about 3.2 is a generic description of the RL problem and left me wondering the connection to the actual application. I realize many of the details are contained in the supplementary material but the statement \\\"the problem can be modeled as an MDP\\\" was not defended & the following description did not clarify the problem statement. For example, immediately after, in 3.2, \\\"high variance randomness\\\" is discussed but its not clear to me why this is the case or how this randomness affects the problem- reward? transitions?\", \"The structure of the paper did not feel helpful to me. 3.2 is categorized into \\\"Policy Network\\\" and \\\"Value Network\\\" for somebody who is comfortable with RL a lot of the details felt unnecessary but more importantly, this organization doesn't provide a solid presentation of the algorithm. Somebody who is interested in the application and is not an RL expert, won't necessarily follow why \\\"Policy network\\\" is being presented or what that even means necessarily. There isn't a clear overview of the algorithm.\", \"Novelty of the algorithm is low in the sense that it is a combination of prior, existing ideas.\", \"I felt like many of the algorithmic choices were not well-justified. For example, the use of QR is justified by an analogy? The use of the dueling architecture also seems unusual when the authors also propose a much simpler solution. 
It was also unclear what exactly was meant by \\\"The main problem was that the distribution Z was far away from 0 making it very difficult for the policy network to well approximate them\\\".\"], \"minor_comments\": [\"There are a few latex issues with reversed quotations.\", \"\\\"In Figure 2 we provide additional element to support the choice\\\" -> an additional\", \"The objective of Figure 2 is nice, but it's confusing to have two sets of experiments presented in the same graph. Class is not explained in the description of the graph, and the significance of the graph is not explained in the figure description.\", \"**Post-Rebuttal\", \"I appreciate the authors taking the time to respond. Unfortunately, my belief that this paper is not strong enough from the deep RL perspective to warrant acceptance has not changed. If other combinations of tools do not work, then the authors should improve this justification with ablation studies or stronger theoretical motivation. I\\u2019ll add that my score is not influenced by my concern that this paper may not be a good fit for ICLR (I leave that choice to the AC).\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
N5Zacze7uru | Neural Lyapunov Model Predictive Control | [
"Mayank Mittal",
"Marco Gallieri",
"Alessio Quaglino",
"Seyed Sina Mirrazavi Salehian",
"Jan Koutnik"
] | With a growing interest in data-driven control techniques, Model Predictive Control (MPC) provides a significant opportunity to exploit the surplus of data reliably, particularly while taking safety and stability into account. In this paper, we aim to infer the terminal cost of an MPC controller from transitions generated by an initial \emph{unknown} demonstrator. We propose an algorithm to alternately learn the terminal cost and update the MPC parameters according to a stability metric. We design the terminal cost as a Lyapunov function neural network and theoretically show that, under limited approximation error, our proposed approach guarantees that the size of the stability region (region of attraction) is greater than or equal to the one from the initial demonstrator. We also present theorems that characterize the stability and performance of the learned MPC in the presence of model uncertainties and sub-optimality due to function approximation. Empirically, we demonstrate the efficacy of the proposed algorithm on non-linear continuous control tasks with soft constraints. Our results show that the proposed approach can improve upon the initial demonstrator also in practice and achieve better task performance than other learning-based baselines. | [
"optimal control",
"mpc",
"lyapunov neural networks",
"safe-learning",
"safety"
] | Reject | https://openreview.net/pdf?id=N5Zacze7uru | https://openreview.net/forum?id=N5Zacze7uru | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"s4YNvCtyPfS",
"uTAbStcEzcg",
"sJGzjzxFz-p",
"P6BsA-U2iOk",
"Mefbb7xFvul",
"hZ2kcSH-Qpv",
"vMhoPuyr4KM",
"aaDf7vIN4SM",
"oXjMeXt9y3v",
"UO643gGjBCC",
"aroXo9E7ags",
"eK-Vvpg17SP",
"3J_u0t9-3tX",
"I8TsjF8osRW"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040373750,
1606146466027,
1606144453352,
1606143585307,
1605963699636,
1605636692688,
1605636596287,
1605636047432,
1605610786481,
1605609754044,
1605608569432,
1604289054874,
1603947781933,
1603654440357
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3796/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3796/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3796/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors propose an MPC based approach for learning to control systems with continuous state and actions - the dynamics, control policy and a Lyapunov function are parameterized as neural networks and the authors claim to derive stability certificates based on the Lyapunov function.\\n\\nThe reviewers raised several serious technical issues with the paper as well as the lack of clarity in the presentation of the main technique in the initial version of the paper. While the clarity concerns were partially addressed during the rebuttal, the technical concerns (in particular those raised by reviewer 1) remain unaddressed - the stability certificate derived is questionable due to the fact that sampling based approaches to certifying that a function is a valid Lyapunov function are insufficient to derive any stability guarantee. Further, the experimental results are only demonstrated on relatively simple dynamical systems. Hence I recommend rejection. \\n\\nHowever, all reviewers agree that the ideas presented in the paper are potentially interesting - I would suggest that the authors consider revising the paper to address the feedback on technical issues and submit to a future venue.\"}",
"{\"title\": \"Further clarification on scope and novelty\", \"comment\": \"We thank again the reviewer for their feedback and have revised the paper accordingly.\\n\\nWe would like to further reinforce the case for the paper on novelty, which we believe goes beyond the extension of the theorems from the related work (also a contribution) but lies in the algorithmic development: the use of alternate learning with an 'epsilon extended' target ROA, the (required) cross-validation and formal verification to extend the ROA of a learned controller from an unknown demonstrator, despite the (unknown but limited) model error and the use (first in literature) of a NN Lyapunov function as the terminal cost for MPC. Lastly, the use of one step unlabelled transitions instead of long labelled sequences for learning the terminal cost. \\n\\nWe hope the reviewer could still reconsider their score. We are available for further interaction and amendments to the paper if required by the reviewer.\"}",
"{\"title\": \"Future work and clarifications on value learning\", \"comment\": \"We thank again the reviewer for their feedback. Following discussions with all the reviewers, we believe we could further highlight in the paper that the proposed approach could also be extended in future work by complementing the Lyapunov loss with a more \\\"classic RL\\\" value function learning loss. The rationale behind this would be to balance off the different terms in the error bound of Theorem 2 and perhaps tradeoff the stability (contraction) property with infinite-horizon optimality. We think this is a very interesting avenue as well as choosing the best horizon length that minimises the suboptimality bound which was proposed by Reviewer 4. We will make further amendments as soon as possible to the paper to emphasize this. In the meantime, we hope the reviewer could reconsider their score. We remain available to further discuss and amend the paper accordingly.\"}",
"{\"title\": \"Thank you for increasing the score and for the additional feedback\", \"comment\": \"We would like to thanks the reviewer for increasing their score and for their relevant feedback.\\n\\nThe reviewer is correct about the typo (we will correct as \\\"with cost (3) subject to constraints (2)\\\") and the third term in our bound being not monotonic. We will correct the claim. It was indeed the second term in the bound that motivated us for the choice of a short horizon. It is also correct and a good suggestion that this result could serve to motivate the development of a method for selecting the horizon length; for instance if an estimate is provided for the model and the value errors. We also believe this could be performed in future work. We also think the approach could be extended in the future by complementing the Lyapunov loss with a loss used for value function learning to possibly mitigate the value estimation error term. We have not investigate this yet because we wanted to demonstrate the Lyapunov function part but it appears of interest. This also reconnects to the points raised by Reviewer 1. We will propose these items for future work in the additional space. We will make these amendments as soon as possible and upload a further revision.\"}",
"{\"title\": \"Thank you for your answers\", \"comment\": \"1. Performance with surrogate models.\\n\\nI am satisfied with the authors' answer, and I now find the revised version of the paragraph much clearer.\\nI think there only remains a typo in \\\"For a task specified by the stage cost in (10)\\\", where (11) or (3) was probably meant? ((10) does not contain any stage cost)\\nI now understand that Theorem 2 effectively bounds the MPC suboptimality with respect to the model error $\\\\mu$, true/surrogate dynamics smoothness $L_f$, and the error in the terminal cost $V$ used for planning with respect to $V^\\\\star$. I also appreciate that the proof provided in Appendix A is quite easy to read.\\n\\nHowever, it is not obvious to me why the authors state that \\\"Theorem 2 shows that a discount $\\\\gamma$ or a shorter horizon $N$ can mitigate model errors\\\". Indeed, considering only the terms in the bound which depend on $\\\\mu$, it is true that the second term strictly increase with $N$, but the third term is in $\\\\gamma^N(1-L_f^N)$ which is not monotonous with $N$ in general, and converges to $0$ as $N\\\\to\\\\infty$ if $L_f<1/\\\\gamma$.\\n\\nAnd beside model errors, increasing $N$ also helps mitigating the value function error $\\\\epsilon$ (first term), which has every reason to be large since, as the authors confirmed, $V$ learned by (5) has no particular reason to be close to $V^\\\\star$.\\n\\nIn any case, I think that this result constitutes an interesting research perspective for further work, e.g. to derive an algorithm to adaptively select the optimal horizon N which adequately balances value estimation errors and model errors, if these can be reliably estimated.\\n\\n\\n2. Novelty of the theoretical results\\n\\nI thank the authors for clarifying their claimed contributions, beyond the original work of Limon et al. (2003, 2009). I understand that they mainly adjust the setting (discount, $\\\\lambda$-contraction of $V$) and then follow the original proofs. This seems quite incremental, but it is not necessarily problematic, especially since this theoretical result provides practical insights on the relationship between the scaling of $V$, the size of the associated ROA, and the interplay of model errors and the prediction horizon, which sheds light on the experimental results.\\n\\n3. Regarding the expansion of the demonstrator safe region\\n\\nI understand that resorting to function approximation can degrade results compared to the theoretical setting, and I appreciate that this is mitigated by the ability to perform online safety verifications by means of the cross-validation procedure. I initially thought that the authors claimed that the ROA was guaranteed to increase, but I now have a better understanding that Theorem 1 rather provides the existence of the ROA, and that the attempt of ROA expansion is more experimental: leverage theoretival insights and iteratively increase the MPC parameter $\\\\alpha$ with the hope of increasing $X_s$.\\n\\n\\nThe authors have properly addressed my concerns, and I have updated my score to reflect this.\"}",
"{\"title\": \"Reviewer4 Response Part 2\", \"comment\": \"#### 3. Regarding the expansion of the demonstrator safe region\\n\\nAs stated on page 2, the increase in the ROA is possible if the model is perfect, the MPC solver is perfect (K=argmin), and the Lyapunov function and the safe set are known exactly. In practice, the presence of function approximation limits the achievable results. That\\u2019s why on page 2 we also state that \\u201cwe aim to match or enlarge\\u201d the ROA of the unknown demonstrator. Further, we add to it that \\u201cassumptions are relaxed and results are presented for the use of function approximation\\u201d. The results in Theorems 1 and 2 don\\u2019t claim that we increase the ROA using our proposed algorithm. Instead, we claim that the ROA exists for a given discount, terminal penalty $\\\\alpha$, and model error bound. We have updated Theorem 1 with a computation of the model error bound. For these reasons, the proposed algorithm cannot guarantee that the size of the stable region will always increase. We try to enforce that by introducing the factor $(1+\\\\epsilon)X_s$ in the loss for the MPC parameter $\\\\alpha$, which aims to extend $X_s$ by a factor $\\\\epsilon$ (like 10%) at each iteration. This is conditioned to the result of the $\\\\alpha$ search as well as the training of $V$. While in theory the optimal value for the parameter $\\\\alpha$ can be as large as possible and will lead to a ROA increase, in practice, there is a limit beyond which the problem can become ill-conditioned for a particular MPC solver. That is why we perform the search only within a fixed range $[0, a_{\\\\textrm{max}}]$, as mentioned in the paragraph on \\u201cMPC auto-tuning\\u201d on page 5. \\n\\nTherefore, in practice, yes, the ratio of verified points can decrease over iterations. This is also why we perform cross-validation on our Lyapunov function and we choose the best iterations. We performed only three iterations due to limited computation but reported all of them. As can be seen from Table 1, for the pendulum, two iterations reached the largest achievable result, after which it is the same. However, for the car example, it does not improve at all times. The third iteration in Table 3 shows a decrease in the verifiable region when the surrogate model is used. This is not the case when a perfect model is used, which seems to confirm the limits from function approximation. We always pick the best iteration from these tables as well as cross-validating the Lyapunov function. We will add this explanation to the revised upload of the paper.\\n\\n\\nReferences\\n\\n1. Limon, D., T. Alamo, and E. F. Camacho. \\\"Stable constrained MPC without terminal constraint.\\\" Proceedings of the 2003 American Control Conference, 2003.. Vol. 6. IEEE, 2003.\\n2. Limon, D., et al. \\\"Input-to-state stability: a unifying framework for robust model predictive control.\\\" Nonlinear model predictive control. Springer, Berlin, Heidelberg, 2009. 1-26.\"}",
"{\"title\": \"Reviewer4 Response Part 1- Thank you for the feedback\", \"comment\": \"We thank the reviewer for thoroughly reading the paper and the positive and constructive feedback. We particularly appreciate that our work was considered \\u201cclear\\u201d, \\u201cwell-motivated\\u201d, \\u201chonest\\u201d, \\u201ctechnically correct\\u201d and its relevance was recognized. We have tried to address all the concerns by the reviewer in the following but are open to further discussions. We will fix the typos and make the required additions to the revised upload of the paper. We hope we convince the reviewer to possibly further improve their score.\\n\\nPlease find our comments on the specific questions/concerns that have been raised:\\n\\n\\n#### 1. Regarding performance with surrogate models\\n\\n\\nYes, by using the Lyapunov function as the terminal cost, we want to approximate the tail of an infinite-horizon control formulation that uses the stage cost defined in (3). We do not expect the error to be small with respect to the correct value function. The error can, however, be upper-bounded since all considered functions, the system dynamics, are Lipschitz and the constraints sets are bounded. Lemma 2 aims to break down the different components that affect performance and communicate the role of horizon length in the presence of model uncertainty. It would be possible in principle to combine the approach with value estimation to reduce the error but this is beyond the scope of the paper. The reason for not using the stage cost directly is that the unknown controller might not be optimal with respect to this stage cost (reward) but it is generally an engineered working solution. This choice was highly beneficial for instance in the car scenario, where our initial long horizon MPC used to generate the data produced suboptimal solutions due to the optimizer tolerances, correct decrease rate cannot be guaranteed, differently from an LQR. Moreover, the long horizon MPC trajectories converge only to a neighborhood of the origin for the car. Despite this, our algorithm learns a Lyapunov function that can provide the system to even outperform the initial MPC, as kindly pointed out by the reviewer, in terms of convergence to a smaller neighborhood. \\n\\n#### 2. Regarding the novelty of the theoretical results \\n\\nIn the revised paper, we will rectify calling the theoretical results as \\\"lemmas\\\" instead of \\\"theorems\\\". \\n\\nWe mainly extend existing results in Theorem 1 by considering the discount factor in the formulation and the contraction factor in the Lyapunov inequality instead of the loss. For the rest of Theorem 1, we build and proceed upon the same steps of the cited articles from Limon et al. (2003, 2009). As also remarked by the reviewer, we state the same on page 4 in the paragraph on \\u201cstability and safety\\u201d. We did not mention the contraction instead of the loss but we will make this clearer in the revised upload. \\n\\nOverall, Theorem 1 aims to motivate our proposed algorithm, which is our main contribution in this work. We thought it was important, however, to state the theorem to mark the relevance of learning a Lyapunov function instead of just a value function surrogate. In Theorem 2, we instead aim to close the gap with Lowrey et al. (2018) in terms of performance evaluation. This is also useful to understand the contribution of the horizon length and model error. With a perfect model, a longer horizon is indeed beneficial as claimed by Lowrey et al. 
(2018), however, this is true only for a perfect model. In this case, it is also better in terms of the size of the stable region of the MPC. However, having a Lyapunov function that induces a larger safe set allows one to employ a shorter horizon and retain the same stable region with a shorter horizon. We reinforce the statement by demonstrating that, with function approximation in the model, a shorter horizon can become much more beneficial also in terms of optimality. The bound in Theorem 2 can be used to estimate the effect on convergence by model error and horizon length. We will clarify this further in the revised upload.\"}",
"{\"title\": \"Reviewer2 Response Part 2\", \"comment\": \"#### 5. Comparison to work by Koller et al. (2018)\\n\\nKoller et al. (2018) rely on Gaussian Process (GP) and RHS kernels to obtain conservative closed-form results. GPs typically do well in the low-data regime and are generally limited to low-dimensionality.\\nWhile we consider the work by Koller et al. (2018) as an important contribution (and will add to the related works), we believe comparing our approach to it is not straightforward since the two look at orthogonal problems. In our work, we train the dynamics model (a NN) and Lyapunov network offline on a larger dataset. We don\\u2019t perform exploration or online learning. On the other hand, Koller et al. (2018) focus more on safe exploration with very few data points collected online which are used to update a GP model. For a comparison of NNs and GPs in the context of online learning, we instead refer the reviewer to Gal et al. (2016).\", \"references\": \"1. Gaitsgory, Vladimir, Lars Gr\\u00fcne, and Neil Thatcher. \\\"Stabilization with discounted optimal control.\\\" Systems & Control Letters 82 (2015): 91-98.\\n2. Gal, Yarin, Rowan McAllister, and Carl Edward Rasmussen. \\\"Improving PILCO with Bayesian neural network dynamics models.\\\" Data-Efficient Machine Learning workshop, ICML. Vol. 4. 2016. URL: http://mlg.eng.cam.ac.uk/yarin/PDFs/DeepPILCO.pdf \\n3. Grune, Lars, and Anders Rantzer. \\\"On the infinite horizon performance of receding horizon controllers.\\\" IEEE Transactions on Automatic Control 53.9 (2008): 2100-2111.\\n4. Koller, Torsten, et al. \\\"Learning-based model predictive control for safe exploration.\\\" 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018.\\n5. Limon, D., T. Alamo, and E. F. Camacho. \\\"Stable constrained MPC without terminal constraint.\\\" Proceedings of the 2003 American Control Conference, 2003.. Vol. 6. IEEE, 2003.\\n6. Lowrey, Kendall, et al. \\\"Plan online, learn offline: Efficient learning and exploration via model-based control.\\\" arXiv preprint arXiv:1811.01848 (2018).\\n7. Richards, Spencer M., Felix Berkenkamp, and Andreas Krause. \\\"The lyapunov neural network: Adaptive stability certification for safe learning of dynamical systems.\\\" arXiv preprint arXiv:1808.00924 (2018). URL: https://arxiv.org/abs/1808.00924\"}",
"{\"title\": \"Reviewer1 Response Part 2\", \"comment\": \"#### 4. Regarding boundedness of model error\\n\\nWe justify the boundedness assumption on the surrogate dynamics model by assuming local Lipschitz-ness and bounded domains. In practice, we need milder conditions (like local continuity) to have a bounded error. The existence of these margins is important to characterize the performance of a controller. \\n\\nIn our algorithm, we account for the model error implicitly through the formulation of the MPC. As discussed in Theorem 1.6, the ISpS stability gain of the MPC (which is related to the performance bound of Theorem 2) increases monotonically with the model error and horizon length. In our experiments, we show that learning a Lyapunov function makes it possible to reduce the horizon length of the MPC to as low as one, thereby mitigating the accumulation of errors due to an imperfect dynamics model. We will clarify the relation between Theorems 1 and 2 in the updated version.\\n\\n#### 5. Regarding the loss functions used for training\\n\\nThe intuition behind the loss function used for training the Lyapunov NN is justified more clearly in prior work by Richards et al. (2018). We would like to refer the reviewer to the paper as well as the comments provided to Reviewer 2 for more details. In the existing literature, we are not aware of any other loss functions that allow learning a safe Lyapunov level set. However, we are open to suggestions regarding the same from the reviewer.\\n\\n#### 6. Comparison between value function and Lyapunov function\\n\\nWe disagree with the reviewer here. A \\u2018learned\\u2019 value function surrogate is not always an exact value function. Thus, it is not guaranteed to be a valid Lyapunov function unless one can prove the learning convergence, which is hardly possible using function approximation. In principle, one can learn a value function and apply formal verification to it; however, we believe incorporating the stability verification directly in the learning objective (when possible) is more direct. Learning a value function is not simple in general, and large errors could occur for simple biases, which are not essential in the context of the Lyapunov function, where all that matters is the decrease and not the value itself. This decrease is vital for safety and cannot be guaranteed by value learning using function approximation and finite computation, as far as the authors are aware.\\n\\n#### 7. Regarding experiments details\\n\\nDue to page constraints, we provided detailed descriptions of the environment models in Appendix C. However, we state in the main paper that the pendulum is torque-limited and can be controlled by starting only within $\\\\pm60$ degrees from the upper position. This is achieved better by our controller in comparison to other baselines. The car may seem simple however its non-linearity and the constraints make it particularly challenging for the baselines to solve.\", \"references\": \"1. Bobiti, Ruxandra Valentina. Sampling\\u2013driven stability domains computation and predictive control of constrained nonlinear systems. Diss. PhD thesis, 2017. URL: https://pure.tue.nl/ws/files/78458403/20171025_Bobiti.pdf\\n2. Limon, D., T. Alamo, and E. F. Camacho. \\\"Stable constrained MPC without terminal constraint.\\\" Proceedings of the 2003 American Control Conference, 2003.. Vol. 6. IEEE, 2003.\\n3. Limon, D., et al. 
\\\"Input-to-state stability: a unifying framework for robust model predictive control.\\\" Nonlinear model predictive control. Springer, Berlin, Heidelberg, 2009. 1-26.\\n4. Richards, Spencer M., Felix Berkenkamp, and Andreas Krause. \\\"The lyapunov neural network: Adaptive stability certification for safe learning of dynamical systems.\\\" arXiv preprint arXiv:1808.00924 (2018). URL: https://arxiv.org/abs/1808.00924\"}",
"{\"title\": \"Reviewer1 Response Part 1- Thank you for the feedback\", \"comment\": \"We thank the reviewer for their feedback and hope that we address the concerns raised in our reply.\\nOur work addresses the problem of designing a controller to maximize the stability of the closed-loop system. The objective in this work is to combine learning from data and control theory to train a controller that has provable safety certificates in terms of Lyapunov functions.\\n\\nRegarding clarity, the paper borrows terms and definitions from control theory which are instrumental to motivate the proposed algorithm. While it is not practical to write explanations to all the standard results in control systems and Lyapunov stability, we have tried our best to elucidate the relevant background in Section 2. For more information on Lyapunov-based control methods and certification, we would like to refer the reviewer to Bobiti (2017) and Limon (2003, 2009) which we have also cited in the paper. We are open to further suggestions by the reviewer on how to improve the readability of the work.\\n\\nPlease find our comments on the specific questions/concerns that have been raised:\\n\\n#### 1. Regarding the problem statement\\n\\nRestating the approach rationale specified on page 2, our work aims to match or enlarge the safe region of an unknown controller. We aim to design a stabilizing controller using data collected (one-step demonstrations) from an unknown controller (demonstrator) such that the largest possible stability region is obtained. In an ideal case, the region obtained by the new controller should match or extend the one from the original unknown one (demonstrator). However, due to function approximation, this may not always be possible. In this work, we offer a framework to produce a verifiably safe new controller while at the same time attempting to match the size of the stable region of the demonstrator as much as possible.\\n\\n#### 2. Regarding learning of the Lyapunov function in the algorithm\\n\\nAs specified in Section 3.1, we design the Lyapunov neural network (NN) such that it is Lipchitz and satisfies the Lyapunov conditions (4). This is described in more detail in the work from Richards et.al (2018). \\n\\nProvided that condition (5) is verified, the stability guarantees provided in Theorem 1 (formerly called Lemma 1) are that for an MPC which uses the Lyapunov NN function as its terminal cost. Proving that a NN trained via SGD is \\u201cexact\\u201d at inference time or bounding its test error during training are open areas of research. As stated on page 3, in our work, we use a-posteriori sampling-based verification of the neural Lyapunov function to verify (5) at each stage of our alternate learning algorithm. This provides a high probability certificate that the network is a Lyapunov function according to the conditions (4) and (5), as described in Bobiti (2017). Due to page constraints, a detailed description of the algorithm used for verification is specified in Appendix E.\\n\\n#### 3. Regarding robustness margins to model errors\\n\\nWe thank the reviewer for bringing this to our notice. In the appendix, we have shown a bound for the maximum model error for which the closed-loop system can retain a given Input-to-State Stability (ISS), i.e. robustness to additive errors. Due to space limitations, we had not included this bound as part of the theorem in the main paper. 
However, we recognize that it would be beneficial to state this result and will do so in the updated version of the paper.\\n\\nThe bound on the maximum model error for ISS tells us that, for a stronger contraction factor and a greater safe set size, we can proportionally tolerate more uncertainty on the model. However, the effect on the stability performance of the controller is highlighted in Theorem 2 (formerly Lemma 2), where we discuss how the model error affects the final radius of convergence of the cost (and implicitly the control error). The bound presented there shows that it is beneficial to decrease the horizon length if the uncertainty in the model is large and the system to be controlled has large Lipschitz constants. In our experiments, we use this result in designing the controller by reducing its horizon length to one and still showing that ensures stability.\"}",
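A minimal sketch of the sampling-based verification idea referenced in point 2 above: draw states from the candidate level set and check the one-step decrease condition. The function names, the vectorized numpy-style interface, and the contraction factor `lam` are illustrative assumptions; the certified procedure with formal high-probability guarantees is the one in Bobiti (2017) and the paper's Appendix E.

```python
import numpy as np

# Illustrative Monte-Carlo check of the one-step decrease condition
# V(f(x)) <= lam * V(x) on the candidate level set {x : V(x) <= c}.
# V, f, and sample_states are assumed vectorized placeholders.
def sample_verify(V, f, sample_states, c, lam=0.99, n=10_000):
    xs = sample_states(n)                  # i.i.d. samples from the state domain
    xs = xs[V(xs) <= c]                    # keep only samples inside the level set
    ok = V(f(xs)) <= lam * V(xs)           # one-step Lyapunov decrease check
    return float(np.mean(ok))              # fraction of verified points
```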
"{\"title\": \"Reviewer2 Response Part 1- Thank you for the feedback\", \"comment\": \"We thank the reviewer for their constructive feedback and are pleased to hear that our proposed algorithm was found interesting. We address the raised points in the following reply and will make the necessary changes in the uploaded version as well. We hope these answer the remarks by the reviewer and convince them to improve the score. We are of course open to further iterations and amendments to the paper.\\n\\nPlease find our comments on the specific questions/concerns that have been raised:\\n\\n#### 1. Regarding the novelty of the work\\n\\nAs mentioned in the paper, we extend the theoretical results from Limon et al. (2003) to the discounted setting and the use of lambda-constractive Lyapunov functions; those from Lowrey et al. (2018) are extended to the uncertain model setup. The theorems presented in our paper aim to motivate the proposed algorithm which we believe has never been presented in the literature. \\n\\n#### 2. Regarding loss function for learning Lyapunov NN:\\n\\nThe loss proposed in our paper is similar to the one by Richards et al. (2018). The first part encourages the function to decrease over trajectories from within the estimated safe set. The second part is aimed at estimating the function level that defines the safe set. The main difference in this loss function from that by Richards et al. (2018) is in this second part. Instead of using stability labels obtained by performing a forward propagation of T steps and verifying convergence, our algorithm only performs a unitary time-step forward propagation. This is mentioned at the beginning of page 6 in Section 3.3, however, we will reinforce the statement in the new upload. We use the Lyapunov network itself to generate pseudo-labels (stability certificate via $sign(\\\\Delta V)$). This is more data-efficient as we don\\u2019t need to have all trajectories reach completion, and is less prone to error accumulation than using a long-horizon simulation with an imperfect surrogate model (Richards et al. (2018) use the true dynamics model for forward propagation). \\n\\nBesides that, in the training procedure, we verify the Lyapunov network on a validation dataset and perform cross-validation based on the number of verified and not verified points. We also perform a posteriori formal verification (the algorithm for this is presented in Appendix E). We will make these points clearer in the updated version.\", \"notation_wise\": \"$ReLU(x)=max(x,0)$ and $sign(\\\\cdot)$ is the sign of the quantity which returns either 1 or -1. We will clarify this in the paper as well.\\n\\n#### 3. Regarding surrogate dynamics model: \\n\\nWe do not require the surrogate dynamics model to be trained in a particular way. Further, it is not essential that the model is parametrized by a neural network, as long as its function class has Lipschitz continuity. Since all the sets are bounded, the model\\u2019s error bound can be inferred based on the Lipschitz constants (as discussed on page 3). However, in our algorithm, we assume that the learned dynamics model is given and its error bound is known. \\n\\nWe will add a worst-case one-step error bound for ISS of the controller in the revised version of Theorem 1 (formerly Lemma 1). 
This margin was previously stated in the proof of the theorem (in Appendix A), however, we will move it to the main paper.\\nDue to page constraints, we specified the details about the system models and the training of the dynamics model in Appendix D, while providing the information about the setup in the main paper (Section 4). The NN-dynamics surrogate model $\\\\hat{f}$ is trained using transition tuples (this is stated above eq. (8) on page 3). The data to train the model is collected using a random policy as typically done in system identification. This is mentioned on pages 6-7 for both experiments. \\n\\n#### 4. Regarding related work on MPC suboptimality:\\n\\nWe thank the reviewer for pointing us to this important paper. We cited a different work from these authors on the stability of optimal control with discount factors (Gaitsgory et al., 2015) which is closely related to our discounted setup. The work by Gruene and Rantzer (2008), however, is indeed also related and offers a key set of results for MPC suboptimality under the assumption that the loss satisfies an exponential controllability condition. In our paper, we consider a discounted setting which is more common in RL than in controls (because of stability limitations as we highlight in Theorem 1). Our results are built upon those from Lowrey et al.( 2018). However, we will add the missing reference (Gruene and Rantzer, 2008) to our related work section in the revised upload.\"}",
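The two-part loss structure described in point 2 above can be sketched as follows. This is a hedged reconstruction from the discussion only, not the paper's exact eq. (10): the weighting `kappa`, the contraction factor `lam`, and the way the level `c` is parameterized are all assumptions; only the structure (a one-step decrease term plus a level term driven by sign(ΔV) pseudo-labels) is taken from the text.

```python
import torch

# Illustrative sketch of the two-part Lyapunov loss discussed above.
# V maps states to scalars, f_hat is the surrogate dynamics, and c is the
# level defining the safe-set estimate {x : V(x) <= c}.
def lyapunov_loss(V, f_hat, x, c, lam=0.99, kappa=1.0):
    v = V(x)
    dv = V(f_hat(x)) - lam * v                     # one-step decrease residual
    inside = (v <= c).float()
    decrease = (inside * torch.relu(dv)).mean()    # part 1: enforce decrease inside the set
    labels = torch.sign(-dv.detach())              # pseudo-label: +1 where V decreases
    level = torch.relu(labels * (v - c)).mean()    # part 2: fit the level c to the labels
    return decrease + kappa * level
```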
"{\"title\": \"Paper presents solid control theoretical foundation, though unsure how novel the ideas are\", \"review\": \"In this paper the author proposed an MPC algorithm in which both the dynamics function and the Lyapunov function are parameterized with neural networks.. Specifically leveraging the results of Lyapunov networks (2018 CORL paper: https://arxiv.org/abs/1808.00924) for learning Lyapunov functions, the authors derived an MPC algorithm for quadratic cost/reward problems and also proved the stability, robustness, and sub-optimality performance. To demonstrate the effectiveness of the algorithms, the authors also evaluated this approach on the simple inverted pendulum and car kinematics tasks.\\n\\nIn general I find this paper presents a comprehensive results of a model-based control method that is very popular in the control theory community. To justify their algorithms they also proved several standard properties (stability, sub-optimality performance) in control, which I appreciate their efforts. However, I do have severals questions/concerns regarding the details of their approach:\\n\\n1) The presentation of the loss function of Lyapunov network is not easy to parse, especially there are couple terms that contain specific mathematical operators (sign, ReLU). Can the authors explain each term in the loss and why such choices of loss terms are necessary. Is this loss function identical to the Lyapunov network 2018 CORL paper?\\n\\n2) From the main paper it is unclear how the NN-dynamics model \\\\hat f is learned. Does it just train based on prediction loss? More importantly, while the MPC algorithm uses the learned model how does the dynamics model error affect the stability/robustness/performance bounds of the control algorithm? I cannot immediately find this information in lemma 1 and lemma 2, which makes me worried about the correctness of these results. (Unfortunately I haven't had a chance to check the appendix for proofs)\\n\\n3) Having sub-optimality performance for MPC algorithms is a nice result, as not many MPC algorithms have performance guarantees. However these kind of results are also not new (for example, see https://ieeexplore.ieee.org/document/4639448). How does the MPC performance result here compared with the ones by Grune and Rantzer? \\n\\n4) Among various safe MPC papers, how does the proposed one in this paper compared with this safe MPC algorithm: https://arxiv.org/pdf/1803.08287.pdf, which is also proposed by Andreas Krause's group (that proposed the Lyapunov network)? At least experimentally how does the proposed algorithm compare with other safe MPC baselines (such as the one above) on the standard benchmark tasks (for example the above work also tested the algorithms on the pendulum task).\\n\\nOn the overall, I find this paper's algorithm interesting. However, there are several technical question listed above, and one high-level concern is its novelty. Without further discussions, it appears to me that the work combines several existing results on Lyapunov network and MPC, for which the contribution is rather incremental.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Poorly motivated and unclear contributions\", \"review\": \"This paper proposes an MPC algorithm based on a learned (neural network) Lyapunov function. In particular, they learn both the Lyapunov function and the forward model of the dynamics, and then control the system using an MPC with respect to these models.\\n\\nCons\\n- Poorly written\\n- Unclear connections to related work\\n- Weak experiments\\n\\nIt is unclear exactly what problem the authors are attempting to solve. In general, the authors introduce a large amount of notation and theory, but very little of it appears to be directly related to their algorithm. For example, they refer to the stability guarantees afforded by Lyapunov functions, but as far as I can tell, they never prove that their algorithm actually learns a Lyapunov function (indeed, Lemma 1 starts with \\u201cAssume that V(x) satisfies (5) [the Lyapunov condition] ...\\u201d).\\n\\nSimilarly, they allude to \\u201crobustness margins to model errors\\u201d, but nothing in the algorithm actually takes into account model errors. Is the point of these margins just to show that they exist? If so, it\\u2019s not clear the results (either theoretical or empirical) are very meaningful, given that they depend on the unknown model error (which they assume to be bounded).\\n\\nIn addition, the different loss functions they use (e.g., (10)) are poorly justified. Why is this loss the right one to use to learn a Lyapunov function?\\n\\nFurthermore, the authors\\u2019 approach is closely related to learning the value function and planning over some horizon using the value function as the terminal cost (indeed, the value function is a valid Lyapunov function, but not necessarily vice versa). For instance;\\n\\nBuckman et al., Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion. In NeurIPS, 2018.\\n\\nThe most closely related work I\\u2019m aware of is the following:\\n\\nDeits et al., Lvis: Learning from value function intervals for contact-aware robot controllers. In ICRA, 2019.\\n\\nThe authors should clarify their contributions with respect to these papers. More importantly, the authors should discuss their motivation for indirectly learning a Lyapunov function instead of simply learning the value function (which appears to be more natural and potentially more effective).\\n\\nNext, the authors\\u2019 experiments are very weak. They only consider two environments, the inverted pendulum and car, both of which are very simple. The inverted pendulum starts near the unstable equilibrium, which further trivializes the problem. In addition, they do not even appear to give the dynamics model of the car they are using (or the state space).\\n\\nFinally, this paper is poorly written and hard to follow. They provide a lot of definitions and equations without sufficient explanation or justification, and introduce a lot of terminology without giving sufficient background.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------\", \"post_rebuttal\": \"While I appreciate the authors' comments, they do not fundamentally address my concerns that the paper is too unclear in terms of the meaning of its technical results to merit acceptance. As a concrete example, in their clarification, the authors indicate that they obtain \\\"probabilistic safety guarantees\\\" by checking the Lyapunov condition (5) using sampling. 
However, at best, sampling can ensure that the function is \\\"approximately\\\" Lypaunov (e.g., using PAC guarantees) -- i.e., satisfies (5) on all but 1-\\\\epsilon of the state space.\\n\\nUnfortunately, an \\\"approximately\\\" Lyapunov function (i.e., satisfies the Lyapunov condition (5) on 1-\\\\epsilon of the state space) provides *zero* safety guarantees (not even probabilistic safety at any confidence level). Intuitively, at each step, the system has a 1-\\\\epsilon chance of exiting a given level set of the Lyapunov function. These errors compound as time progresses; after time horizon T, only 1 - T * \\\\epsilon of the state space is guaranteed to remain in the level set, so eventually the safety guarantee is entirely void.\\n\\nOne way to remedy this is if the Lyapunov function is Lipschitz continuous. However, then, the number of samples required would still be exponential in the dimension of the state space. At this point, existing formal methods tools for verifying Lyapunov functions would perform just as well if not better, e.g., see:\\n\\nSoonho Kong, Sicun Gao, Wei Chen, and Edmund Clarke. dReach: \\u03b4-Reachability Analysis for Hybrid Systems. 2015.\\n\\nThis approach was recently applied to synthesizing NN Lyapunov functions (Chang et al. 2019). My point isn't that the authors' approach is invalid, but that given the current writing it is impossible for me to understand the theoretical properties of their approach.\\n\\nOverall, I think the paper may have some interesting ideas, but I cannot support publishing it in its current state\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
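The compounding argument in the post-rebuttal comment above is a standard union bound; it is spelled out here for clarity, and is not taken verbatim from either the paper or the review.

```latex
% Union bound behind the compounding argument: if each step violates the
% decrease condition (5) with probability at most \epsilon, then
\Pr\big[\exists\, t \le T :\ \text{violation at step } t\big]
  \;\le\; \sum_{t=1}^{T} \epsilon \;=\; T\epsilon ,
% so invariance of the level set over T steps is guaranteed only with
% probability at least 1 - T\epsilon, which becomes vacuous as T grows.
```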
"{\"title\": \"Promising work, but unclear theoretical novelty and empirical achievements\", \"review\": \"This paper addresses the question of how to stabilize a system in a vicinity of an equilibrium. While the majority of reinforcement learning algorithms rely on trial and error, which may damage the system, the authors introduce an algorithm for safe exploration and control. A traditional approach in model-based RL is to use MPC with a surrogate forward model to minimize a planning objective comprising a sum of stage costs along with a terminal cost, often chosen as an approximated value function -i.e. the optimal expected cost-to-go- which can be learned by a Bellman equation. Instead, this work is placed in the framework of Robust MPC, where this value function is replaced by a Luyapunov function $V$, which is related to the notion of stability and is only constrained to decrease along trajectories. Such a Luyapunov function, when available, provides both a safe region, defined as a level-set of V, and a MPC policy for which stability analyses have been developed: the authors extend a result from Limon et al. (2003; 2009) to show that this MPC policy enjoys asymptotic stability in general, and input-to-state stability in the presence of small enough model errors. Accordingly, the authors propose a scheme allowing to learn a Lyapunov function $V$ from demonstration data only, through a loss function that penalizes increments of $V$ along one-step transitions. A regularization parameter $\\\\alpha$ of this MPC, which balances stability with constraints satisfaction and stage costs, is also learned jointly by an alternative training procedure. This approach is evaluated empirically on two standard constrained non-linear continuous control tasks.\", \"strong_points\": \"1. This paper is clearly written, well motivated, honest about its place in the literature, and all derivations seem technically correct.\\n2. By bringing together existing results and techniques (MPC stability analyses, value-based RL analyses, LuyapunovNet), the authors manage to relax several assumptions of prior works (no need for access to the perfect dynamics or a stabilizing policy, but only to a demonstration dataset) which makes the approach more practical.\\n3. Empirically, the learned Luyapunov functions seem to effectively capture useful stability information, since the proposed approach outperforms a standard MPC with a longer planning horizon. Even better, this observation is theoretically justified by Lemma 2: errors of the surrogate model compound when used in an MPC, which is detrimental for long-term planning. Conversely, if $V$ contains long-term information, it can directly be used for short-sighted planning, similarly to being greedy with respect to a value function.\", \"weak_points\": \"1. The part which was the least clear to me is the *Performance with surrogate models* paragraph, with Lemma 2. The authors draw a parallel between Luyapunov functions in control theory and value functions in RL, but the latter are not really defined clearly in the text. The authors state in their introduction that they treat \\\"the learned Lyapunov NN as an estimate of the value function\\\" and later they mention a \\\"correct\\\" value function $V^*$ for optimal \\\"expected infinite-horizon performance\\\", but this quantity is nowhere defined. I suppose $V^*$ is an expected infinite sum of discounted stage costs, but which costs? The same as in equation (3)? 
If so, I find it hard to believe that the assumption of Lemma 2 should be satisfied ($V^*$ close to $\\\\alpha V$), given that the stage cost $l$ of (3) used to define $V^*$ does not appear in the loss (10) used to learn the Lyapunov function $V$.\\n2. I find it difficult to assess the novelty of the theoretical results. The authors are honest in stating that they are extending/adapting known results, but it is not precisely stated what their added value is. Moreover, the abstract mentions that \\\"we also present theorems\\\" but in the article these results are presented as lemmas, which to me usually suggests that they are either instrumental to another result (which they are not), or of minor importance.\\n3. One of the main claim of the paper is the ability of the method to expand a demonstrated safe region. However, this is not really observed consistently across tasks and iterations. For instance, it is not the case in Figure 3. Likewise, the Table 1 states that \\\"With iterations, the number of points verified by the controller increases\\\", but only a two iterations is provided (i.e. a single opportunity to increase/decrease). This seems a bit suspicious and suggests that the ratio of verified points may actually decrease on subsequent iterations, as it does on iteration 3 of Table 3.\\n\\n\\nTo conclude, I lean toward recommending acceptance, but I am ready to increase my score provided that the authors improve the clarity on both the analogy between Lyapunov and value functions, and state more clearly the novelty of their theoretical contributions.\", \"minor_remarks_and_typos\": [\"I do not really see the relevance of the safety performance metric in the Inverted Pendulum experiment (Fig. 3): the state constraints are loose enough that every successful trajectory is also considered safe, so this safety plot (right) does not really bring any new information to the table.\", \"p2, Equation (5): X_s\\\\ **X_T**\", \"p3, Learning and Safety Ver**i**fication\", \"p6, to minimize the loss defined in **(11)**\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
hbzCPZEIUU | Connecting Sphere Manifolds Hierarchically for Regularization | [
"Damien Scieur",
"Youngsung Kim"
] | This paper considers classification problems with hierarchically organized classes. We force the classifier (hyperplane) of each class to belong to a sphere manifold, whose center is the classifier of its super-class. Then, individual sphere manifolds are connected based on their hierarchical relations. Our technique replaces the last layer of a neural network by combining a spherical fully-connected layer with a hierarchical layer. This regularization is shown to improve the performance of widely used deep neural network architectures (ResNet and DenseNet) on publicly available datasets (CIFAR100, CUB200, Stanford dogs, Stanford cars, and Tiny-ImageNet). | [
"Hierarchy",
"Manifold",
"Classification"
] | Reject | https://openreview.net/pdf?id=hbzCPZEIUU | https://openreview.net/forum?id=hbzCPZEIUU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"kPMp7dDc27",
"wBTzL9kHwMa",
"mS9kbsYdJVA",
"e-kfRohuAkw",
"6IrD26kEdrM",
"nrEla74jOMk",
"xL4MDtrefDE",
"L1eSVpZzbKk",
"ITMCTrwF2s8",
"lrfxt72LwEy",
"RETrq3vRt1I",
"Yyp3YRmxAY1"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040409286,
1606152222811,
1606036897020,
1606036654223,
1606036606596,
1606036127487,
1606035950387,
1606035054780,
1604389607329,
1604136011894,
1603920985785,
1603736016907
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3793/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3793/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3793/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3793/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper introduces a method for hierarchical classification with deep networks. The idea is interesting, and as far as I know novel: namely, the authors add a regularizer to the last layer in order to enforce a hierarchical structure onto the classifiers. The idea of placing spheres (with a fixed radius) around each classifier and forcing the child-classifiers to lie on these spheres is quite clever.\\nThe reviewers have pointed out some concerns with this paper. Some had to do with terminology (which the authors should fix but which is no big deal), but the main weakness are the experimental results and the ablation study. The reviewers were not convinced that the optimization in the Euclidean space wouldn't be sufficient. A more thorough ablation study could help here. \\n\\nThis is the kind of paper that I really want to see published eventually, but right now isn't quite ready yet. If you make one more iteration (in particular adding a stronger ablation study) it should be a strong submission to the next conference. Good luck!\"}",
"{\"title\": \"Newly added experiments\", \"comment\": \"We added new experiments to address reviewer\\u2019s all comments on the experimental section regarding 1) (learnable) radius decay [R1, R2, R3, R4], 2) random hierarchy (usefulness of the hierarchical structure) [R3, R4], 3) classification performance of superclass [R3], 4) visualization of the feature space [R3, R4]. Experimental results are added in Appendix of our revised manuscript (texts in blue). A summary of results is as follow:\\n\\n#### **1)\\tLearning radius decay** (using ResNet-18, please see C.5 and Table 5 in the revised manuscript in detail) \\n\\nA learnable radius without constraints does not outperform the classification performance using the predefined radius decay (numbers shown in parentheses). \\n\\n| Dataset | Hierarchy | +Manifold | +Riemann|\\n| :---------|:-----------:|:-----------:|:-----------:|\\n| CUB200 | 53.40 (58.28) | 58.35 (60.42) | 58.24 (60.98) |\\n| Cars | 81.52 (84.96) |82.54 (84.74) | 82.40 (84.16) |\\n\\n#### **2) Learning with random hierarchy** (using ResNet-18, please see C.6 and Table 6 in the revised manuscript in detail) \\n\\nMethods using a random hierarchy show considerably degraded performance compared to that using a manually annotated hierarchical information (numbers shown in parentheses).\\n\\n| Dataset | Multitask | Hierarchy | +Manifold | +Riemann|\\n| :---------|:-----------:|:-----------:|:-----------:|:-----------:|\\n| CUB200 | 47.55 (53.99) | 50.28 (58.28) | 56.96 (60.42) | 56.43 (60.98) |\\n| Cars | 79.98 (82.85) |81.07 (84.96) |82.02 (84.74) | 81.84 (84.16) |\\n \\n#### **3) Superclass categorization** (using ResNet-18, please see C.7 and Table 7 in the revised manuscript)\\n\\nOur proposed methods (Hierarchy, +Manifold, and +Riemann) outperform a multitask (multilabel) classification based method.\\n\\n| Dataset | Multitask | Hierarchy | +Manifold | +Riemann|\\n| :---------|:-----------:|:-----------:|:-----------:|:-----------:|\\n| CUB200 | 53.68 | 58.87 | 61.17 | 62.22|\\n| Cars | 86.88 | 87.91 |91.23|90.97 |\\n \\n\\n\\n#### **4) Visualization of feature space** (please see C.8 and Figure 4, 5, and 6 in the revised manuscript)\\n\\nWe observed a distribution of learned two-dimensional embedding vectors, which are input vectors of the last classifier layer, as [R3] suggested. Moreover, we observed a distribution of multidimensional embedding vectors used in our other experiments, by reducing their dimensions using different popular techniques (t-SNE and PCA). Embedding vectors (two-dimension) of our proposed methods are well separated (clustered) compared to that the baselines.\\n\\nWe will add more results using more methods (e.g. ResNet-50) and using other datasets in the revision.\"}",
"{\"title\": \"Responses to all reviewers\", \"comment\": \"We appreciate reviewers for their valuable and high-quality reviews, as well as for their constructive feedback. We addressed all individual reviewer\\u2019s comments.\\n \\n*Summary*: We propose to model the last layer of a neural network by a combination of a hierarchical layer, and a spherical fully-connected layer, to regularize the network w.r.t. a given hierarchy. The Hierarchical layer encodes the hierarchy since it represents the adjacency matrix of the hierarchical tree, while the spherical layer forces the hyperplanes of similar classes to be close to each other. We formulate this reparametrization with a matrix-matrix product for efficient optimization with a standard pipeline. We also discuss the optimization of the neural network with the Riemannian gradient descent and show some empirical improvement.\\n \\nWe now list, then address, major concerns raised by most of the reviewers, about 1) the extension to non-mutually exclusive hierarchy, 2) the experimental part of the paper, and 3) the hyperparameters \\u201cR0\\u201d and \\u201cRadius decay.\\u201d \\nWe sincerely try to address major issues in the paper raised by the reviewers. We hope this will affect positively their vision of our paper.\\n \\n#### **1) Extension to hierarchical graphs**\\nA concern shared by many reviewers is the extension to a mutually non-exclusive hierarchical structure. In the paper, we considered trees for simplicity. However, we believe it is possible to extend the idea to graphs by considering a hierarchical layer that encodes the adjacency matrix of a graph instead. This means we have to rethink a bit some aspects of the methods, but we believe that such an extension is possible (with some additional modifications).\\n\\n\\n#### **2) Experiments and empirical evaluation**\\nWe do agree with the reviewers that the weaker side of our paper is the section with numerical experiments, but we also believe they are representative of the efficiency of the approach - with the addition of a negligible number of learning parameters, we improve substantially the network accuracy, thanks to our parametrization.\", \"we_point_out_that_is_hard_to_compare_fairly_with_prior_work_in_the_field\": \"we only slightly modify the neural network with the hierarchical layer, while other techniques usually reparametrize the entire network. Moreover, our reparametrization is rather flexible and can be combined with other approaches that also encode hierarchical information in the network.\", \"the_reviewers_presented_many_suggestions_to_go_further_in_the_analysis_of_the_benefits_of_our_technique\": \"For instance,\\n- [R2] proposed to use a soft regularization with the ell-2 norm rather than spheres. 
\\n- [R3] proposed to include the accuracy for super-classes or to analyze the difference in the feature distribution between the plain network and the reparameterized networks (using, for instance, feature space visualization techniques)\\n- [R1] also proposes numerous ways to improve the clarity of our section describing experiments - which will be implemented in the paper.\\n***\\ud83e\\udc06 Please see responses below (https://openreview.net/forum?id=hbzCPZEIUU¬eId=wBTzL9kHwMa) to find those experimental results.***\\n\\n\\n\\n#### **3) Hyperparameters**\\nWe noticed that the reviewers have some questions about the hyper-parameter R0 (the initial radius of the sphere) and the \\u201cRadius decay.\\u201d \\nThe initial radius R0 plays a little role in the performance, as changing R0 (says, doubling it) scales the entire last layer (i.e., double all the neural network outputs). We have seen that, experimentally, changing R0 does not play a role in the network accuracy and can be safely fixed to R0=1.\\nThe radius decay parameter is indeed a hyper-parameter that needs to be tuned (see, for instance, table (4). The reason we add radius decay is well summarized by R3: \\u201clabels in finer-grained level should be modeled with less capacity.\\u201d We noticed that the network performance is not too sensitive w.r.t. the radius decay parameter: for instance, for tiny-ImageNet in table 4, the network accuracy drops only by 0.5% if we set the radius decay at 0.8 instead of 1 (the best value we found). We agree with the reviewers that knowing the radius decay in advance is hard and left as an open question - however, this is the only parameter we need to tune in the model, and a good approximation of its optimal value can be easily found by cross-validation on a smaller network.\\n\\nWe hope this answer successfully the reviewer's concerns.\"}",
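The reparametrization summarized above (last layer = hierarchical layer times spherical fully-connected layer, i.e., W = H·Δ with sphere-constrained rows of Δ) admits a compact PyTorch sketch. The class and argument names, the projection-based normalization, and the per-level radius computation are illustrative assumptions made here, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class HierarchicalSphereLayer(nn.Module):
    """Sketch of the last layer W = H @ Delta described above: H is the fixed
    0/1 matrix mapping each class to itself and its ancestors in the tree, and
    each row of Delta is kept on a sphere of radius R0 * decay**level."""

    def __init__(self, in_dim, H, levels, R0=1.0, decay=0.5):
        super().__init__()
        self.register_buffer("H", H)                                 # (classes, tree nodes)
        self.register_buffer("radii", R0 * decay ** levels.float())  # per-node radius
        self.delta = nn.Parameter(torch.randn(H.shape[1], in_dim))

    def forward(self, x):
        d = self.delta / self.delta.norm(dim=1, keepdim=True)  # rows on the unit sphere
        W = self.H @ (self.radii[:, None] * d)                  # classifier = sum of offsets
        return x @ W.t()                                        # standard linear read-out
```

Because the sphere constraint is enforced by the normalization inside `forward`, plain SGD can be used as-is; a Riemannian update would instead move the rows of `delta` along the spheres directly.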
"{\"title\": \"Responses to AnonReviewer1 [R1] (Part 2)\", \"comment\": \"> When authors say \\\"input size...224x224\\\", do the authors mean that Tiny-ImageNet images are resized from 64x64 to 224x224?\\n\\nWe used the Tiny-ImageNet dataset with a 224x224 resolution image which is cropped from 256x256 (resized from 64x64 first). We will revise it.\\n\\n> The paper trains all networks from scratch by explaining \\\"Dogs and Tiny-ImNet are parts of ImageNet\\\". Does it mean that images from datasets Dogs and Tiny-Imagenet are part of ImageNet? \\n\\nAs stated in Stanford Dogs, their images and annotation from ImageNet. And Tiny-ImageNet uses images from ImageNet. Therefore, we cannot use pre-trained networks, as they are usually trained on ImageNet.\\n\\n> How to define \\\"plain networks\\\"?\\n\\nBy \\u201cplain network,\\u201d we meant we used the network without any modification, i.e., we used the same original architecture, initialization, hyper-parameter, and optimizer.\\n\\n> Once the authors state that the parameters are \\\"probably sub-optimal for our proposed methods,\\u201d it also implies that the parameters may be even more sub-optimal to the compared methods?\\n\\nThis means that we kept most hyperparameters unchanged, for instance, the learning rate schedule or the initialization. Those hyperparameters, which come along with a specific architecture, are usually optimized for this architecture, but not to our. This means that there is a potential improvement if we optimize over the other hyperparameters.\\n\\n> When the paper claims \\\"high efficiency of our approach,\\u201d it does not justify the \\\"efficiency part.\\u201d How to tell if the training or inference efficiency is higher than other methods?\\n\\nWe meant \\u201cefficiency\\u201d in terms of the depth of the networks. The performance of our proposed method with shallower layers show a better generalization performance compared to that of deeper baseline networks.\\n\\n> Figure 2 right depicts the Riemannian gradient and \\\"projected gradient,\\u201d but the paper does not formally compare them. It is not clear which one may be better than the other? [...] Moreover, given that the two methods are so different, setting the same learning rate schedule (cf. Section 4.1) is not sensible because they can perform quite differently with different learning rates.\\n\\nWhile we provide some more technical detail on the Riemannian gradient in Appendix B, we provide a comparison in detail between Riemannian (in Section 3.2.2.) and Projected gradient (Section 3.2.1.). Technically, Riemannian gradient descent has more desirable theoretical properties than gradient descent - but those results do not really apply to neural networks, as they are highly non-convex and non-smooth. We did not perform an extensive comparison between projected and Riemannian gradient descent as this is out of the scope of the paper, but our experiments suggest that Riemannian gradient descent tends to perform slightly better than its projected counterpart.\\n\\n> The paper does not discuss how the proposed method may work if classes do not follow a tree hierarchy, although C.3 talks a bit in the context of Tiny-ImageNet. 
As the paper focuses on a generic regularization based on hierarchical information, it may also need to discuss how this can be applied in multi-label classification problems.\\n\\nA possible way to improve the method to non-mutually exclusive hierarchy would be to consider the adjacency matrix of a graph of hierarchy rather than a tree. We plan to investigate this direction in future work.\"}",
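Since the projected and Riemannian updates are compared above, here is a minimal sketch of both steps for a single weight vector constrained to a sphere of radius r. The normalization-based retraction is one standard choice and is assumed here; the paper may use a different retraction or the exponential map.

```python
import torch

# Minimal sketch: one update of a single weight vector w on the sphere of
# radius r, given the Euclidean gradient g of the loss at w.
def projected_step(w, g, lr, r):
    w = w - lr * g                            # plain Euclidean step
    return r * w / w.norm()                   # project back onto the sphere

def riemannian_step(w, g, lr, r):
    rg = g - (torch.dot(g, w) / r**2) * w     # tangent component = Riemannian gradient
    w = w - lr * rg                           # step inside the tangent space
    return r * w / w.norm()                   # retraction onto the sphere
```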
"{\"title\": \"Responses to AnonReviewer1 [R1] (Part 1)\", \"comment\": \"We thank the reviewer for his detailed and careful review.\\n\\n> The third paragraph is confusing. It is not clear why Euclidean distance is not sufficient for learning with such a hierarchical regularization.\\n\\nWe illustrate an example of why Euclidean distance may not be representative of the distance between two classifiers in section 2.3. \\n\\n> From the second last paragraph, it reads that the paper does not train the neural network but adopts a pre-trained off-the-shelf network and works on its last layer. [...] However, in Section 4.1, the paper says all networks are trained from scratch. [...]\\n\\nWe applied end-to-end training without the pre-trained models. The meaning of \\u201cwe do not change prior layers of the deep neural network\\u201d in the second last paragraph is that we do not change the architecture of the network except the last layer. We will make this statement more clear in the revised version\\n\\n> How to define \\\"separator\\\"? Moreover, \\n\\nWe will revise \\u201cseparator\\u201d with \\u201chyperplane.\\u201d\\n\\n> the sentence is confusing, \\\"the parameter of the classifiers that identify the dog\\u2019s breed should also be similar.\\u201d If similar, how can they differentiate dog breeds?\\n \\nThe hypothesis we make in the paper is that two dogs with breeds from the same family (i.e. super-class) should share more common characteristics than two dogs from different classes. Hence, since they share similar characteristics, the two separating hyperplanes should point roughly in the same direction, except for a few features that allow their distinction.\\n\\n> Is radius decay a new method proposed in the paper? If so, what is the rationale behind this design? [...] Why not use other linear decay methods?\\n\\nThe rationale behind the radius decay is similar to a \\u201chard\\u201d l2 regularisation. We tried, as suggested As pointed by R.3, \\u201clabels in finer-grained level should be modeled with less capacity.\\u201d \\n\\nA linear shrinking can be applied too. The major reason we choose the geometrical decrease is due to the main reason that we wanted the maximum radius to be bounded even for deep hierarchy.\\n\\n> What is the optimization method used for learning other layers?\\n\\nWe used SGD in Euclidean space for the whole model except the proposed last layer. The parameters for SGD are the same as the one used for the original network.\\n\\n> To construct the spherical fully-connected layer, isn't it equivalent to learn a normal fully-connected layer followed up by L2-normalization and a scaling operation?\\n\\nIndeed, as suggested by the reviewer, it is possible to include normalization and scaling layers to the network to avoid the usage of another optimization algorithm (in the case where we use SGD). However, this does not apply to Riemannian gradient descent.\\n\\n> It is disappointing that the paper does not present the hierarchical structure in a nice way. Perhaps a visualization on the class labels w.r.t the hierarchy serves the paper better?\\n\\nWe will work on the visualization of the hierarchy for the revised version.\"}",
"{\"title\": \"Responses to AnonReviewer4 [R4]\", \"comment\": \"We thank the reviewer for his careful review and insightful comments.\\n\\n> 1. The reviewer finds Section 2 is not easy to follow:\\n> Authors may consider giving more specific definitions to terms such as classifiers, separators, etc. For example, in (2), Wp and > Wpi are called classifiers. What are they, hyperplanes?\\n> In Definition 1, authors may like to give some early examples of P and L. Otherwise, it is not easy to interpret the matrix H.\\n> The authors may consider using a different notation for Delta in (8), as Delta may remind an operator on H in (9).\\n\\n1. Separator means a classifier, which is a form of hyperplanes in the vector space. We will use one term \\u201chyperplane\\u201d consistently and introduce a proper definition in the paper.\\n2. Meanwhile, we already defined the sets P and L in Section 2.1 prior to Definition 1. There is also an example of H, P, and L in Appendix A - Example of hierarchical structure.\\n3. We understand that the notation Delta may be confusing - we will make this clearer in the revision.\\n\\n> 2. In (9), do we require or observe deltas in the same subtree roughly the same direction?\\n\\nNot especially. In fact, we expect to observe that the delta\\u2019s to point toward different directions, as they represent the differences between the classifier of a class with the classifier of its superclass. This links to a remark made by R. 3 about diversity, where we can expect to have better performance by maximizing the distance between the delta of the same superclass.\\n\\n> 3. In Section 3, it is claimed no hyperparameters are added. However, it seems that the initial radius R0, radius decay parameter, even how to organize classes may all be considered as additional hyperparameters.\\n\\nThe initial radius R0 does not play a role, as this scale the entire output by a constant. However, the radius decay is (as the reviewer mentioned) a hyperparameter, which can be determined by simple cross-validation. \\n\\nWe understand that our statement is misleading, and this will be corrected in the revised version.\\n \\n> 4. In reality, it can be non-trivial, or even impossible, to define mutual exclusive class partitions to form the required class tree in Figure 1. The authors may discuss how different class hierarchy adopted affects classification accuracy, e.g., in Table 2.\\n\\nThis is a critical point that is directly connected to explain why the generalization performance of object classification using mutual exclusive classes in Table 2 is not compared to that of fine-grained classification in Table 1. As [R3] also mentioned the importance of the hierarchy definition, we are going to observe the generalization performance along with different hierarchy settings.\"}",
"{\"title\": \"Responses to AnonReviewer3 [R3]\", \"comment\": \"We thank the reviewer for its positive review. We will clarify the concerns raised by the reviewer.\\n\\n> (1) The empirical evaluation is relatively weak, and the evaluation metric seems not to well reflect the advantages of hierarchically modeling the label space. For example, I think it will be more informative to incorporate the classification accuracy of the super-classes. [...] An intuitive visualization of the feature space will be of great interest. [...]\", \"classification_accuracy_of_the_super_classes_and_visualization_of_embedding_vectors\": \"we appreciate the reviewer\\u2019s constructive suggestion. We are going to observe it in the revision. Please see experimental results shortly and C.7 and Table 7 for classification accuracy of superclasses and C.8 and Figure 4 for visualization in the revised manuscript.\\n\\n> (2) Some important ablation studies to justify some heuristic designs are very important and necessary. For example, there is a hyperparameter in the radius decay. How it will affect the performance is crucial. Potentially, the authors can also evaluate what if no sphericity constraint is applied, or what if no radius decay is used, etc. Since this paper proposes a number of heuristic designs, it is very important to justify them (either from a theoretical perspective or from empirical evaluations).\\n\\nRadius could be a learnable parameter that can be optimized. We have an experiment along with different radius decay in the appendix (Table 4.) Also, the third column of table 1 and 2 corresponds to the no sphericity constraint (i.e., only the hierarchical layer is used). Finally, if we do not use the radius decay, we refer the reviewer to the last line of Table 4, and we clearly see that the performance of the network is greatly affected. Please see experimental results above and C.5 and Table 5 in the revised manuscript in detail.\\n\\n> (3) Although I believe it is useful to model the hierarchical label space in an explicit way, the empirical evaluation does not really convince me on that, especially experiments on CIFAR-100 and Tiny-ImageNet. The method uses additional prior knowledge on the label space but only yields very limited performance gain. I think using some other SOTA regularization can easily improve more. What is the underlying reason? I think more discussions and insights will be useful.\\n\\nAs described in the manuscript, general object classification datasets have less similar classes compared to the fine-grained ones. This partially breaks our hypothesis that elements from similar classes should share similar features. One other reason may be that using semantic hierarchy does not reflect the similarity between classes in the Imagenet dataset.\\n\\n> (4) The usefulness of the hierarchical label structure should be evaluated and verified in the first place. A simple way to evaluate it is to use some random assignment or simple K-means assignments for the super-classes. [...] I highly suggest the authors conduct such an experiment.\\n\\nWe agree that a definition of the hierarchy is one of the important factors to have a better generalization performance. We appreciate the reviewer\\u2019s interesting study suggestion. We also do believe that combining a method that models the hierarchy based on inter-class similarities will substantially improve the performance of our method. 
Please see experimental results above and C.6 and Table 6 in the revised manuscript in detail.\\n \\n> (5) I cannot find the Spherical CNN from Xie et al. (2017) on page 1 [...]\\n\\nWe added it to the revision (in introduction).\\n\\n> (6) Since the authors consider regularizations for the intra-class hierarchical label structure, it will be interesting to see whether the regularization on the inter-class regularization will be beneficial or not. For example, the authors can use some diversity regularization on the sphere to push away classifiers from different superclasses.\\n\\nSince we applied regularization along with the depth of hierarchy, we applied regularization to both inter-class and intra-class. The idea of diversity is interesting - we actually have envisaged exploring this direction - but we finally decided to focus more on our contribution instead, as diversity was out-of-the-scope of our study,\"}",
"{\"title\": \"Responses to AnonReviewer2 [R2]\", \"comment\": \"We thank the reviewer for his careful review and insightful comments. It seems that the major concerns are 1) the experiments, 2) the hyperparameters and 3) the non-learnable hierarchy. We will answer all concerns below. Regarding 1) and 2), a response can be found in the common answer above (https://openreview.net/forum?id=hbzCPZEIUU¬eId=mS9kbsYdJVA).\\nWe hope that our responses are satisfying the reviewer's questions.\\n \\n> On the weakness side:\\n> I think the experiment presents the most significant weakness of this paper: 1) The comparison is rather weak without any reference to existing prior arts such as [1]. [...] 2) I remain skeptical about the solidness of the baseline performance as they show considerable gaps to the standard baseline [...]. 3) The performance gain diminishes very quickly on bigger datasets such as Tiny ImageNet.\\n\\n1) Prior arts HSE in [1], the authors use an input resolution of 448x448 with the pre-trained model for their experiments (https://github.com/tianshuichen/HSE/tree/master/code/CUB_200_2011/HSE), which is one factor that explains their high accuracy on the baseline. Furthermore, in [1], an input image is used without cropping the object (i.e., bird) which might involve background rather than the bird itself. The improvement (max 12.6%, min 5.92%) using our proposed methods shows more than that of the HSE method [1] (2.9%), compared to baseline, respectively.\\n\\n\\n| \\t\\t order | family | genus | class|\\n| ------------- |:-------------:| -----:| -----:|\\n|baseline |\\t98.8\\t|\\t95.0\\t |\\t91.5\\t|\\t85.2|\\n|HSE(ours) | \\t98.8\\t|\\t95.7\\t|\\t92.7\\t|\\t88.1|\", \"table\": \"performance on [1].\\n \\n2) In https://github.com/weiaicunzai/pytorch-cifar100, there are additional ways, which are not used in our experiments, such as the warm-up epoch, which uses a bigger learning rate during the first few epochs can improve the learning convergence, and the mean and standard variation calculated using CIFAR100 gives a better generalization accuracy.\\n\\n3) The performance diminishes on object classification, which hierarchy could not be mutually exclusive, but this is not due to the number of samples. We observed that the performance gain on ImageNet is similar to TinyImageNet. We will add it in the revision.\\n\\n> The proposed method depends on a pre-defined semantic hierarchical graph rather than a learned one\\n\\nWe suppose we can fix the problem of mutually exclusive hierarchies by considering a graph rather than a tree. We do think this could improve the performance of our approach on datasets like ImageNet. We left this part for future work.\\n\\nWe also agree with the reviewer that the fixed hierarchy is a limiting factor in our approach. As seen in the experiments, the semantic hierarchy is definitively not the best to classify objects in datasets such as ImageNet. However, in the case where the hierarchy is based on features, such as a dog\\u2019s breed, there is a clear, big gap in the performance between standard and modified networks. We also suppose it may be possible to combine methods that learn the hierarchy with our that exploit the hierarchical structure.\\n\\nCombining our approach with dynamic hierarchy is definitely a very interesting way to explore, but this is out-of-the-scope of the paper and this is left for future work.\\n\\n> I have some concerns about the selection of initial radius R0 and its decay policy. 
I think this parameter should be dataset dependent due to different numbers of categories and the densities of class distributions. As a result, how such parameters and policies can be optimally determined becomes a question.\\n\\n- The initial radius does not play a role, as this scales the entire output. This is why we fixed it to R0 = 1.\\n- The radius decay, however, is a learnable parameter that can be optimized. We added an experiment along different radius (decay) in the appendix (Table 4.)\\n\\n> Finally, forcing a fixed radius does not sound as reasonable as allowing a learnable radius with soft regularization.\\n\\nWe agree with the reviewer that a learnable radius is an interesting direction. We will try a learnable radius for further improved performance. However, forcing the radius may also help to regularize the network so that it generalizes better.\"}",
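As a concrete illustration of the design discussed in this thread, here is a minimal sketch of a spherical fully-connected layer: classifiers W = Delta H, with each column of Delta constrained to a sphere whose radius decays geometrically with depth. The toy hierarchy, the module names, and the hard projection used here (in place of the paper's Riemannian optimization) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphericalHierarchicalLayer(nn.Module):
    """Illustrative sketch: classifiers W = Delta @ H, where each column
    delta_p of Delta lives on a sphere of radius R0 * gamma**depth(p)."""
    def __init__(self, in_features, H, depths, R0=1.0, gamma=0.5):
        super().__init__()
        # H: (num_nodes, num_classes) binary hierarchy matrix (fixed).
        self.register_buffer("H", H.float())
        # Per-node radius: geometric decay keeps radii bounded for deep trees.
        self.register_buffer("radii", R0 * gamma ** depths.float())
        self.delta = nn.Parameter(torch.randn(in_features, H.shape[0]))

    def forward(self, x):
        # Project each column of Delta onto its sphere (hard constraint);
        # Riemannian gradient descent would instead follow the manifold.
        delta_on_sphere = F.normalize(self.delta, dim=0) * self.radii
        W = delta_on_sphere @ self.H   # (in_features, num_classes)
        return x @ W                   # class logits

# Toy usage: 2 superclasses at depth 1, 4 leaf classes at depth 2.
H = torch.tensor([[1, 1, 0, 0],   # superclass A covers classes 0, 1
                  [0, 0, 1, 1],   # superclass B covers classes 2, 3
                  [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
depths = torch.tensor([1, 1, 2, 2, 2, 2])
layer = SphericalHierarchicalLayer(8, H, depths)
print(layer(torch.randn(3, 8)).shape)  # torch.Size([3, 4])
```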
"{\"title\": \"Interesting method but rather weak experiment\", \"review\": \"In this paper, the authors proposed a novel reparameterization framework of the last network layer that takes semantic hierarchy into account. Specifically, the authors assume a predefined hierarchy graph, and model the classifier of child classes as a parent classifier plus offsets $\\\\delta$ recursively. The authors show that such hierarchy can be parameterized a matrix multiplication $\\\\Delta \\\\mathbf{H}$ where $\\\\mathbf{H}$ is predefined by the graph. In addition, the authors further propose to fix the norm of $\\\\delta$ in a decaying manner with respect to path length. The resulting spherical objective is optimized via Riemannian gradient descent.\\n\\nThe strengths and weaknesses are very obvious in this paper.\", \"on_the_strength_side\": [\"The paper itself is very well written. The notations are well defined and the methods are very clearly explained. The presentation is fluent.\", \"The proposed method seems novel and interesting. The derivations are technically correct.\", \"Experiments show performance improvement over baselines, especially on CUB200/Dogs/Cars.\"], \"on_the_weakness_side\": [\"I think experiment presents the most significant weakness of this paper: 1) The comparison is rather weak without any reference to existing prior arts such as [1]. A simple search with respect to the 5 experiment datasets also show significant performance gaps between the proposed method and latest methods. 2) I remain skeptical about the solidness of the baseline performance as they show considerable gaps to standard baseline training without bells and whistles (https://github.com/weiaicunzai/pytorch-cifar100). 3) The performance gain diminishes very quickly on bigger dataset such as Tiny ImageNet. What about the results on ImageNet?\", \"The proposed method depends on a pre-defined semantic hierarchical graph rather than a learned one, which potentially limits the technical value of this work. In certain cases, semantic hierarchy may not always be a reasonable choice to guide the learning of visual embedding.\", \"I have some concern about the selection of initial radius $R_0$ and its decay policy. I think this parameter should be dataset dependent due to different numbers of categories and the densities of class distributions. As a result, how such parameter and policy can be optimally determined becomes a question.\", \"Finally, forcing a fixed radius does not sound as reasonable as allowing a learnable radius with soft regularization.\", \"[1] Chen et al., Fine-grained representation learning and recognition by exploiting hierarchical semantic embedding, ACM-MM 2018\", \"========================== Post Rebuttal ==============================\", \"The authors did a good job in addressing some of my concerns in the rebuttal. Thus I am increasing the score in response to the clarifications. However, I feel there is still some improvement space for the experiment part of this section, and I encourage the authors to incorporate the changes, including ImageNet experiment and following stronger baselines to make the results more solid and convincing.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea with some concerns\", \"review\": \"I generally like the neat idea of introducing hierarchical spheres to model the intra- and inter-class relationships among the hierachical labels. It is naturally motivated to combine the hierarchical structure in the label space as a prior knowledge to supervise the training of neural networks. The overall idea is simple and easy to understand. The method models the labels at different levels by adding a free vector that is constrained on a specific hypersphere, then uses a smart way of formulating this procedure with simple matrix multiplication, and finally considers an alternative manifold optimization method to train the neural network in an end-to-end fashion. As far as I'm concerned, the intuition has also been explored in [Deep neural decision forests, ICCV 2015], but differently, the ICCV 2015 paper considered the hierarchical label structure with a decision forest. I believe this direction is of sufficient significance to the ML community.\", \"this_paper_has_several_aspects_that_i_found_most_interesting\": \"(1) The formulation is interesting and is novel from my perspective. Modeling the fine-grained classes by successively adding a \\\"perturbation\\\" vector makes sense to me. Then, the authors are able to formulate this in a matrix multiplication, which is basically a linear matrix factorization that over-parameterizes the classifiers. Although technically the linear matrix multiplication is still equivalent to a linear classifier, the fact that it can still improve the network generalization is interesting and is partially verified by a number of theory works. Besides, as a way to combine prior knowledge to supervise the neural networks, such a simple linear matrix factorization (with some constraints like sphericity, radius decay, etc.) provides a potentially useful way to incorporate some regularization priors.\\n\\n(2) The use of spherical constraints is interesting and empirically make senses to me. By constraining the learning on the spherical space can ease the training difficulties of the over-parameterized classification layers. This is, in fact, also observed and verified by [Neural Similarity Learning, Neurips 2019]. It will be potentially interesting to connect these two papers and have some discussions. The radius decay for the hierarchical spheres is also novel to me, because labels in finer-grained level should be modeled with less capacity.\\n\\nDesipte these interesting aspects, I also have a few concerns and suggestions to improve the paper:\\n\\n(1) The empirical evaluation is relatively weak and the evaluation metric seems not to well reflect the advantages of hierarchically modelling the label space. For example, I think it will be more informative to incorporate the classification accuracy of the super-classes. It will make this paper more interesting to have more experiments that analyzes the difference in feature distributions between normally trained neural networks and the hierarchically trained neural networks. For example, an intuitive visualization of the feature space will be of great interest. An easy way for the visualziaiton is to set the outpute feature dimension as 2 and directly plot them, similar to [A Discriminative Feature Learning Approach for Deep Face Recognition, ECCV 2016] and [Large-Margin Softmax Loss for Convolutional Neural Networks, ICML 2016].\\n\\n(2) Some important ablation studies to justify some heuristic designs are very important and necessary. 
For example, there is a hyperprameter in the radius decay, how it will affect the performance is crucial. Potentially, the authors can also evaluate what if no sphericity constraint is applied, or what if no radius decay is used, etc. Since this paper proposes a number of heuristic designs, it is very important to justify them (either from theoretical perspective, or from empirical evaluations).\\n\\n(3) Although I believe it is useful to model the hierachical label space in an explicit way, the empirical evaluation does not really convince me on that, especially experiments on CIFAR-100 and Tiny-ImageNet. The method uses additional prior knowledge on the label space, but only yields very limited performance gain. I think using some other SOTA regularization can easily improve more. What is the underlying reason? I think more discussions and insights will be useful.\\n\\n(4) The usefulness of the hierachical label structure should be evaluated and verified in the first place. A simple way to evaluate it is to use some random assignment or simple K-means assignments for the super-classes. If using the ground truth hierachical strucutre can consistently outperform the random or K-means super-class assignment, then one can believe that incorporating the ground truth hierachical label structure is indeed useful. Until then, it makes little sense to argue it is beneficial to generalization to combine the ground truth hierachical label structure. I highly suggest the authors conduct such an experiment.\", \"some_minor_concerns_and_suggestions\": \"(5) I cannot find the Spherical CNN from Xie et al. (2017) on page 1. I think the paper is more closely related to [Deep Hyperspherical Learning, Neurips 2017] in terms of the spherical regularization. The authors may discuss the connections and differences to this paper.\\n\\n(6) Since the authors consider regularizations for the intra-class hierachical label structure, it will be interesting to see whether the regularization on the inter-class regularization will be beneficial or not. For example, the authors can use some diversity regularization on sphere to push away classifiers from different super classes. A potential regularization for this is [\\nLearning towards Minimum Hyperspherical Energy, Neurips 2018]. I want to note that it is a suggestion for the paper rather than a weakness.\\n\\nTo summarize, I think the paper proposes a very interesting and potentially widely useful method to incorporate the hierachical label structure to train neural networks. Currently, I feel postive to accept this paper, and I am sitting between 6 and 7 (I give a 6 for now). I will consider to increase my score if the authors well address the concerns.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper proposes a technique to classify hierarchically organized classes.\", \"review\": \"The idea of introducing class hierarchy as a regularization into deep networks seems to be novel.\", \"the_following_comments_could_be_relevant\": \"1. The reviewer finds Section 2 is not easy to follow:\\n- Authors may consider to give more specific definitions to terms such as, classifier, separators, etc. For example, in (2), Wp and Wpi are called classifiers. What are they, hyperplanes? \\n- In Definition 1, authors may like to give some early examples about P and L. Otherwise, it is not easy to interpret the matrix H.\\n- Authors may consider to use a different notation for Delta in (8), as Delta may remind an operator on H in (9).\\n\\n2. In (9), do we require or observe deltas in the same subtree roughly the same direction? \\n\\n3. In Section 3, it is claimed no hyperparameters are added. However, it seems that, initial radius R0, radius decay parameter, even how to organize classes may all be considered as additional hyperparameters.\\n\\n4. In reality, it can be non-trivial, or even impossible, to define mutual exclusive class partitions to form the required class tree in Figure 1. Authors may discuss how different class hierarchy adopted affects the classification accuracy, e.g., in Table 2.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"misleading statements and justifications should be addressed\", \"review\": \"Section 1: The third paragraph is confusing. It is not clear why Euclidean distance is not sufficient for learning with such a hierarchical regularization. Can't the parent class mean just the mean of all its children?\", \"section_1\": \"From the second last paragraph, it reads that the paper does not train the neural network, but adopts a pre-trained off-the-shelf network and works on its last layer. While this makes the method simple and compatible to other network architectures, is there a reason not to train an end-to-end network with the proposed technique? However, in Section 4.1, the paper says all networks are trained from scratch. Does the training of the whole model follow end-to-end training or stage-wise training?\\n\\nSection 2.2: How to define \\\"separator\\\"? \\\"Separator\\\" does not seem like a formal word. Moreover, the sentence is confusing, \\\"the parameter of the classifiers that identify dog\\u2019s breed should also be similar\\\". If similar, how can they differentiate dog breeds?\\n\\nSection 3.1: Is radius decay a new method proposed in the paper? If so, what is the rationale behind this design? If not, are there any related work adopting this method? Why not use other linear decay methods, e.g., $R_p=R_0*|p|$?\\n\\nEq.(16) implies that the whole model (backbone and the proposed spherical fully-connected layer) is end-to-end optimized. What is the optimization method used for learning other layers?\\n\\nMoreover, the paper says \\\"the most direct way to optimize over a sphere is to normalize the columns of \\u2206 by their norm after each iteration. However, this method has no convergence guarantee, and requires a modification in the optimization algorithm\\\". To construct the spherical fully-connected layer, isn't it equivalent to learn a normal fully-connected layer followed up by L2-normalization and a scaling operation?\\n\\nEven though the authors provide C.2, C.3 and plain files on the prepared hierarchies in the datasets, it is disappointing that the paper does not present the hierarchical structure in a nice way. Perhaps a visualization on the class labels w.r.t the hierarchy serves the paper better?\\n\\nSection 4.1: Tiny-ImageNet dataset seems to have lower resolution of images (64x64). When authors say \\\"input size...224x224\\\", do the authors mean that Tiny-ImageNet images are resized from 64x64 to 224x224? \\n\\nSection 4.1: The paper trains all networks from scratch by explaining \\\"Dogs and Tiny-ImNet are parts of ImageNet\\\". Does it mean that images from datasets Dogs and Tiny-Imagenet are part of ImageNet? Or Does it mean that classes in the two datasets are included in the set of ImageNet classes?\\n\\nSection 4.1: How to define \\\"plain networks\\\"? The paper uses two networks ResNet and DenseNet, then what is \\\"plain networks\\\"? Once the authors state that the parameters are \\\"probably sub-optimal for our proposed methods\\\", it also implies that the parameters may be even more sub-optimal to the compared methods?\\n\\nSection 4.2.1: When the paper claims \\\"high efficiency of our approach\\\", it does not justify the \\\"efficiency part\\\". How to tell if the training or inference efficiency is higher than other methods? \\n\\nFigure 2 right depicts Riemannian gradient and \\\"projected gradient\\\", but the paper does not formally compare them. It is not clear which one may be better than the other? 
For learning, gradient guides the direction to update parameters, but the scale in the update also matters. Is there a discussion on which one may be more efficient (or compute time) during a training iteration? Section 4.2 explicitly notes the performance difference between the two methods. The paper should discuss this further for better understanding. Moreover, given that the two methods are so different, setting the same learning rate schedule (cf. Section 4.1) is not sensible, because they can perform quite differently with different learning rates.\\n\\nThe paper does not discuss how the proposed method may work if classes do not follow a tree hierarchy, although C.3 talks a bit in the context of Tiny-ImageNet. As the paper focuses on a generic regularization based on hierarchical information, it may also need to discuss how this can be applied in multi-label classification problems.\\n\\n\\n---------------------------------\\npost-rebuttal\\n---------------------------------\\nI appreciate that authors have provided rebuttal that addresses many of my questions, though I'd like to maintain my initial rating due to the following comments. I think this paper is at the borderline.\\n\\nIn terms of explaining why \\\"Euclidean distance is not sufficient for learning such a hierarchical regularization\\\", I don't find the illustration example in Section 2.3 intuitive or concrete. I don't think Eq3 adds much as the paper does not explain further. Perhaps the confusion is from that the paper does not explicitly explain what \\\"optimal classifier\\\" mean in terms of Eq3.\\n\\nThe authors only say \\\"those parameters and schedule were optimized for SGD on plain networks, probably sub-optimal for our proposed methods.\\u201d It is not clear whether other methods suffer severely from the choice of learning rate and scheduler. As far as I know, SGD is sensitive to the initial learning rate. So I am worried that setting the same learning rate is not fair to comparing different models that have different structures.\\n\\nFrom the updated paper, I find the blue line in Page-2 confusing. It is not clear about the logic: why diversity reduces over-fitting. (Xie et al. 2017) studies this point with a complete paper. But the way that authors simply put it is quite unclear how this statement is related in the context.\\n\\nVisualization is interesting to look at. But it should be better analyzed. For example, visually all methods produce similar tSNE visuals in Figure 5. But are there any essential difference?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
F8xpAPm_ZKS | Model-Free Counterfactual Credit Assignment | [
"Thomas Mesnard",
"Theophane Weber",
"Fabio Viola",
"Shantanu Thakoor",
"Alaa Saade",
"Anna Harutyunyan",
"Will Dabney",
"Tom Stepleton",
"Nicolas Heess",
"Marcus Hutter",
"Lars Holger Buesing",
"Remi Munos"
] | Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards.
In particular, this requires separating \emph{skill} from \emph{luck}, i.e.\ disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup.
The key idea is to condition value functions on \emph{future} events, by learning to extract relevant information from a trajectory. We then propose to use these as future-conditional baselines and critics in policy gradient algorithms and we develop a valid, practical variant with provably lower variance, while achieving unbiasedness by constraining the hindsight information not to contain information about the agent’s actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative problems. | [
"credit assignment",
"model-free RL",
"causality",
"hindsight"
] | Reject | https://openreview.net/pdf?id=F8xpAPm_ZKS | https://openreview.net/forum?id=F8xpAPm_ZKS | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"5sGaISizZu9",
"zEUKbSupj-S",
"nElbFAE392r",
"UuHKtu0-uHc",
"-iRiMx_JYGJ",
"00nQN8Icmfx",
"J9NcIYtgW-",
"HICO0C8TygJ",
"_hzrQX2SYC0",
"dAa4ghG2vzi",
"9Rg80nPJbK",
"uG0Ad1t40Iv",
"C9i2TbS9Hnp",
"0KnNNngCzf",
"KpE8N4puPzU",
"hHRwqN8KB9t",
"PZ778K9KN2p",
"K41vnJMig7f",
"O3f9oS_X1Rj"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040389167,
1606260829384,
1606259457613,
1606257528156,
1606162895096,
1606162411152,
1605979478191,
1605831530506,
1605831447493,
1605831309895,
1605831250579,
1605830875924,
1605830640259,
1605830509608,
1605830142456,
1603931079444,
1603892503830,
1603862958408,
1603861599478
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3792/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3792/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3792/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3792/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"In this paper, the authors aim to develop a new method for credit assignment, where certain types of future information is conditioned on. The authors are well-aware that naive conditioning on future information introduces bias due to Berkson's paradox (explaining away), and introduce a number of corrections (described in section 2.4 and 2.5).\\n\\nThe authors illustrate their approach via a number of simulation studies and constructed problems.\\n\\nI think it would be nice if the authors found a way of connecting their notion of counterfactual to one used in causal inference (for instance, I think there is a connection via e.g. importance correction terms).\\n\\nReviewers were worried about the contribution being incremental given existing work (from 2019), and relative simplicity of the evaluation of the approach, compared to existing similar work.\"}",
"{\"title\": \"comment\", \"comment\": \"Regarding your first paragraph, this is a fair point. While common RL benchmarks may have limited credit assignment issues, we could modify a common RL environment to exacerbate those issues (such as assigning all rewards to the final state, or adding high variance perturbations to the dynamics).\\n\\nFor your second paragraph, would \\u201cWe additionally assume that at training time, a hindsight network processes the entire trajectory to compute hindsight statistics Phi [\\u2026]. These statistics are then used to compute the hindsight value function V_\\\\theta(X_t,Phi_t)\\u201c. If not, what is the source of confusion exactly? Does the diagram in the appendix help?\"}",
"{\"title\": \"Comments on revisions and Environments\", \"comment\": \"It is true that using simpler, more pure environments help gauge if the method is performing as intended. However, to indicate that the method is of larger interest to the community it is beneficial to display that it improves performance on common environments in the community or even environments that are currently difficult.\\n\\n\\nSection 2.6 provides many important details in understanding how the method works and is actually trained. The terminology of \\\"we assume the agent...\\\" makes it difficult for the reader to clearly understand what the agent is truly doing or what exactly has been implemented. However, with a careful read, it can be understood.\"}",
"{\"title\": \"Revision\", \"comment\": [\"Dear reviewers,\", \"Thank your for your feedback which we have incorporated in a revised version of the paper. Based on your suggestions, we have :\", \"clarified the text and notations in several places.\", \"cited relevant references earlier in the paper and in the literature section.\", \"brought significant implementation details of the CCA-PG algorithm from the appendix into the main text (please read sections 2.5 and 2.6 in particular to find the new material).\", \"Added back the proof for reduced variance in the appendix.\"]}",
"{\"title\": \"Environments\", \"comment\": \"We chose to keep the environments relatively simple visually and in terms of control in order to tease out the credit assignment aspects. Combining all aspects is at this point too difficult to solve in our mind (their combinations would constitute interesting environments, but they prove too challenging for typical RL setups at the moment).\\n\\nIf you believe no environments without complex visuals can be interesting, we will only have to agree to disagree. Otherwise, we would welcome you to elaborate on why, precisely, you find these environments not interesting? It is difficult to address the criticism or come up with alternative environments if we are not provided with more details.\", \"regarding_the_training_of_phi\": \"All the details for training Phi can be found in the appendix 1. We will shortly submit a version with those details (slightly abridged) in the main text. Note that \\u2018modifications to baseline and q-functions\\u2019 are very common research topics in methodological RL and approximate inference papers. But the modification is by no means trivial. Consider a general learning rule for providing signal at all actions (\\u2018counterfactual learning\\u2019, so to speak):\\n\\\\sum_a \\\\grad \\\\pi(a|x) S(a), where S(a) is the learning signal for action a. To our knowledge, very few RL papers offer any generally valid rule beyond using the vanilla Q function S(a)=Q(s,a).\\n\\nConditioning on any arbitrary information Q(x,a,phi) will generally not work. We identify the criterion to make these updates be correct. We are not aware of other RL work that provide alternative forms of the learning signal for all actions, with the exception of HCA, which provides a single alternative rule (instead of a family of rules), with no clear guarantees about its performance.\"}",
"{\"title\": \"Reply\", \"comment\": \"Do you have additional questions / clarifications needed, or is the paper clearer at this point?\"}",
"{\"title\": \"Environments and more\", \"comment\": \"While I agree with the comments on how many current RL benchmarks are lacking this does not convince me that the environments used in the paper are much better. If this can be addressed in the updated version of the paper this can help the readers understand the more general benefit of the work.\\n\\n\\\" a modification to a q function\\\"\\nAs R1 has noted the method adds \\u03a6t to the Q function Q(Xt , \\u03a6t , a) that can be learned. It is not clear how this is learned. It would help to convincingly outline how this modification is important and as some of the other reviewers have noted compare to prior methods.\"}",
"{\"title\": \"General response\", \"comment\": \"We thank the reviewers for their thoughtful comments on our work.\\nMost reviewers agreed our paper presented a theoretically grounded, novel algorithm with strong performance compared to baselines that include recent related work. \\n\\nNext week, we will upload an updated version with the following changes:\\n* More implementation details regarding the independence maximization loss will be included in the main text.\\n* Clarify some of the writing and fix typos.\\n\\nSome concerns were raised regarding baselines and environments; we commented on these to the relevant reviewers. If we agree on which experiments would make the most sense to include, we would add them in the final version of the paper, but unfortunately cannot commit to including these in next week\\u2019s version, due to the time required to run these experiments.\"}",
"{\"title\": \"Response\", \"comment\": \"We firmly believe that whether we study reinforcement learning as a model for human cognition, or as a toolbox for solving complex problems, our problems are in some aspects more realistic than classical RL environments. Classical RL environments exhibit few of the credit assignment issues associated with real world problems: most benchmarks are deterministic or nearly so; and the agent is in a very controlled environment where its action directly affects the outcome; no externalities or exogenous affect the outcome; different tasks are clearly separated and not interlaced; and in most setups, the agent is the sole actor in the environment. None of these assumptions are verified in reality.\", \"detailed_notes\": \"*The introduction does not state that the particular credit assignment problems being looked into is that of partially observed environments*\\nOur approach is not limited to partially observed environments. \\n\\n*it sounds like the method is just going to be a modification to a q function.*\\nWe are not sure what this sentence means. What does \\u2018a modification to a q function\\u2019 mean, and how would it invalidate a method?\\n\\n*There does not appear to be my significant information on how the mutual information metric is computed between the action space and latent variable space.*\\nWe will include these details in the main text.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank you for your encouraging review.\\n\\nIt is true that while model-free, our approach attempts at capturing aspects of model-based reasoning. However, a classifier is a far simpler object to learn than a full model of an environment. \\n\\nA counterfactual model-based approach as in Buesing et al. would probably solve the problem. `Classical\\u2019 model-based approaches may be more difficult to tune because they will also be affected by the problem variance, and therefore may result in inaccurate models. \\n\\nWe believe however that generally speaking, model-based approaches are still significantly more complicated to set in motion. Model-based RL is still a developing field, and there are far more design choices involved in designing a model-based RL architecture than a model-free one. Choices have to be made with regards to the RL algorithm itself, the environment model, how is data used to learn the agent and the world model, the losses used. We are afraid that an attempt at a model-based approach would result in an unfair comparison, in which the model-based approach would underperform, which is not what we aim to prove. We aim to show instead we can get the benefits of a model-based approach without some of the drawbacks. Nevertheless, we are happy to take suggestions as to what a \\u2018fair\\u2019 model-based comparison would be. Would you want it to be counterfactual as well, or more classical?\"}",
"{\"title\": \"Response\", \"comment\": \"We thank you for your thoughtful review and comments.\\n\\nWe challenge the statement that the work is incremental. The CCA estimator is novel and does not have similar ideas in the literature we are familiar with. CCA was developed concurrently with HCA; their main (and intriguing) similarity was requiring learning a hindsight classifier in both cases. As a result, we spent a significant amount of time trying to understand the connections, and came up with the FC estimator, (which does resemble HCA). However, as pointed in the appendix, you cannot derive CCA from HCA and vice-versa, they are fundamentally different estimators leveraging different ideas (similar to the difference between variational inference and sampling-based methods in inference). \\n\\nOur current proof for CCA is derived from FC, but it is possible to prove CCA directly without invoking the h/pi ratio, which makes the connection less clear. We believe that presenting the unified approach clarifies the connection but should not be used as an argument that CCA and HCA are the same; they are not. Note further that CCA provides a performance guarantee (in terms of lower variance) and a guiding principle in terms of deriving useful Phi. We will mention HCA earlier in the paper (while trying not to have the discussion in two parts of the paper).\\n\\nFor state of the art baselines, we believe our ideas are orthogonal to many ideas in state of the art RL algorithms. CCA could be combined with natural policy updates (MPO, V-MPO), off-policy learning, better representation learning, and so on. We worry that these comparisons may therefore bring confounding factors and we are not convinced of their value. As an example, is it meaningful to compare CCA (vanilla policy gradient + counterfactual credit assignment) and VMPO (natural policy gradient + vanilla credit assignment). If we ran e.g. VMPO on our tasks and found it to underperform, we would not want the reader to conclude VMPO is worse than CCA. Their benefits are likely complimentary. Nevertheless, we are happy to try to include additional baselines, would there be any in particular you are interested in seeing the results of?\\n\\nNote that Guez et al. is not a paper on credit assignment; it does representation learning. Arjona-Medina deals with a slightly different setup (delayed reward, though they do lead to increased variance).\", \"notation_wise\": \"You are right, P(a|X_t,\\\\Phi_t) is implicitly a function of pi. It would be better to make that dependency explicit, so we will add it in the paper. We will also fix the notation for pi through the paper, thanks for noticing.\", \"re\": \"appendix, see general response- we will move elements of the appendix to the main text.\", \"typos\": \"Thank you, will fix.\", \"questions\": \"A) Perhaps the following is helpful: generally, we can think of a trajectory as a function of two factors: the agent\\u2019s actions, and external factors, which are independent of the action. The external factors are not known to the agent, but some of them may have a strong effect on the outcome. Phi represents the agents\\u2019 attempt at measuring those external factors from trajectory. Those external factors are defined both by having affected the outcome (i.e. predictive of the return), and being exogenous, i.e. not caused by the agent (hence the independence assumption). 
By \\u2018removing\\u2019 the contribution of those external factors to the outcome, all that is left is the agent\\u2019s actions (skills).\\n\\nRegarding the comment *h(.) quantifies the relevance of action a to the future state Xk*, note that state-HCA is a special case of the all-action FC PG estimator, which is different from CCA. As discussed in the paper, it is harder to find a good criterion for Phi in the all action case (we however still offer some leads). Note however that the intuition about removing action information still holds. Suppose for instance that the agent state Xk includes all past actions (A1...Ak). In this case it is easy to show that the HCA estimator degenerates into the vanilla, single action, policy gradient estimator, as carrying too much information about actions in the hindsight statistics makes the agent incapable of understanding more precise counterfactuals. We can elaborate on that proof if you would like.\\n\\nB) Good catch, we forgot to include that bit. We will add it back.\\n\\nC) This work mainly focuses on the problem of credit assignment for transfer in RL which is not directly related to the points we are making in this paper. However, we would be happy to include it. Ferret et. al leverage transformers to derive a heuristic to perform reward shaping. While we also investigate the use of transformers, our approach is not based on explicit reward shaping.\"}",
"{\"title\": \"Response [4/4]\", \"comment\": \"*13] I strongly encourage the authors to expand their study on plain MDP before getting to the POMDP complication. It is not clear where the performance gain comes from.*\\n\\nThe performance gain comes from the lower variance estimator with no additional bias. The lower variance comes from the fact that the value function and critics leverage additional information that correlate to the return, and therefore have lower error in predicting that return. The absence of additional bias comes from the fact that the Phi are independent from the agent\\u2019s action, a fact supported by theory and practice (see figure 1 right). There is no fundamental difference between POMDPs and stochastic MDPs beside the fact that the state should be the concatenation of past observation; some of the environments we studied are essentially MDPs (the partial observability of the key to door environment makes the environment a bit more challenging, but most of the difficulty comes from the variance of rewards).\\n\\nWe humbly believe your confidence score may have been stated too high.\\n\\n[1] Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, Williams\\n\\n[2] Likelihood Ratio Gradient Estimation for stochastic systems, Glynn \\n\\n[3] Monte Carlo Methods in Financial Engineering, Glasserman\\n\\n[4] Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning, Greensmith et al.\\n\\n[5] Policy Gradient Methods for Reinforcement Learning with Function Approximation, Sutton et. al\"}",
"{\"title\": \"Response [3/4]\", \"comment\": \"*I am not sure what the authors mean.*\\n\\nThe all action gives an informative learning signal (in the form of Q(x,a) ) for all actions a, not just the action At that was used in that particular trajectory.\\n\\n*How is it possible that all actions become more likely? when their probabilities should be sum to one*\\n\\nThe possessive (her actions) simply meant to refer to the collections of all sampled actions through time, i.e. (A1,A2\\u2026.). This is not referring to the set of all actions for a fixed time step. All the actions that Alice actually took through the game are made more likely through the gradient step \\\\sum_t \\\\grad \\\\log P(At|Xt).\\n\\n*The update in proposition 1 shows that in the case the agent action does not change the outcome, then the gradient is zero.*\\n\\nWe are not sure what you are referring to here. The agent may not know they have not affected the outcome if they believe they always affect the entire outcome, which is, again, the working assumption of model-free RL. In the example given, the gradient is certainly not zero.\\n\\n\\n*6] \\\"Second, removing the value function V (Xt) from the return Gt does not bias the estimator and typically reduces variance\\\". Would the author refer to a paper stating that removing the value function V (Xt) from the return Gt typically reduces variance?*\\n\\nThis is a classical RL result, which you can find in pretty much all policy gradient papers, we suggest [1-5]. \\nHere\\u2019s some intuition; the estimator is St(Gt-V). Its expectation is unaffected by the choice of V, so the variance is driven entirely by E[St^2 (G_t-V)^2]. In a tabular setting the score function is upper bounded by 1, which leads to an upper bound of E[(G_t-V)^2], which justifies learning the value function by minimizing the expected square advantage, and in particular will outperform the choice V=0. When function approximation is involved, for smooth functions the variance is still upper bounded by a constant time E[(G_t-V)^2]. \\n\\n*The authors state that\\\"A particularity of the all-actions policy gradient estimator is that the term at time t for updating the policy \\u2207\\u03c0(a|Xt)(Q(Xt, a) depends only on past information;\\\" but it seems to me that Q is a function of the measure on the future. Isnt it the case?*\\n\\nNo, Q(Xt,a) is a prediction of the future. The inputs to the Q-function are computed entirely from past information (observations up to time t). Q-functions are trained on predicting future return, but the learned function does not require any input information from times t\\u2019>t to be computed (unlike, say, the return).\\n\\n*10] The authors state that \\\"In contrast, if the agent could measure a quantity \\u03a6t which has a high impact on the return but is not correlated to the agent action At , it could be far easier to learn Q(Xt , \\u03a6t , a).\\\" It is not clear why learning Q(Xt, a) is harder than Q(Xt , \\u03a6t , a). So far, Q(Xt, a) seems an easier function to approximate and most likely needs a fewer sample to learn Q(x, a) than something presumably complicated like Q(x, \\\\phi , a).*\\n\\nThis is a subtle point. First note that both functions approximate the return, and one has access to strictly more information (\\u03a6t), so in practice, your point is not true - it\\u2019s easy for the agent to ignore \\u03a6t if it\\u2019s not informative. In theory, the difficulty of learning the average (through monte-carlo return) is driven by its variance. 
The variance of the target of Q(Xt,a) is Var(Gt|Xt,a), which is higher on average than the variance Var(Gt|Xt,\\u03a6t,a) of the target of Q(Xt,\\u03a6t,a). This is because of the law of total expectation: Var(Gt|Xt,a) = E[Var(Gt|Xt,\\u03a6t,a)] + Var[E[Gt|Xt,\\u03a6t,a]]. The second term is non-negative, hence the inequality. Let us give a simple example (similar to our bandit problem). \\n\\nAssume that Gt = K + N(a,1), where K is a gaussian random variable with mean 0 and large standard deviation. \\nQ(a)=a, but to learn it, we are using samples with variances K^2+1. However, if you are given K in hindsight, the variance of the targets of a linear regression Q(K,a) = K+a only have variance 1, which is easy to learn.\\n\\n*11] In section 3.1, I strongly encourage the authors to elaborate more clearly on what they do. Is W a scaler? if yes, then how F can be constructed? Do you draw U,V,W each time step??*\\n\\nThank you, this is a typo, W is in R^K. The variables U,V,W are constant across all episodes. You can think of it as a random MDP. U,V,W is sampled separately for each seed, but otherwise kept constant across times.\\n\\n*12] Aside from many unclear statements in this paper that the authors can easily address, I could not find how the authors find \\\\phi. Since this is the main key component of the paper, it would be great if the authors could explain it in depth. I also could not find it clear in the appendix.*\\n\\nThis is all detailed in appendix A1 and A2. We will bring the most important elements in the main text.\"}",
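The bandit-style example above is easy to verify numerically; a minimal sketch, with the standard deviation of K set to an arbitrary large value:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, a = 10.0, 100_000, 1.0     # large exogenous noise, action value a
K = rng.normal(0.0, sigma, n)        # exogenous factor, independent of the action
G = K + rng.normal(a, 1.0, n)        # return G = K + N(a, 1)

# Targets for Q(a) have variance ~ sigma**2 + 1 (hard regression problem);
# given K in hindsight, the residual targets have variance ~ 1 (easy).
print(np.var(G))      # ~ 101: law of total variance, 1 + sigma**2
print(np.var(G - K))  # ~ 1
```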
"{\"title\": \"Response [2/4]\", \"comment\": \"*5] \\\"making it increasingly difficult to learn from classical reinforcement learning algorithms\\\", what the authors mean by learning from classical RL algorithm? and why the authors think a better credit assessment is needed and is the way to go. What motivates the authors to state the issue is the credit assignment?*\\n\\nWe mean the vast majority of model-free RL algorithms (policy gradient, Q learning and all their variants) that do not perform credit assignment beyond temporal one. The issue with credit assignment is precisely the one mentioned above: if an agent does not understand the fine grained effect of its actions and takes credit for all changes in the world, it cannot understand efficiently how to act. There will be too many confounding variables on all their actions, which make it essentially impossible to actually learn to act in the world.\\n\\n*3] \\\"Given a trajectory, model-free methods can in fact only learn about the actions that were actually taken to produce the data, and this limits the ability of the agent to learn quickly.\\\" Can you clarify this? I can use function approximation based methods, and then, the first part of the authors' statement is no longer true..*\\n\\n*7]\\\"This estimator updates the policy through the score term; note however the learning signal only updates the policy \\u03c0\\u03b8(a|Xt) at the value taken by action At = a \\\" I am not sure I understand this sentence. Is \\u03c0\\u03b8(a|Xt) the policy, or it is \\u03c0\\u03b8. Do authors have a different model for each state and action pair? Even in that case, since the need to normalize action probability, changing \\u03c0\\u03b8(a|Xt) will affect other \\u03c0\\u03b8(a|X) as well. Therefore, I am not sure what the authors mean here.*\\n\\n*8] Distinction between single action and all actions. In both propositions 1 and 2, it seems that the learning signal is provided for both actions. It is not clear to me how the authors make the distinction. Especially here \\\"The policy gradient theorem... shows it is possible to provide learning signal to all actions,\\\". I am not sure what the authors mean.*\\n\\n*9] To motivate the usage of phi, the authors talk about a scenario in a soccer game, which again I could not find useful, especially when they bring luck and skill. The authors state that \\\"When using the single-action policy gradient estimate, the outcome of the game being a victory, and ,assuming a \\u00b11 reward scheme, all her actions are made more likely\\\". How is it possible that all actions become more likely? when their probabilities should be sum to one? I am not sure again. Are the authors talking about using one trajectory for all the estimates? The update in proposition 1 shows that in the case the agent action does not change the outcome, then the gradient is zero.*\\n\\nAll these misunderstandings are related, so we address them together. We do use function approximation (in the form of neural networks), and it\\u2019s true that over time, with a large number of trajectories, agents can learn to interpolate and understand how actions are related to one another. 
\\n\\nWhile this is valuable, this is orthogonal to the fact that the learning signal itself may provide information about only the action which was taken (the random variable At), or other counterfactuals actions (a for other values than At).\\n\\nBy using the chain rule, for a softmax policy parametrized by a neural networks, the gradient of the loss with respect to parameters is the gradient of the loss with respect to the set of action logits times the gradient (jacobian) of the action logits with respect to the weights. The first term, defined by the choice of the RL algorithm, is the one we mean by \\u2018learning signal\\u2019. The second term is a function of the particular neural network architecture. While we use deep RL, our contribution is with respect to an RL algorithm. The benefits of the neural network architecture are orthogonal to that. \\n\\n*\\u2018Even in that case, since the need to normalize action probability, changing \\u03c0\\u03b8(a|Xt) will affect other \\u03c0\\u03b8(a|X) as well.\\u2019*\\n\\nThis is technically correct, but the learning signal obtained by normalization is trivial and uninformative when the number of actions is large. As an example, consider learning a classifier through supervised learning. At every step, the agent is shown an image, outputs an action in the form of a label, and is only told whether they got the answer right or wrong, then moves to the next image with no additional feedback. It\\u2019s true that getting the answer wrong slightly increases the probability of all other labels, but this is barely learning at all (and indeed we don\\u2019t recommend learning a classifier that way).\\n\\nBut one could envisage getting feedback beyond the reward, perhaps not the label per se, but a hint (\\u2018this is a mammal\\u2019), in which case even though you wrongly guessed dog, you know now to reinforce the probabilities of all mammals, and also decrease the probabilities of other classes than the one that was wrongly guessed.\\n\\n(continued)\"}",
"{\"title\": \"Response [1/4]\", \"comment\": \"Thank you for your review and questions. We answer your questions below, and will ensure the updated revision reflects those clarifications.\\n\\n1] *I did not get what the authors mean by luck or skill. These terms do not seem to be coherent terms in this paper. I highly encourage the authors to rethink such usage. Unless the authors mathematically define it in the paper.*\\n\\nThe \\u2018luck vs skill\\u2019 metaphor is only here to guide intuition (though our method goes beyond disentangling a simple notion of luck). In RL learning via policy gradients, agents reinforce actions that led to outcomes with higher reward than expected. Those higher rewards could have been obtained through a skillful choice of action, or because of \\u2018luck\\u2019 ( ie exogenous variables not under the control of the agent). When both factors affect the outcome, it can be hard to understand what is the contribution of the choice of action and of external factors. Say for example a person starts a business and gets very successful. Did they get lucky, having essentially bet on the right horse (the business is in an area that would get much higher demand than expected), or did they have great intuition?\\n\\n*2] \\\"Another issue of model-free methods is that counterfactual reasoning, i.e. reasoning about what would have happened had different actions been taken with everything else remaining the same, is not possible.\\\" Can the authors clarify it? Why is it not? When I learn a Q function, that tells me what would be the expected return if I choose other actions following the same policy, right? If you mean evaluating other policies is not possible, I still doubt the statement is true.*\\n\\nThis is an interesting question; RL practitioners typically argue that Q functions provide a counterfactual, as they provide an average estimate of the reward for other actions. We argue this is a very limited counterfactual (they are technically *not* counterfactual in the sense of causality theory per Pearl), because they average over all possible outcomes with a similar starting state. What we mean by counterfactual is a finer notion: 'what would have happened in this very same episode (which is what we mean by \\u2018everything else remaining the same\\u2019), had I taken another action?'. \\n\\nTo explain the difference, let us consider a very simple example. At the start of the day you receive a weather report (state x) that tells you there is a 50/50 percent chance of rain. You have to decide whether to take an umbrella or not (action a). \\nIf it rains and you carry an umbrella, you get a reward of 1, but if you don\\u2019t, you get a reward of -1 for getting soaked. Conversely, if it does not rain and you have an umbrella, you get a reward of -1 (due to umbrellas being cumbersome to carry around for no reason), and +1 if you don\\u2019t carry an umbrella.\\n\\nIn this scenario, the Q(x,a) function, where x={the weather report} and a={carrying an umbrella or not} is 0 for both actions. This is because in the system described above, carrying an umbrella or not is reward-equivalent.\\nNow imagine that you decide not to carry an umbrella, and get rained on (R=-1). 
A \\u2018true\\u2019 counterfactual here corresponds to understanding that *on that particular day* (in this particular episode), carrying an umbrella would have in fact resulted in a reward of +1 (and not 0 as the vanilla Q function indicates).\\n\\nNote this intuition can be formalized using our CCA estimator. In this example, an agent could discover that Phi=\\u2019presence of rain\\u2019 affects the rewards, but is not caused by carrying an umbrella or not (though a superstitious agent would probably believe it does). \\nQ(report, no umbrella, rain) is the factual outcome (evaluating to -1), while Q(report, umbrella, rain) is the episode-specific counterfactual one, which would evaluate to +1.\\n\\n*4] \\\"actions taken by the agent will only affect a vanishing part of the outcome\\\". What do the authors mean here? What does the vanishing part of the outcome refer to?*\\n\\nBy vanishing we mean decreasing to the point of becoming very small. If you consider realistic environments in which an agent is a small part of a large system (due to the presence of complex, hard-to-model stochasticity as well as many other agents), the agent will typically only affect a small part of the overall trajectory of the system.\"}",
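To make the umbrella example above concrete, here is a minimal sketch (illustrative only; the reward function and names restate the example, they are not code from the paper) computing the marginal Q-values and the rain-conditioned counterfactual values:

```python
# Umbrella example: the marginal Q(x, a) is 0 for both actions, while
# conditioning on the hindsight variable phi = rain recovers the
# episode-specific counterfactual values of +1 / -1.
rain_prob = 0.5  # the weather report says 50/50

def reward(action, rain):
    # +1 for matching the weather, -1 otherwise, as in the example above.
    if rain:
        return 1.0 if action == "umbrella" else -1.0
    return -1.0 if action == "umbrella" else 1.0

for a in ("umbrella", "no_umbrella"):
    q_marginal = rain_prob * reward(a, True) + (1 - rain_prob) * reward(a, False)
    print(a, "Q(x,a) =", q_marginal,
          "| Q(x,a,rain) =", reward(a, True),
          "| Q(x,a,dry) =", reward(a, False))
```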
"{\"title\": \"Unclear and vague, but can be improved. I could not find how the authors find the key component of the method, i.e., phi.\", \"review\": \"In this paper, the authors develop a new policy gradient method to reduce the variance in the gradient estimations.\\nIn the commonly used policy method, the bias is a function of the state. e.g., V(x_t). In this paper, the authors propose to use bias V(x_t,\\\\phi_t) where \\\\phi_t is a statistics of future events such that \\\\phi_t is conditionally independent of the action at time t.\\n\\nThe authors show that using such statistics in V(x_t,\\\\phi_t) results in a reduction in the gradient estimate used in policy gradient methods. \\n\\n\\nLater, the authors also show that their method performs well in practice.\\n\\n\\nThere is a set of problems with the paper's presentation, which resulted in the negative evaluation.\\nThe analysis in the paper is straightforward and also easy to follow. However, I could not find how the proposed algorithm learns the \\\\phi.\\n\\nI encourage the authors to improve the clarity, presentation, and language in this paper. \\n\\n1) I did not get what the authors mean by luck or skill. These terms do not seem to be coherent terms in this paper. I highly encourage the authors to rethink such usage. Unless the authors mathematically define it in the paper. \\n\\n2) \\\"Another issue of model-free methods is that counterfactual reasoning, i.e. reasoning about what would have happened had different actions been taken with everything else remaining the same, is not possible.\\\" \\n\\nCan the authors clarify it? Why is it not? When I learn a Q function, that tells me what would be the expected return if I choose other actions following the same policy, right? \\nIf you mean evaluating other policies is not possible, I still doubt the statement is true. \\n\\n3) \\\"Given a trajectory, model-free methods can in fact only learn about the actions that were actually taken to produce the data, and this limits the ability of the agent to learn quickly.\\\"\\n\\nCan you clarify this? I can use function approximation based methods, and then, the first part of the authors' statement is no longer true. The second statement is inaccurate since the author did not quantify with respect to what method the quickness in learning is compared to.\\n\\n4) \\\"actions taken by the agent will only affect a vanishing part of the outcome\\\". What do the authors mean here? What the vanishing part of the outcome refers to?\\n\\n5) \\\"mak- ing it increasingly difficult to learn from classical reinforcement learning algorithms\\\", what the authors mean by learning from classical RL algorithm? and why the authors think a better credit assessment is needed and is the way to go. What motivates the authors to state the issue is the credit assignment?\\n\\n6) \\\"Second, removing the value function V (Xt) from the return Gt does not bias the estimator and typically reduces variance\\\". Would the author refer to a paper stating that removing the value function V (Xt) from the return Gt typically reduces variance?\\n\\n7)\\\"This estimator updates the policy through the score term; note however the learning signal only updates the policy \\u03c0\\u03b8(a|Xt) at the value taken by action At = a \\\"\\nI am not sure I understand this sentence. Is \\u03c0\\u03b8(a|Xt) the policy, or it is \\u03c0\\u03b8. Do authors have a different model for each state and action pair? 
Even in that case, since action probabilities need to be normalized, changing \\u03c0\\u03b8(a|Xt) will affect other \\u03c0\\u03b8(a|X) as well. Therefore, I am not sure what the authors mean here.\\n\\n8) Distinction between single action and all actions.\\nIn both propositions 1 and 2, it seems that the learning signal is provided for all actions. It is not clear to me how the authors make the distinction. Especially here \\n\\\"The policy gradient theorem from (Sutton et al., 2000), which we will also call all-action policy gradient, shows it is possible to provide learning signal to all actions,\\\".\\nI am not sure what the authors mean. \\n\\nThe authors state that \\\"A particularity of the all-actions policy gradient estimator is that the term at time t for updating the policy \\u2207\\u03c0(a|Xt)(Q(Xt, a) depends only on past information;\\\" but it seems to me that Q is a function of the measure on the future. Isn't that the case?\\n\\n9) To motivate the usage of phi, the authors talk about a scenario in a soccer game, which again I could not find useful, especially when they bring up luck and skill. \\nThe authors state that \\\"When using the single-action policy gradient estimate, the outcome of the game being a victory, and, assuming a \\u00b11 reward scheme, all her actions are made more likely\\\". \\nHow is it possible that all actions become more likely when their probabilities should sum to one?\\nI am not sure again. Are the authors talking about using one trajectory for all the estimates? The update in proposition 1 shows that if the agent's action does not change the outcome, then the gradient is zero.\\n\\n\\n10) The authors state that\\n\\\"In contrast, if the agent could measure a quantity \\u03a6t which has a high impact on the return but is not correlated to the agent action At, it could be far easier to learn Q(Xt, \\u03a6t, a).\\\"\\nIt is not clear why learning Q(Xt, a) is harder than learning Q(Xt, \\u03a6t, a). So far, Q(Xt, a) seems an easier function to approximate and most likely needs fewer samples to learn than something presumably complicated like Q(x, \\\\phi, a).\\n\\n11) In section 3.1, I strongly encourage the authors to elaborate more clearly on what they do. Is W a scalar? If yes, then how can F be constructed?\\n\\nDo you draw U, V, W at each time step?\\n\\n\\n\\n12) Aside from many unclear statements in this paper that the authors can easily address, I could not find how the authors find \\\\phi. Since this is the key component of the paper, it would be great if the authors could explain it in depth. I also could not find a clear explanation in the appendix. \\n\\n13) I strongly encourage the authors to expand their study on plain MDPs before getting to the POMDP complications. It is not clear where the performance gain comes from.\\n\\n\\n................................................................\\n\\nPost rebuttal. The confidence rating is reduced.\\nI might have been mistaken, but the authors might find this paper useful: \\\"Troubling Trends in Machine Learning Scholarship\\\".\\nAgain, I might be wrong, and the mentioned paper might be of no use here.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"#### Summary:\\nThe paper explores a new approach to credit assignment that complements existing work. It focuses on model-free approaches to credit assignment using hindsight information. In contrast to some prior work on this topic, e.g., (Harutyunyan et al. 2019), the paper does not rely explicitly on hand-crafted information, but instead learns to extract useful hindsight information. The contributions of the paper are two-fold. First, the paper introduces two new policy gradient estimators, FC-PG and CCA-PG, and it proves that the novel gradient estimators are unbiased. Second, it provides experimental evidence that the novel estimators are beneficial compared to some prior work (in particular (Harutyunyan et al. 2019)). \\n\\n\\n#### Comments:\\n\\nOverall, I found the contributions of the paper interesting, but I'm somewhat on the fence about this paper due to the following pros and cons.\", \"pros\": \"The paper is clearly written, easy to follow, and interesting to read. It provides a good overview of the related work, and motivates well the problem at hand. Furthermore, the paper showcases that its algorithmic approach has theoretical grounding, and it experimentally verifies that it's beneficial compared to concurrent approach from (Harutyunyan et al. 2019).\", \"cons\": \"Given that a very similar type of counterfactual credit assignment approach has already been proposed in prior work, the technical contributions (theorems) of the paper seem somewhat incremental. The experiments, while indicating potential benefits of the proposed approach, utilize relatively simple environments compared to some of the recent papers on credit assignment (e.g. (Arjona-Medina et al. 2019), (Guez et al 2019)). Moreover, the experiments could include more state of the art baselines.\\n\\nApart from these high level comments, the following comments include suggestions for improvements and questions.\", \"related_work\": \"Since the hindsight credit assignment of (Harutyunyan et al. 2019) is a special case of FC-PG, this connection should be mentioned earlier in the paper, not just in the related work section. The flow of the paper is currently misleading, given that there is prior work that does propose quite similar ideas, e.g., the content between the title to section 2.4 does not seem to be reflect relevant prior work. Perhaps referencing relevant papers in earlier sections, or moving the related work section, would resolve this issue.\", \"notation\": \"Notation in the paper often omits important dependences, making some of the calculations confusing or not immediately clear. In the interest of making the claims more precise, it would be very useful to add important dependencies where needed. For example, in equation (1), does $P(a|X_t, \\\\Phi_t)$ depend on policy $\\\\pi$? Moreover, the notation does not seem to be consistent, e.g., policy $\\\\pi$ sometimes has dependency on $\\\\theta$ sometime not (in gradient calculations).\", \"appendix\": \"I think adding some parts from the appendix could improve the clarity of the content. In particular, the last paragraph on Page 3 that starts with 'We ensure that these statistics...' is not providing sufficient explanations regarding the technical content important for understanding the results. 
It is also not clear if all the content in the appendix is relevant for the results described in the main text.\", \"minor_typos\": \"-removed from the advantage, resulting a significantly lower variance estimator. --- resulting in?\\n-$\\\\lambda_{IM}$ does not seem to be defined before being used (in the paragraph before section 3.2)\\n-and the the benefits of the more general FC-PG and all-actions estimators. --- remove one 'the'?\\n\\n#### Questions:\\n\\nA) I'm a bit puzzled by the discussion regarding the conditional independence requirement in Section 2.5. Why is this an 'intuitive' requirement? How does it influence the interpretation in the paragraph before Theorem 3? How does this compare to (Harutyunyan et al. 2019) argument that '$h(.)$ quantifies the relevance of action a to the future state $X_k$'? \\n\\nB) The proof of Theorem 3 and Theorem 4 in Section D3 says that the theorems follows from Theorem 1 and Theorem 2 given the conditional independence assumption. Could you explain in more detail why the second statement (about variance) in Theorem 3 follows from Theorem 1 and 2? \\n\\nC) How does this approach compare to Ferret et al.: Self-Attentional Credit Assignment for Transfer in Reinforcement Learning?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Novel and interesting work on hindsight counterfactuals with perhaps some missing evaluations\", \"review\": \"This work attempts to address the problem posed by high reward variance and low sample efficiency in model-free RL algorithms. The proposal is to use counterfactuals to do finer-grained credit-assignment and reasoning about alternative actions without having to learn a potentially difficult environment model.\\n\\nThis is done by conditioning the value function on a random variable $\\\\phi$ that attempts to capture everything else about the future trajectory not resulting from the current action. This is done by maximizing the independence between $\\\\phi$ and $A$ given the current state. A classifier that predicts action based on $\\\\phi$ is required to do the above. This is also learned from data.\", \"claimed_contributions\": [\"Proposing a set of environments with difficult credit assignment.\", \"Novel algorithms that use counterfactuals that are unbiased and guarantee lower variance.\", \"The approach seems novel and interesting.\", \"The claimed contributions are supported to a large extent by theory and experimentation.\", \"The idea of constructing value functions conditioned on future trajectory information is not novel (Hindsight Credit Assignment does this), but the idea of learning the conditioning variable is (HCA uses states or returns).\", \"The paper is clearly written. The illustrative example of counterfactuals in hindsight with Alice and Megan is helpful.\", \"The approach is evaluated first on a bandit task and then on different versions of a partially observable gridworld environment and finally on a multi-task setting.\", \"Comparison to vanilla policy gradient and a couple of versions of prior work (HCA) over a substantial number of random seeds.\", \"The task interleaving setting is an interesting benchmark for multi-task settings.\", \"This work builds off of HCA and mainly addresses the case of high variance in rewards where the prior work seems to fail. It performs similar to vanilla PG on environments with little randomness in reward for similar actions, but better than HCA.\", \"The authors claim that they do not require a model of the environment but a classifier $h(A_T|X_T, \\\\phi)$ is learned which resembles an inverse model. Even though the approach does not require building a forward model, I am curious to know the performance of a model-based approach such as by Buesing et al. trained on the same data available for $h(A_T|X_T, \\\\phi)$ in these environments. Is it difficult to learn a model for the proposed tasks?\", \"I think the work contains enough novelty, the writing is clear and the experimentation is extensive. But, I am unsure whether to recommend acceptance without a model-based baseline trained on data available to the classifier used in this approach.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting but confusing\", \"review\": \"The message and paper propose a couple of environments where there is exogenous noise added to the reward function and the particular method in the paper specifically looks at this type of noise. While the method proposed may work in these types of environments it's not clear if more interesting environments do have these properties and we should be more concerned with this problem or that the environments used in the paper were specifically constructed to fit the use case of the algorithm.\\n\\nThe proposed method in the paper does offer interesting insight into how certain temporary consistent variables and the identification of such variables can help decrease the variance over policy estimates. However, the results in the paper are not overly convincing with respect to understanding the importance of this method on more realistic tasks that the community is generally interested in.\", \"some_more_detailed_notes\": [\"The introduction does not state that the particular credit assignment problems being looked into is that of partially observed environments. Overall, I find the writing in the introduction to not motivate the problem well our lead the reader towards what to expect in the rest of the paper. This makes it very difficult to understand and appreciate the paper.\", \"If it's still not clear from the middle section to let the detail of the contribution is going to be period by this point it sounds like the method is just going to be a modification to a q function.\", \"There does not appear to be my significant information on how the mutual information metric is computed between the action space and latent variable space.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
ijVgDcvLmZ | FSV: Learning to Factorize Soft Value Function for Cooperative Multi-Agent Reinforcement Learning | [
"Yueheng Li",
"Tianhao Zhang",
"Chen Wang",
"Jinan Sun",
"Shikun Zhang",
"Guangming Xie"
] | We explore energy-based solutions for cooperative multi-agent reinforcement learning (MARL) using the idea of function factorization in centralized training with decentralized execution (CTDE). Existing CTDE-based factorization methods are susceptible to relative overgeneralization, a well-known game-theoretic pathology in which agents settle on a suboptimal Nash equilibrium. To resolve this issue, we propose a novel factorization method for cooperative MARL, named FSV, which learns to factorize the joint soft value function into individual ones for decentralized execution. Theoretical analysis shows that FSV solves a rich class of factorization tasks. Our experiment on the well-known Max of Two Quadratics game shows that FSV fully converges to the global optimum in continuous tasks by local searching in the joint action space. We evaluate FSV on a challenging set of StarCraft II micromanagement tasks, and show that FSV significantly outperforms existing factorization multi-agent reinforcement learning methods. | [
"cooperative MARL",
"value function factorization",
"stochastic policy",
"continuous tasks"
] | Reject | https://openreview.net/pdf?id=ijVgDcvLmZ | https://openreview.net/forum?id=ijVgDcvLmZ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"fXzZXDk3DJ6",
"Fyrv4bTm9U0",
"FMl6KrIsYsw",
"bWIt6D2zgCz",
"rV-jjwHSMvD",
"T9NyHMvBsz",
"S_uHz2VaL8N",
"77WkftOwD3",
"oZG3QrAvOdU",
"AuIqMS0dAbA",
"TYd-nZDu860"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040511962,
1606155875292,
1606155320395,
1606155066069,
1606153921680,
1606152766494,
1604519321649,
1603984832194,
1603947890427,
1603876559623,
1603854020138
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3791/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3791/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3791/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3791/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3791/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3791/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3791/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3791/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3791/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3791/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Although the paper presents some interesting ideas, in general the reviewers agree that the paper lacks clear results and is not an easy read. The paper proposes a factorisation of value functions, a topic that has received quite some attention in the literature (e.g. QPLEX), and it seems that their is not sufficient innovation in the proposed method in the paper. There are also a number of claims in the paper (e.g. partial observability etc.) with which some of the reviewers disagree, and should be discussed more carefully in a revised version of the article, that all in all seems to need more work.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for your comments.\\n\\nIt should be mentioned that our work is different from QPLEX. QPLEX considers the Q value decomposition under the IGM architecture. Its policy is epsilon-greedy, which is suitable for discrete tasks. Our proposed FSV considers the soft policy factorization under the IGO architecture which is not only suitable for discrete but also for continuous tasks. Therefore, our work and QPLEX has different motivation and contribution. We will cite QPLEX in our future version. Hope our reply to Reviewer 2 can clear your doubts.\", \"q1\": \"Does the linear decomposition of Qtot of Qi and Vi limit the representation ability of FSV?\\n\\nThe nonlinearity can be obtained by the weight vector since we have end-to-end learned it as a function of $\\\\tau$ and $u$ which can be regarded as a function of $Q_i$.\\n\\nThank you for providing the comments in Q2 and Q3 that will help us analyze our method. We will systematically carry out ablation experiments.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thanks for your thoughtful comments.\\n\\nIt should be mentioned that we have not copied QPLEX since our work and QPLEX has different motivation and contribution.\\nMotivation. QPLEX followed the IGM principle and proposed a duplex dueling network to achieve a complete IGM function class. However, IGM cannot be applied to continuous action space since its argmax operator requires the discreteness of Q-values. \\nThus, QPLEX is not suitable for continuous works.\\nTo address the continuous tasks, we rethink factorizability in a policy-based manner.\\nIn addition, considering that [1] claimed that overcoming relative overgeneralization required a more explorative approach than simple epsilon-greedy action selection, we introduced a novel factorization algorithm based on the soft policy, called FSV.\\nIn summary, QPLEX considers the Q value decomposition under the IGM architecture. Its policy is epsilon-greedy, which is suitable for discrete tasks. Our proposed FSV considers the soft policy factorization under the IGO architecture which is not only suitable for discrete but also for continuous tasks.\\nTherefore, our work and QPLEX has different motivation and contribution.\\n\\nAlthough some details of our work look similar to QPLEX, they are actually different. \\nEquation 7 in FSV (FSV:equ7) seems similar to equation 10 in QPLEX (QPLEX:equ10). However, the meanings of Q and V are different in the two papers. The Q in FSV represents the expected return with entropy term, and the V in FSV is the log partition function of Boltzmann distribution, while the Q and V in QPLEX come from the dueling Q structure. In addition, FSV deduced FSV:equ7 directly from the integration of IGO and soft policy, while QPLEX devised QPLEX:equ10 to enforce IGM consistency.\\nBoth our work and QPLEX utilized an end-to-end learning architecture to approximate weight vectors, the weight vectors represent different meanings in the two papers. \\nIn QPLEX, the weight vector motivation may origin from Qatten [2]. In our work, the weight vector reveals the connection between credit assignment and exploration. \\nSpecifically, the larger $\\\\lambda_i$ (which means its $\\\\alpha_i$ is smaller and its policy has less randomness) the more important this agent contributes to the team, thus its policy should be more greedy to keep the team return. Agents with smaller $\\\\lambda_i$(which means its $\\\\alpha_i$ is larger and its policy has more randomness) can explore more arbitrarily because their performance doesn\\u2019t matter much. \\nIn fact, such an end-to-end architecture has been widely used in previous works, such as QMIX and Qatten.\\n\\nTheorem 2 was regarded as one of the main contributions because the counterpart in QPLEX is its main contribution by the reviewer. However, theorem 2 in FSV is a simple corollary of the integration of IGO and soft policy, since soft policy degenerate to the greedy policy when $\\\\alpha=0$ and IGO degenerate to IGM when the policy is greedy. Therefore, it is clear that FSV can achieve IGM in such a special case.\\n\\nWe stop the gradient there because we found such a trick helps when we duplicated Qatten. This trick will not change the gradient of the weight vector, and it only removes the coefficient $\\\\lambda_i$ of $Q_i's$ gradient of backpropagation which helps to reduce variance especially when $\\\\lambda_i$ is not normalized.\\n\\n[1] Wei E, Luke S. 
Lenient learning in independent-learner stochastic cooperative games[J]. The Journal of Machine Learning Research, 2016, 17(1): 2914-2955.\\n\\n[2] Yang Y, Hao J, Liao B, et al. Qatten: A General Framework for Cooperative Multiagent Reinforcement Learning[J]. arXiv preprint arXiv:2002.03939, 2020.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for your comments.\", \"q1\": \"what is the relation between IGO optimal policy and soft policy when the former is not representable by latter?\\n\\nIGO is a definition of the factorization of cooperative tasks in the centralized training phase, which describes the consistency between the optimal joint policy and the collection of individual optimal policies of each agent.\\nSpecifically, considering the decentralized execution, each individual policy is independent, then, the probability of choosing joint action $\\\\boldsymbol{u}$ naturally equals the product of the probabilities of choosing individual action $u_i$.\\nThat is, $\\\\prod_{i=1}^{N}\\\\pi_i(u_i|\\\\tau_i) = \\\\pi_{tot}(\\\\boldsymbol{u}|[\\\\tau])$, where $\\\\tau_i$ is a local action-observation history of agent $i$, and $[\\\\tau] = [\\\\tau_i]_{i=1}^N$.\\n\\nHowever, when all individual policies based on decentralized information are optimal, the joint policy $\\\\pi_{tot}$ may be optimal or not.\\nThat is to say, decentralized individual optimal policies may achieve global optimization or not.\\nThus, in the centralized training, when there exists $\\\\pi_{tot}^*(\\\\boldsymbol{u}|\\\\boldsymbol{\\\\tau}) = \\\\prod_{i=1}^{N}\\\\pi_i^*(u_i|\\\\tau_i)$, where $\\\\boldsymbol{\\\\tau} \\\\in \\\\mathcal{T}^N$ is a joint action-observation histories, the global optimization of a cooperative task can be achieved by decentralized individual policies.\\nIn this case, we say the joint policy can be factorized by individual policies or the task itself is factorizable.\\nThe relationship between IGO and soft policy is like the relationship between IGM and VDN QMIX QTRAN. Considering that IGM in general corresponds to greedy action selection, whose form of argmax can\\u2019t be applied with continuous Q-value, we redefine IGO for continuous space from the perspective of arbitrary policy. This means IGO\\u2019s optimal policy can be chosen as not only greedy policy (at which we prove IGO collapse into IGM) but also soft policy if we need.\", \"q2\": \"How does the local soft policy iteration guarantee joint policy improvement?\\n\\nConsidering that $\\\\pi_{tot}=\\\\prod_{i=1}^N\\\\pi_i$ and IGO gives that $\\\\pi_{tot}^*=\\\\prod_{i=1}^N\\\\pi_i^*$, we have \\n\\n\\\\begin{equation}\\\\nonumber\\n D_{KL}(\\\\pi_{tot}(u|\\\\tau)||\\\\pi_{tot}^*(u|\\\\tau)) =\\\\int\\\\pi_{tot}(u|\\\\tau) log\\\\frac{\\\\pi_{tot}(u|\\\\tau)}{\\\\pi_{tot}^*(u|\\\\tau)}du = \\\\int\\\\prod_{i=1}^N\\\\pi_i(u_i|\\\\tau_i)\\\\sum_{i=1}^Nlog\\\\frac{\\\\pi_i(u_i|\\\\tau_i)}{\\\\pi_i^*(u_i|\\\\tau_i)}du =\\\\sum_{i=1}^N\\\\int\\\\pi_i(u_i|\\\\tau_i)\\\\pi_{-i}(u_{-i}|\\\\tau_{-i})log\\\\frac{\\\\pi_i(u_i|\\\\tau_i)}{\\\\pi_i^*(u_i|\\\\tau_i)}du_idu_{-i} =\\\\sum_{i=1}^N\\\\int\\\\pi_i(u_i|\\\\tau_i)log\\\\frac{\\\\pi_i(u_i|\\\\tau_i)}{\\\\pi_i^*(u_i|\\\\tau_i)}du_i =\\\\sum_{i=1}^ND_{KL}(\\\\pi_{i}(u_i|\\\\tau_i)||\\\\pi_{i}^*(u_i|\\\\tau_i))\\n\\\\end{equation}\\nwhere we rewrite $\\\\pi_{tot}=\\\\pi_i\\\\pi_{-i}$ and $du=du_idu_{-i}$. The integral of -i equals 1 due to the probability normalization. Thus, minimizing the KL of individual policies through soft policy iteration will minimize the KL of joint policy. The following proof is the same as in SAC's paper.\", \"q3\": \"What is the meaning of * in sec 3.2? What does eq12 imply?\\n\\nThe $\\\\pi$ with * represents the optimal policy. The $\\\\pi$ without * represents the actual policy during training. 
The Q with * represents the ideal Q-value. The Q without * represents the actual Q-value during training. \\nEq12 represents the soft policy improvement of each agent, where $\\\\Pi$ is some set of policies, such as a parameterized family of Gaussian distributions.\", \"q4\": \"How did eq 18 come about?\\n\\nJust like Eq12 in SAC, Eq18 is the way we realize soft policy improvement: we minimize the KL divergence between the current individual policy and the optimal individual policy. \\n\\nQ7. Proof of Theorem 2.\\n\\n$\\\\epsilon$-greedy indeed cannot be matched by a soft policy in general. But it can be matched when $\\\\epsilon=\\\\alpha=0$, which is all the proof needs. In fact, Eq7 can hold without IGM if $\\\\lambda$ is well constructed by a neural network.\"}",
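As a numerical check of the KL decomposition derived in the response above (a sketch with arbitrary example distributions, not taken from the paper): for independent per-agent policies, the KL divergence of the joint product policy equals the sum of the per-agent KL divergences.

```python
# Check: D_KL(prod_i pi_i || prod_i pi_i*) = sum_i D_KL(pi_i || pi_i*)
# for independent discrete per-agent policies. Distributions are arbitrary.
import itertools
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

pi = [[0.2, 0.5, 0.3], [0.6, 0.1, 0.3]]       # current individual policies
pi_star = [[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]]  # optimal individual policies

joint = [pi[0][a0] * pi[1][a1]
         for a0, a1 in itertools.product(range(3), range(3))]
joint_star = [pi_star[0][a0] * pi_star[1][a1]
              for a0, a1 in itertools.product(range(3), range(3))]

lhs = kl(joint, joint_star)
rhs = sum(kl(p, q) for p, q in zip(pi, pi_star))
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)  # identical up to floating-point error
```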
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thanks for your comments.\\n\\nOur contribution has three parts. First, we propose the definition of decomposable tasks from the perspective of policy rather than of Q-value, called IGO, which extends greedy-policy-based (value-based) IGM to arbitrary policy.\\nSecond, based on the IGO, we proposed a new factorize method with an energy-based policy. It should be mentioned that it is the first time to factorize soft policies.\\nThird, FSV is not only suitable for discrete action space tasks but also for continuous action space tasks. In addition, FSV solved the relative overgeneralization problem of factorizing methods in continuous action space.\\n\\nIn fact, although IGO requires the consistency of policy which seems like requiring the consistency of all actions, it is not a stronger definition than IGM. The requirement comes from the task itself. That is to say, if achieving the best performance in a given task requires greedy joint action selection, IGO is equivalent to IGM. But if achieving the best performance requires a stochastic joint policy, IGO works while IGM doesn't. In other words, IGO doesn't give a stronger constraint on a given task but extends the definition of factorizability in more tasks. To realize IGO, we adopt soft policy and end-to-end learn the weight vector $\\\\lambda$. Considering that soft policy can collapse into greedy policy when $\\\\alpha=0$, we guarantee IGM constraint in this special case.\\n\\nThere are two reasons for the sub-optimal action selection (relative overgeneralization). First, as QTRAN pointed out, QMIX and VDN cannot represent the optimal due to its lack of expressive ability. Second, as MASQL [1] pointed out, relative overgeneralization prevents policy gradient methods from achieving better coordination. Therefore, there are two ways to solve this problem. One is to enhance the expressive ability of the function class such as QTRAN and QPLEX, which can solve this problem in discrete action space.\\nAnother is adopting a more explorative approach than a simple epsilon-greedy action selection as [2] says. We have shown in the matrix game that expressive ability is the key to solve this problem in discrete action space. However, in continuous tasks, even with a fully centralized critic that has the best expressive ability like MADDPG, it will still suffer relative overgeneralization [1]. MASQL integrated MADDPG and soft policy and improves its performance, which motivates us to adopt a soft policy under IGO constrain. In conclusion, FSV with the full expressive ability and more explorative soft policy is reasonable to have a better performance.\\n\\nThe multi-head attention structure is inspired by Qatten.\\n\\n[1] Wei, E., Wicke, D., Freelan, D., & Luke, S. (2018). Multiagent soft Q-learning. ArXiv, March.\\n\\n[2] Wei E, Luke S. Lenient learning in independent-learner stochastic cooperative games[J]. The Journal of Machine Learning Research, 2016, 17(1): 2914-2955.\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"Thanks for your comments.\\n\\nIGO is a definition of the factorization of cooperative tasks in the centralized training phase, which describes the consistency between the optimal joint policy and the collection of individual optimal policies of each agent.\\nSpecifically, considering the decentralized execution, each individual policy is independent, then, the probability of choosing joint action $\\\\boldsymbol{u}$ naturally equals the product of the probabilities of choosing individual action $u_i$.\\nThat is, $\\\\prod_{i=1}^N \\\\pi_i(u_i|\\\\tau_i) = \\\\pi_{tot}(\\\\boldsymbol{u} | [\\\\tau])$, where $\\\\tau_i$ is a local action-observation history of agent $i$, and $[\\\\tau] = [\\\\tau_i]_{i=1}^N$.\\n\\nHowever, when all individual policies based on decentralized information are optimal, the joint policy $\\\\pi_{tot}$ may be optimal or not.\\nThat is to say, decentralized individual optimal policies may achieve global optimization or not.\\nThus, in the centralized training, when there exists $\\\\pi_{tot}^*(\\\\boldsymbol{u}|\\\\boldsymbol{\\\\tau}) = \\\\prod_{i=1}^{N}\\\\pi_i^*(u_i|\\\\tau_i)$, where $\\\\boldsymbol{\\\\tau} \\\\in \\\\mathcal{T}^N$ is a joint action-observation histories, the global optimization of a cooperative task can be achieved by decentralized individual policies.\\nIn this case, we say the joint policy can be factorized by individual policies or the task itself is factorizable.\\n\\nWe deduced Equation 6 through the integration of IGO and soft policy, that is, $Q_{tot}(\\\\tau,u)=\\\\sum_{i=1}^N \\\\frac{\\\\alpha}{\\\\alpha_i}[Q_i(\\\\tau_i,u_i) - V_i(\\\\tau_i)] + V_{tot}(\\\\tau)$.\\nNaturally, the $Q_i$ and $V_i$ can be approximated by networks.\\nThen, we should find a way to approximate $\\\\alpha$ and $\\\\alpha_i$.\\nConsidering that the weight vector $\\\\frac{\\\\alpha}{\\\\alpha_i}$ involves the credit assignment of each agent, we can use some global information in the centralized training phase to end-to-end learn the weight vector directly. \\n\\nPrevious work [1] showed a great performance gap between factored and no-factored methods in SCII such as MADDPG and MAAC[2], it points out that the factorization is the key to the performance. Thus, we compare the factored methods in SCII.\\nIn the Max of Two Quadratics game, MASQL[3] which also utilized soft policy can only converge the global optima for 72$\\\\%$ of the time while MADDPG never converged to it. \\nThe factored methods which are admitted new such as WQMIX[4] MAVEN[5] and QPLEX[6] can\\u2019t be applied in continuous action space. [1] and [7] (of which we follow the implementation in our experiments) extended QMIX and VDN to continuous action space, however, they also inherit their shortcomings as shown in Max of Two Quadratics game.\\n\\nThe relative overgeneralization problem represents a phenomenon that a sub-optimal joint action is preferred than the global optimum. This may be due to a lack of expressive ability to represent the optimum or just getting stuck in the sub-optimum. The previous work [8] have shown that giving higher weight to the larger reward helps to overcome relative overgeneralization. This idea is generally the same as MAVEN[5] which visits the global optimum more often and WQMIX[4] which gives a higher weight to the loss of larger $Q_{tot}$. Therefore, we think its beneficial to consider relative overgeneralization problem in factorize method.\\n\\n[1]de Witt, C. S., Peng, B., Kamienny, P. 
A., Torr, P., B\\u00f6hmer, W., & Whiteson, S. (2020). Deep Multi-Agent Reinforcement Learning for Decentralized Continuous Cooperative Control. ArXiv.\\n\\n[2] Iqbal, S. & Sha, F. (2019). Actor-Attention-Critic for Multi-Agent Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:2961-2970\\n\\n[3] Wei, E., Wicke, D., Freelan, D., & Luke, S. (2018). Multiagent soft Q-learning. ArXiv, March.\\n\\n[4] Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. Tabish Rashid, Gregory Farquhar, Bei Peng, Shimon Whiteson. NeurIPS 2020.\\n\\n[5] Mahajan, A., Rashid, T., Samvelyan, M., & Whiteson, S. (2019). MAVEN: Multi-agent variational exploration. Advances in Neural Information Processing Systems, 32(NeurIPS).\\n\\n[6] Wang, J., Ren, Z., Liu, T., Yu, Y., & Zhang, C. (2020). QPLEX: Duplex Dueling Multi-Agent Q-Learning. 1\\u201316. http://arxiv.org/abs/2008.01062\\n\\n[7] Wang, Y., Han, B., Wang, T., Dong, H., & Zhang, C. (2020). Off-Policy Multi-Agent Decomposed Policy Gradients. Cdm, 1\\u201320. http://arxiv.org/abs/2007.12322\\n\\n[8] Rashid, T., Samvelyan, M., de Witt, C. S., Farquhar, G., Foerster, J., & Whiteson, S. (2020). Weighted QMIX: Expanding Monotonic Value Function Factorisation. ArXiv.\"}",
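To make the factorization quoted in the response above concrete, here is a minimal sketch of $Q_{tot}(\tau,u)=\sum_i \frac{\alpha}{\alpha_i}[Q_i(\tau_i,u_i)-V_i(\tau_i)]+V_{tot}(\tau)$ with $V_i$ taken as the per-agent soft value; all numbers (temperatures, Q-tables, $V_{tot}$) are arbitrary illustrative assumptions, not values from the paper.

```python
# Sketch of Q_tot = sum_i (alpha/alpha_i) * [Q_i - V_i] + V_tot,
# with V_i = alpha_i * log sum_u exp(Q_i(u) / alpha_i). Numbers are arbitrary.
import math

alpha = 1.0
alphas = [0.5, 2.0]            # per-agent temperatures alpha_i
q = [[1.0, 0.2, -0.5],         # agent 0's Q_i over its 3 actions
     [0.0, 0.7, 0.3]]          # agent 1's Q_i

def soft_value(q_i, a_i):
    return a_i * math.log(sum(math.exp(x / a_i) for x in q_i))

v = [soft_value(qi, ai) for qi, ai in zip(q, alphas)]
v_tot = 0.4                    # stand-in for the learned V_tot(tau)

u = (0, 1)                     # one joint action
q_tot = sum((alpha / ai) * (q[i][u[i]] - v[i])
            for i, ai in enumerate(alphas)) + v_tot
print(q_tot)
```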
"{\"title\": \"Blind review\", \"review\": \"This paper describes a new method for learning factored value functions in cooperative multi-agent reinforcement learning. The approach uses energy-based policies to generate this factorization. The method is presented and experiments are given for smaller domains as well as starcraft.\\n\\nThe idea of learning factored value functions is promising for learning separate value functions for each agent that allow them to learn in a centralized manner and execute in a decentralized manner (centralized training and decentralized execution). Several methods have been proposed along these lines, but as the paper points out, they have limitations that makes them perform poorly in some problems. \\n\\nThe proposed approach in this paper has some promising experimental results, but there are questions about the novelty and significance of the method. Furthermore, evaluating these contributions is difficult due to the lack of clear details in the paper. \\n\\nIn particular, the details of the approach itself in 3 are not clear. Starting with Definition 1, it seems like IGO is using an optimal *centralized* policy. Is this what is meant? If so, why is this needed (as opposed to an optimal decentralized policy). It will typically be impossible to achieve a centralized policy with decentralized information. Furthermore, the energy-based policies are defined in 3.2, but 'key' ideas such as approximating the weight vector aren't fully explained making the exact approach hard to determine. Also, it is beneficial that the current theorems and proofs are included, but the lack of sufficient detail makes it hard to parse and evaluate them. \\n\\nThere are also similar max entropy approaches, such as the paper below. \\n\\nIqbal, S. & Sha, F.. (2019). Actor-Attention-Critic for Multi-Agent Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:2961-2970\\n\\nAs well as other factorized methods, such as the papers below (which are admittedly new).\", \"weighted_qmix\": \"Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. Tabish Rashid, Gregory Farquhar, Bei Peng, Shimon Whiteson. NeurIPS 2020.\\n\\nde Witt, Christian Schroeder, et al. \\\"Deep Multi-Agent Reinforcement Learning for Decentralized Continuous Cooperative Control.\\\" arXiv preprint arXiv:2003.06709 (2020).\\n\\nThe paper should discuss how the proposed method is an improvement over this other work and have a more comprehensive related work section. \\n\\nThe experiments are promising, but the relevant related work is not included and there isn't sufficient detail describing how the methods were run and discussing the results. In terms of comparisons, the paper should also need to compare with non-factored state-of-the-art methods. It is, of course, natural to compare with other factored methods, but what matters is general state-of-the-art performance of the domains. \\n\\nAs noted, the clarity and writing of the paper should be improved. Beyond the examples above, some other instances are below. \\n\\n- If the reader doesn't already understand the relative overgeneralization problem, Section 2.3 probably isn't sufficient. Figure 1 is helpful, but it should be described in the text to make the issue clear. \\n\\n- The connection between the overgeneralization problem and factored representations isn't completely clear. 
Factored representations have problems because they typically cannot represent the optimal value function (or policy). That is a separate issue from getting stuck in a local optimum (which can happen with any type of method).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Q-functions and Policies\", \"review\": \"The paper proposes a Q-factorization method by assuming an energy-based policies model. Q-functions are formulated as soft value functions with the energy parameters, and this adoption renders the function factorization more flexible compared to existing ones. The proposed solution applies to continuous-action tasks, a feat left unconquered by some of the existing methods. Authors exhibit that FSV outperforms others in various environments characterized by local optima.\", \"strengths\": [\"The formulation of Q-functions as soft functions, despite appearing simple, shows some effectiveness in a number of MARL tasks.\", \"The network architecture is intuitive.\"], \"major_concerns\": [\"Neither energy-based policies nor soft value functions is an original contribution of this work. True, the authors do not claim so. But the reviewer is left unsure as to what then the primary contribution of the paper would be.\", \"The method generalizes IGM to IGO but in doing so, foregoes the simplicity of the IGM condition. The reviewer would then expect to be met with a somewhat strong guarantee, but is instead presented with approximations on \\\\lambda_i. It is not clear from the paper how much insightful value the method has, when its criticism of a previous work (QTRAN) was based on intractability but the FSV method itself still relies on approximations. It would seem as though QTRAN and FSV each chose different paths to approximate different components of an MARL training scheme - the former takes may stronger assumption on the value functions while the latter takes assumptions on the nature of value functions being parametrized by approximated weights.\", \"The effectiveness of the proposed method is not yet well-accounted for. Issues are raised, but little explanation (or any attempt thereof) is provided. For example, the reviewer would have very much liked to gain an understanding of the relevance between IGO and its ability to alleviate relative overgeneralization. How does taking on greedy policies (which makes IGO collapse into IGM) make MARL agents more prone to overgeneralize with respect to each other? What kinds of findings would the authors present? What evidence could support those findings? The evaluation, while illustrating great performance gaps, needs a careful redesign so as to construct solid grounds for the soft value function factorization under IGO to be \\\"explainably\\\" better than existing works.\", \"The paper could be better positioned. The Related Works section could be put to better use to clearly distinguish two very different lines of research: value function factorizing MARL works and maximum entropy principle.\", \"There needs to be some justification about multi-head attention being used to \\\"enable efficient learning\\\" in Section 3.3. The reviewer is left hanging as to why and how such a choice was made.\"], \"minor_concerns\": [\"A few parts of the paper were difficult to follow. For example, there is an unfinished sentence in Related Works. In Section 2.1, there is an incomplete clause beginning with \\\"the reward function [...] shared by all agents\\\". Under Theorem 1, \\\"any distributions\\\" --> \\\"any distribution\\\". Also, what is meant by \\\"correct architecture\\\" in that same paragraph?\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A novel paper which factorizes soft value function with stochastic policies.\", \"review\": \"This paper proposes a novel MARL framework named FSV, which incorporates the idea of energy-based policies and an efficient linear decomposition architecture in the joint action-value function with multi-agent maximum entropy reinforcement learning. Besides, the authors propose the IGO, which extends the IGM in stochastic policy cases. FSV suits in both the discrete and continuous action space scenarios. Experiments conducted on two simple examples with discrete and continuous action settings show that FSV could overcome the relative overgeneralization problem with the proper temperature setting. Furthermore, FSV in the challenging SMAC benchmark outperforms VDN, QMIX, and QTRAN in three scenarios.\\n\\nOverall, this paper is well-organized and easy to read. The authors present interesting ideas that combine the energy-based policy and maximum entropy reinforcement learning into the centralized Q-value mixing network to obtain a better expression ability than VDN and QMIX and overcome the relative overgeneralization problem.\\n\\nThere are some questions.\", \"q1\": \"Does the linear decomposition of Qtot of Qi and Vi limit the representation ability of FSV? It seems that FSV cannot represent the non-linear formation of Qtot and Qi.\", \"q2\": \"In Section 5.1, it is better to show the estimated \\\\lambda_i\\u2019s performance compared with the different \\\\alpha_0 settings.\", \"q3\": \"In Section 5.3, there lacks analysis of FSV and other methods. Especially, the ablation of FSV should be considered. In these three scenarios, which part (soft RL, critic\\u2019s structure, or others) contributes to FSV most and improves its performance steadily?\\n\\nThere are some typos.\\nIn section 5.2, \\u201con others\\u2019 policies (?).\\u201d\\nIn section 5.3, \\u201cexploration efficiency\\u201d -> \\u201cexploration efficiency.\\u201d\\n\\nAs pointed out by other reviewers, this paper shares much similarity with QPLEX but without reference and discussions, thus I would be inclined to reduce the score as well.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Poor writing with incomplete/inconsistent proofs\", \"review\": \"This paper is concerned with the problem of cooperative multi-agent reinforcement learning for CTDE scenario which is well studied in recent literature. The authors propose a factorisation method based on soft value functions. I found that the paper is extremely poorly written which makes it very difficult to understand the overall method. The presentation is also quite arbitrary with discussion around results that seem unnecessary. There is little novelty as most of the paper borrows from SAC paper by Harnooja et al, albeit with gross errors in copying. Here are some of the major issues:\\n\\n1. The authors discuss the IGO decentralisability, however what is the relation between IGO optimal policy and soft policy when the former is not representable by latter?\\n\\n2. How does the local soft policy iteration guarantee joint policy improvement? \\n\\n3. Why is the * being arbitrarily switched in sec 3.2? What does eq 12 even imply? isn't the KL minimiser $\\\\pi_i^*$ itself? Where is $\\\\Pi$ defined?\\n\\n4. How did eq 18 come about?\\n\\n5. The paper is full of unbacked blanket statements like: \\\" Although energy based distribution is very general which has the representation ability of most tasks,\\\" \\\"Our method are a member of them but out of the deterministic policy\\\" etc.\\n\\n6. There are many unintelligible sentences like: \\\"we need to extend the function class into any distributions\\\", \\\"IGO is more generality than IGM\\\", \\\"The individual value network is trained by minimize\\\", \\\"relative overgeneralization, where finding a\\nsuboptimal Nash Equilibrium, which is a well-known game-theoretic pathology\\\" etc.\\n\\n7. In proof for Theorem 2, $\\\\epsilon$-greedy eq 26 cannot be matched by a soft policy in general, thus the rest of the proof cant follow without corrections.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Without citation to an earlier paper, QPLEX [https://arxiv.org/abs/2008.01062], but the factorization and many details are identical.\", \"review\": \"In this paper, the authors use a method in soft RL style and a QPLEX factorization to propose the first multi-agent value function decomposition (VFD) method for stochastic policies. I appreciate the efforts to extend VFD to a larger function class.\\n\\nHowever, I am confused about the similarity to QPLEX [https://arxiv.org/abs/2008.01062v1 ], which was published several months ago. One of the main contributions of the paper is the value function factorization stated in Theorem 2. However, it is identical to QPLEX, including notations. Specifically, Eq. 7 and 13 of FSV are the same as Eq. 8, 10, and 12 in QPLEX, including notations.\\n\\nMoreover, some statements in the paper and details of the figures are very similar to those in the QPLEX paper. Eq. 33 of FSV is the same as Eq. 50 of QPLEX. These equations describe an implementation detail where QPLEX and FSV stop gradients at the same variables. Stopping gradients is reasonable in the context of QPLEX, and the reason for using this trick is well-motivated there. By contrast, why this trick is necessary for FSV remains largely unclear. I encourage the authors to explain why they stop gradients here in the context of soft reinforcement learning.\\n\\nIn summary, the similarity including theorems, equations, and notations (and even figures). Despite these similarities, the authors did not cite QPLEX, which has been online for several months before FSV is published.\\n\\nAdditionally, about the integration of QPLEX and soft Q-learning, I also have some concerns. The definition of soft value functions depends on the specific selection of the temperature parameter. In this paper, the temperature parameters are end-to-end learned by minimizing the temporal-difference error of Q-learning. Could the authors explain why this makes sense in the framework of soft Q-learning? The counterpart in QPLEX ($\\\\lambda$ in Eq. 10) is end-to-end learned to ensure the rich expressivity of QPLEX so that it can represent the complete IGM function class. This design is well-motivated in the original QPLEX paper.\\n\\n\\n** Minor points\\nSome of the claims in the paper need refinements. For example, in Para. 2 of the introduction, \\\"Centralized training with decentralized execution (CTDE) (Oliehoek et al. (2011)) is a common paradigm to address the partial observability\\\". This claim is not solid. Partial observability can still induce miscoordination in decentralized execution [Wang et al. ICLR 2020, https://openreview.net/forum?id=HJx-3grYDB]. That is to say, the framework of CTDE itself cannot solve the problem of partial observability. It is better to say something like \\\"CTDE with communication.\\\" I also find the claim \\\"Value function factorization methods have been an increasingly popular paradigm for solving the scalability in CTDE\\\" not convincing. In fact, QMIX, which is a VFD method, generally can not work very well in tasks with more than 20 agents.\", \"typos\": \"Last paragraph in the introduction: \\\"it significantly outperforms other baselines, SMAC (Samvelyan et al. (2019)).\\\" SMAC is not an algorithm.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
kB8DkEKSDH | Hellinger Distance Constrained Regression | [
"Egor Rotinov"
] | This paper introduces an off-policy reinforcement learning method that uses the Hellinger distance between the sampling policy (from which samples were collected) and the current policy (the policy being optimized) as a constraint.
Twice the squared Hellinger distance is greater than or equal to the squared total variation distance and less than or equal to the Kullback-Leibler divergence; therefore, the lower bound on the expected discounted return for the new policy is improved compared to the lower bound obtained when training with KL.
Also, the Hellinger distance is less than or equal to 1, so there is a policy-independent lower bound on the expected discounted return.
HDCR is capable of training with Experience Replay, a common setting for distributed RL, where trajectories are collected using different policies and learned from centrally.
HDCR shows results comparable to or better than Advantage-weighted Behavior Model and Advantage-Weighted Regression on MuJoCo tasks using tiny offline datasets collected by random agents. On bigger datasets (100k timesteps) obtained by pretrained behavioral policy, HDCR outperforms ABM and AWR methods on 3 out of 4 tasks. | [
"offline",
"Reinforcement Learning",
"off-policy",
"control"
] | Reject | https://openreview.net/pdf?id=kB8DkEKSDH | https://openreview.net/forum?id=kB8DkEKSDH | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"eZQ7iSOmNbJ",
"rZVJk6TUED-",
"EJrT67AUx2r",
"kOtJnMbqed7",
"1SGzy2WDuCz",
"4FSCHtZmIAn",
"Rj_GV35b4s6",
"S9XhelPzTBC",
"wBuE66h834h",
"hV0TNUsII3F",
"j89VeJR7xh9",
"VBIr6lYuNZO",
"30wqdURjFu7"
],
"note_type": [
"comment",
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1654996536627,
1654996479414,
1654996466858,
1610040475947,
1606101398948,
1605787102729,
1605786509603,
1605786363843,
1605785878100,
1603939574131,
1603900232949,
1603841764411,
1603778215350
],
"note_signatures": [
[
"~Wu_Zheng3"
],
[
"~Wu_Zheng3"
],
[
"~Wu_Zheng3"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3784/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3784/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3784/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3784/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3784/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3784/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3784/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3784/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3784/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"R3\", \"comment\": \"TEST\"}",
"{\"title\": \"R2\", \"comment\": \"TEST AGAIGN\"}",
"{\"title\": \"R1\", \"comment\": \"TEST\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The reviewer concerns generally centered around the novelty of replacing the distance metric for a policy constraint. While the authors clarified many of the reviewer concerns and added some additional comparisons, in the end it was not clear why the proposed approach was interesting: while it is true that this particular distance metric has not been evaluated in prior work, and the result would have been interesting if it resulted in some clear benefits either empirically or theoretically, in the absence of clear and unambiguous benefit, it's not clear how valuable this concept really is. After discussion, the reviewers generally found the paper to not be ready for publication in its present state.\"}",
"{\"title\": \"Read authors' response\", \"comment\": \"I have read the author's response and the updated experiments. Unfortunately, the empirical results are simply not convincing at all. Even just compared against the ABM and AWR baselines, the proposed method does not demonstrate a clear advantage (final returns are often within 1-std deviation of each other, while learning curves generally look very similar with no clear winner), whlie being significantly outperformed by BCQ.\", \"regarding_bcq_taking_the_gradient_through_the_q_function\": \"I don't see any inherent reason why taking the gradient through the Q function necessarily leads to better results. It could certainly lead to faster optimization, but that isn't necessarily very relevant in an offline RL setting. Additionally, in the AWR paper, their results in both online and offline experiments were reasonably competitive with RL methods that used Q-function gradients. I would recommend the authors make a more clear and explicit argument as to when using critic gradients like BCQ should perform poorly and empirically demonstrate a setting where such methods fail and methods like HDCR do.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for the review.\\n1. We do not know any papers where authors propose new metrics (except KL and Total Variation Divergence) which use is motivated theoretically and offline method for this metric is derived. The main purpose of the paper was to derive an offline method that will provide an alternative to FQI, AWR, and AWAC methods, which use KL divergence.\\n2. Actually, using Hellinger distance allow us to make bigger steps because the distance's derivative asymptotically tends to zero.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your review.\\nWe carefully revised the paper and clarified the mentioned phrases as well as some others. Also, we provided a y-axis title, which is an averaged episode reward of policies during evaluation. X-axes are iterations of the training.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your review.\\nFollowing your advice, we made a second experiment as in the BCQ paper. Despite HDCR being lower than methods that train policy function by taking gradient through Q-function, it provides better results than other methods that directly optimize policy function.\", \"about_other_metrics\": \"Asymptotically, derivatives of both MMD and Wasserstein tend to 1. This issue can affect training, especially when learning from offline data. This problem also exists in methods that use KL. In contrast, the derivative of Hellinger distance tends to 0, providing less conservative updates.\"}",
"{\"title\": \"Experiments update\", \"comment\": \"We would like to thank reviewers for their detailed reviews.\\nReviewers noted insufficient experimental part of the paper. Therefore we reshaped the second experiment to make it more conventional. We trained a behavioral policy (DDPG) on 1M timesteps and then collected a buffer of 100k timesteps as it is in BCQ paper. All four methods used the same buffer (dataset) during the whole training. \\nAlso, we publish individual responses below.\"}",
"{\"title\": \"Short and interesting paper, needs minor clarifications\", \"review\": \"This paper proposes an algorithm for off-policy reinforcement learning using the Hellinger distance between the sampling policy and optimized policy as a constraint. The motivation for the proposed method is explained in the preliminaries section. The actual algorithm and experiments run using the proposed algorithm are also provided.\\n\\nThe derivation is easy to follow, and this is because of the well-known lower and upper bounds on the Hellinger distance. \\n\\nThe writing of the paper needs work. For example, the abstract talks about the sampling policy and current policy. By current policy, what the authors mean is the policy that is being optimized. The sampling policy is the policy that was run offline. Clarifying these terms would help. Similarly, I did not follow \\\"return for the new policy is improved comparing to KL\\\".\", \"in_paragraph_3\": \"\\\"With the use of Lagrangian, have been derived\\\" needs proofreading. In eqn 13, what is beta?\\n\\nIn the figures, what are the axes?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Idea is not novel enough; results are not significantly better than baselines\", \"review\": \"##########################################################################\", \"summary\": \"The paper provides a new metric - Hellinger distance to be combined with trust region ideas in policy optimization. The major difference from prior work is the change of this distance metric. The paper shows that with this distance metric, along with Lagrangian relaxation, one could show analytic results of improved policies. The paper also shows similar lower bound improvement results and compared with baselines on offline rl tasks.\\n\\n##########################################################################\", \"reasons_for_score\": \"Overall, I vote for rejection. I think the idea of changing the distance metric is not novel enough. Critically, I do not think so far in the paper there is a strong enough motivation to use this distance metric: both innovation-wise and result-wise. I will explain in details below.\\n \\n##########################################################################Pros: \\n\\n \\n1. Idea is not novel: the overall idea of using an alternative metric does not seem novel. Though the authors motivated an 'improved' version of the trust region lower bound, by using the fact that the Hellinger distance is upper bounded by KL - I think such an improvement in the lower bound is a bit trivial and does not provide new perspectives on the old results.\\n \\n2. This new lower bound also might not provide additional benefits in practice - because in practice such lower bounds are generally too conservative.\\n\\n3. Experiment results are also not strong enough. I will explain below.\\n \\n##########################################################################\", \"cons\": \"1. The final performance of all three baseline algorithms are fairly bad in terms of final rewards (e.g. for halfcheetah, all returns are negative, yet we know that online algorithms could achieve >3000 at least and in some cases >6000). I wonder if this general inferior performance is a result of using offline dataset - in that sense, does the agent learn anything meaningful at all?\\n\\n2. From both fig 1 and fig 2, about for half of the tasks the performance seem to drop (or stay at the same level) as the case where no training is done (x-axis at the origin). Does this also corroborate my previous concern that these agents do not learn much at all?\\n \\n3. From the curves presented in Fig1,2, as well as mean+std results in Table 1,2, it does not seem that the new method provides much significant gains either.\\n \\n##########################################################################\", \"questions_during_rebuttal_period\": \"Please address and clarify the cons above. Thanks.\\n\\n \\n#########################################################################\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Insufficient contribution and experimental validation\", \"review\": \"The authors propose the use of the Hellinger distance instead of KL divergence to constrain new policies to remain close to the behavior policy. The technical aspects are straightforward, noting that Hellinger provides tighter bounds on total variation than KL, and can straightforwardly be plugged into the CPI/TRPO bounds for policy improvement. They also propose an offline reinforcement learning algorithm based on enforcing a Hellinger constraint to the data policy, deriving iterative optimization procedure, and evaluate it on offline\\n\\nI find the experimental evaluation highly lacking. It seems with the datasets and envs evaluated, policy performance actually *drops* as policy optimization is conducted, so it is not clear to me that these evaluations actually provide meaningful information towards which methods perform better in scenarious where we would want to use offline RL. I would like to see much more extensive evaluation of this method compared to other offline RL algorithms like BCQ https://arxiv.org/abs/1812.02900, BRAC https://arxiv.org/abs/1911.11361, or CQL https://arxiv.org/abs/2006.04779, over a much wider variety of datasets. \\n\\nIn general, I'm not convinced that simply using the Hellinger distance instead of KL will lead to significant improvements on its own, given that in the BRAC paper, the authors experimented with different trust regions including Wasserstein, MMD, and KL and didn't find huge differences in the tested domains. Overall, the contribution does not seem significant enough to warrant publication without strong experimental results, which this paper lacks.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Hellinger Distance Constrained Regression\", \"review\": \"Summary:\\n \\nThis paper proposes a supervised learning for off-policy reinforcement learning. It exploits the Hellinger distance instead of KL divergence. Thus it achieves tighter lower bound of the expected culmulative return than that using the KL divergence. Moreover, the new lower bound is policy independent. The experimental results show that the proposed method slightly outperforms other baselines when only small amount of data are given, while the algorithms fail to learn on several environments.\", \"reasons_for_score\": \"Though it has some advantages, I vote to reject this paper. This is because, it has low novelty, the experiments are wrongly designed, and thus it is hard to believe the results. The specific details are below.\\n \\n\\nPros\\n \\n+ Hellinger divergence is used instead of KL divergence, and thus the lower bound become tighter than that using KL divergence.\\n \\n+ The loss function for policy can be derived by theory\\n \\n\\nCons\\n \\n- Changing KL distance to Hellinger divergence has low novelty. Also, the derivation of the loss function using Hellinger distance isn't difficult. Hellinger distance and KL divergence are all under the class of Amari alpha-divergence. When alpha = +/- 1, Amari alpha-divergence becomes KL and when alpha=0, Amari alpha-divergence becomes the Hellinger distance = integral [sqrt(p) - sqrt(q)]^2 dx. Indeed, HD is symmetric and satisfies the axioms of distance. Basically, when we consider the HD on the space of probability distribution, we consider Euclidean geometry on the space of probability distribution, whereas the KLD induces the Boltzman interpretation, i.e., p ~ exp( -KLD).\\n\\n- In addition to the issue of significance in novelty, the numerical results show that the performance improvement is insignificant or negligible. .\\n \\n- The experiments used data sampled by random policies or first few samples of on-policy data, but I think that this is a little strange training setting. Most of the previous works in this line use samples at a certain performance (NOT DRAWN BY RANDOM POLICY). For example, in ABM paper[1], it used first 10,000 episodes (if the length of an episode is 1,000, it uses first 10 million samples), or first 2,000 episodes (first 2 million samples) to show its performance when it uses high performed samples, or low performed samples, respectively. These contain good performed samples relative to the random samples. However, experiments in this paper use almost random samples to train policies. We cannot expect a good policy at a certain performance using these random samples. This expectation is also shown in the results. Some learning curves go down as learning proceeds, and this means that the learning fails on these environments. If the proposed method learns successfully while the others fail to learn, it is a meaningful result, but it is not, otherwise. I think that the authors should evaluate performance using better samples to prove that the proposed method outperforms others.\\n \\n\\nReference\\n \\n[1] Noah Siegel, et al. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In International Conference on Learning Representations, 2020.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
aJLjjpi0Vty | Collaborative Filtering with Smooth Reconstruction of the Preference Function | [
"Ali Shirali",
"Reza Kazemi",
"Arash Amini"
] | The problem of predicting the ratings of a set of users for a set of items in a recommender system, based on partial knowledge of the ratings, is widely known as collaborative filtering. In this paper, we consider a mapping of the items into a vector space and study the prediction problem by assuming an underlying smooth preference function for each user, whose quantization at each item vector yields the associated rating. To estimate the preference functions, we implicitly cluster the users with similar ratings to form dominant types. Next, we associate each dominant type with a smooth preference function; i.e., the function values for items with nearby vectors should be close to each other.
The latter is accomplished by rich representation learning in a so-called frequency domain. In this framework, we propose two approaches for learning user and item representations. First, we use an alternating optimization method in the spirit of $k$-means to cluster users and map items. We further make this approach less prone to overfitting via a boosting technique.
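A minimal sketch of this kind of alternating scheme, under our own simplifying assumptions (random low-frequency Fourier features stand in for the paper's frequency-domain representation, and the item vectors, frequencies, ratings, and cluster count are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, n_freq, n_types = 100, 200, 4, 16, 5

# Illustrative item vectors and a band-limited (hence smooth) feature map.
X = rng.normal(size=(n_items, d))
Omega = rng.normal(scale=0.5, size=(n_freq, d))              # low frequencies
Phi = np.hstack([np.cos(X @ Omega.T), np.sin(X @ Omega.T)])  # (n_items, 2*n_freq)

R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # toy ratings 1..5
M = rng.random((n_users, n_items)) < 0.2                       # observed entries

assign = rng.integers(0, n_types, size=n_users)  # initial user clustering
for _ in range(10):
    # Step 1: fit each dominant type's preference coefficients by least squares.
    coef = np.zeros((n_types, Phi.shape[1]))
    for t in range(n_types):
        rows = np.where(assign == t)[0]
        u, i = np.where(M[rows])
        if len(i):
            coef[t] = np.linalg.lstsq(Phi[i], R[rows][u, i], rcond=None)[0]
    # Step 2: reassign each user to the type that reconstructs their ratings best.
    pred = coef @ Phi.T  # (n_types, n_items)
    assign = np.array([
        ((R[u, M[u]] - pred[:, M[u]]) ** 2).mean(axis=1).argmin()
        for u in range(n_users)
    ])
```

Each pass alternates a least-squares fit of per-type frequency coefficients with a nearest-reconstruction reassignment, mirroring the k-means-style loop described in the abstract.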
Second, we present a feedforward neural network architecture consisting of interpretable layers which implicitly cluster the users. The performance of the method is evaluated on two benchmark datasets (ML-100k and ML-1M). Although the method benefits from simplicity, it shows remarkable performance and opens an avenue for future research. All code is publicly available on GitLab. | [
"collaborative filtering",
"recommender system",
"sampling theory"
] | Reject | https://openreview.net/pdf?id=aJLjjpi0Vty | https://openreview.net/forum?id=aJLjjpi0Vty | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"H_KFwNCGaHN",
"3umt7OwLCy_",
"6k-u6mGxduV",
"R7RKCJY35Bk",
"xZR5CamT04h"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512031,
1603676827244,
1603475683584,
1603338414980,
1603263377307
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3783/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3783/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3783/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3783/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper mostly received negative scores. A few reviewers pointed out that the idea of modeling user preference in the frequency domain seems novel and interesting. However, there are a few concerns around the clarity of the paper, the motivation of the proposed approach, as well as the experimental results being unconvincing (both in terms of execution as well as exploration of the results). The authors did not provide a response. Therefore, I recommend reject.\"}",
"{\"title\": \"Collaborative Filtering with Smooth Reconstruction of the Preference Function\", \"review\": \"Summary\\uff1a\\n\\nIn this paper, the authors regarded the rating prediction as a reconstruction problem with a smoothness assumption. They proposed two approaches for smoothly reconstructing the preference function of users and conducted experiments on both a synthetic dataset and two benchmark datasets (ml-100k and ml-1m). However, there are still some technical issues in this article, such as unconvincing experiments and confusing descriptions. As a result, I suggest the paper should be rejected.\\n\\nDetailed Comments\\uff1a\\n\\nWith respect to the problem of rating predictions in collaborative filtering, this work proposed to estimate the predictions by assuming an underlying smooth preference function for each user, the quantization at each given vector yields the associated rating. Then, they developed two methods including k-representation motivated by k-means and reconst-net, a feed-forward neural network, to do this. The authors also conducted experiments on two benchmark datasets (ml-100k and ml-1m) to test the performance of proposed algorithms.\", \"the_key_strengths\": [\"The perspective of reconstructing the user preference functions is novel.\", \"They proposed two effective methods to estimate the preference functions.\", \"An alternating optimization method is proposed that effectively optimizes a non-convex loss function and extracts user and item representations.\"], \"reason_to_reject\": [\"The contribution of \\u201cthe reconstruction of user preference functions\\u201d is overclaimed. Much literature in the area of CF regards the rating prediction problem as a reconstruction problem, such as Matrix Factorization (MF) based algorithms [1,2]. The authors should include these methods and elaborate on the difference between them.\", \"In the introduction, the authors pointed out that the contemporary deep network in RS is merely limited to shallow networks and they addressed this problem in this work. However, the reconst-net only involves three trainable layers. It is not clear how do you solve the \\u201cshallow\\u201d problem?\", \"The experiments are not convincing. The authors claimed that \\u201ctheir performance on benchmark datasets is remarkable\\u201d, but in experiments, we see that the performance of the proposed model is not competitive, e.g., the performance gap on ml-1m is about 5%. In addition, the authors did not dive into these results and analyze them. They should also include some MF-based methods for rating prediction problem, such as [1,2]\", \"Many details and explanations are missing and confusing. For example, the intuitions of figure 1 and figure 2 are not clear, and the authors should give more explanations with respect to these figures. In figure 4a, what does the legend represent, such as \\u201cuser tr.\\u201d, \\u201cva.\\u201d and \\u201citem tr.\\u201d?\", \"some typos:\", \"\\u201cin the Section ?? on both synthetic and real data\\u201d\", \"\\u201cfigure 2 shows an example\\u201d\\uff0c\\u201d figure\\u201d should be \\u201cFigure\\u201d\", \"[1] FISM: factored item similarity models for top-N recommender systems. KDD\\u201913\", \"[2] Factorization meets the neighborhood: a multifaceted collaborative filtering model. KDD\\u201908\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Ignores key problem that ratings are missing not at random\", \"review\": \"This paper proposes an approach based on Fourier transforms to predict ratings in collaborative filtering problems. The paper\\u2019s scope (\\u201csmooth reconstruction functions\\u201d) gets immediately narrowed down to Fourier transforms--it would be nice to provide some motivation for this choice over alternative smooth functions. The paper then clusters the users as a way to reduce the number of parameters in the model, given that the Fourier transform itself does not reduce it. As a further step, the clustering is replaced by a soft-clustering learned by a neural network. In the experiments, the RMSE of the rating prediction problem is worse than some baselines and better than others.\\n\\nBesides these technical steps, from a more big-picture perspective, I am not sure if the problem of rating prediction as cast in this paper, misses a key point. The key point I am concerned about is that the observed ratings are missing not at random [a]. For this reason, the collaborative-filtering literature abandoned the minimization of RMSE on the OBSERVED ratings ten years ago. Two different avenues have been pursued since then: most of the papers switched to ranking the entire catalog of items, e.g, see [b] to get started. A few papers continued with rating prediction, but stated the problem correctly by taking into account the fact that the ratings are missing not at random, eg., [c,d].\\n\\nIn this submission, the problem statement at the top of page 4, and Eq. 6, was not clear to me: while s was defined clearly in the Fourier transform earlier in the paper, I did not find a definition of s_u in Eq 6 in the context of rating prediction, i.e., is this the vector of ratings of user u? Only the observed ratings? How are the unobserved/missing ratings of user u treated in the proposed approach? \\n\\nGiven the RMSE-values in the experiments, my best guess is that the model was trained on the observed ratings only, ignoring the key problem that the ratings are missing not at random. \\n\\nI feel like a rating-prediction paper that ignores the key problem of collaborative filtering, i.e., the fact that ratings are missing not at random cannot be accepted (and should actually be desk-rejected). \\n\\nI encourage the authors to modify the approach to account for this key problem of collaborative filtering. Alternatively, this approach may be useful for different applications, like compressive sensing problems where the observations are truly random.\\n\\n[a] Collaborative Prediction and Ranking with Non-Random Missing Data\\nby B. Marlin and R. Zemel (RecSys 2009 Best Paper)\\n\\n[b] Training and testing of recommender systems on data missing not at random\\nby H. Steck (KDD 2010)\\n\\n[c] Probabilistic Matrix Factorization with Non-random Missing Data\\nby J.M. Hern\\u00e1ndez-Lobato, N. Houlsby, and Z. Ghahramani (ICML 2014)\\n\\n[d] Modeling User Exposure in Recommendation\\nby D. Liang et al. (WebConf 2016)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"artful idea, but not very clear\", \"review\": \"The paper has proposed a smooth reconstruction of the preference function in collaborative filtering for recommender system. The evaluations have generally demonstrate the proposed method. However, there are several concerns left.\\n\\n1, The paper has proposed utilizing interpretable layers to implicitly cluster the users. However, it is not clear how the interpretable layer measure the interpretability using qualitative metric. I don't believe the the weights appeared in the X layers gains the interpretability to tell the model how the users may be clustered. In terms of the contribution of the interpretable layer, there should be ablation study to demonstrate the effectiveness of the interpretable layers(without x-layer).\\n\\n2, The experiments mostly look good. However, they are not superior compared to the baselines. It is not required to outperform the baseline results that listed in Table 3, but necessary analysis should be conduct why the gap exists. If it is claimed that some baselines use side information, it is vital to eliminate the side informations so that to make a fair experimental comparisons.\\n\\n3, The experiments could be explored to other datasets to show the generalization capability of the proposed method. Even in Movie-lens dataset, if would be easier to extend to ranking based evaluation(using NDCG, F1 etc), rather than purely rate-based evaluations since both of them are practical metric in recommender system.\\n\\n4, Some procedure of the proposed methods are not intuitive, e.g.In Alg 2, why take log2|C| as the total size of k-representations?\\n\\n5. some typos and gammer errors. e.g. page 4, 'in the section ??' page 5, 'which selects the updating steps adaptively and without supervision.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper seems to be a preliminary work and should not be considered for 2nd round review.\", \"review\": \"The paper is not well-written, so it is hard for me to fully understand the whole paper. It seems to me that the authors tried to model user preferences with smooth functions for collaborative filtering and results on two Movielens datasets showed that the proposed method is comparable to several baseline methods.\", \"pros\": \"1.\\tThe idea of capturing frequency domain information for collaborative filtering seems interesting to me, although I did not quite understand how this information is obtained in this work.\\n2.\\tThe authors mentioned that their method can be interpretable, which may be more interesting if they can provide some case studies for explaining recommendations using the proposed method.\", \"cons\": \"1.\\tThe motivation is not clear to me, i.e., why smooth functions are important in modelling user preferences in collaborative filtering is not well explained. \\n2.\\tThe authors seem to have a narrow understanding of collaborative filtering. For instance, the authors mentioned that \\u201cthe utilization of deep architectures is limited to shallow networks\\u201d in exiting CF literature. However, there are a lot of recent works building upon state-of-the-art deep learning techniques as far as I know. \\n3.\\tThe idea of this paper is very similar to a recent work (Harald Steck, Embarrassingly Shallow Autoencoders for Sparse Data, WWW \\u201919). The main difference may be that this work adopted clustering of users instead of individual user when modelling user interests.\\n4.\\tThe experimental results are not encouraging. The proposed method is actually much worse than Bayesian TimeSVD++ (Rendle et al., 2019) and IGMC (Zhang & Chen, 2019). So, it is hard to understand why this alternative method is promising in the area.\\n5.\\tThe presentation could be improved. The authors mentioned that their code has been published but no link was provided in the paper. \\n\\nOverall, I think this paper is a preliminary work and is not ready for publication. I think the authors should revise the paper and submit the revised version to another conference.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
MhTgnultR1K | A Real-time Contribution Measurement Method for Participants in Federated Learning | [
"Bingjie Yan",
"Yize Zhou",
"Boyi Liu",
"Jun Wang",
"Yuhan Zhang",
"Li Liu",
"Xiaolan Nie",
"Zhiwei Fan",
"Zhixuan Liang"
] | Federated learning is a framework for protecting distributed data privacy and has been applied in commercial settings. However, it lacks a sufficiently reasonable contribution measurement mechanism for distributing the reward to each agent. In a commercial federation, without such a mechanism every agent receives the same reward. This is unfair to agents that provide better data, so such a mechanism is needed. To address this issue, this work proposes a real-time contribution measurement method. First, the method defines the impact of each agent. Furthermore, we jointly consider the current round and the previous round to obtain the contribution rate of each agent. To verify the effectiveness of the proposed method, this work conducts pseudo-distributed training and an experiment on the Penn Treebank dataset. Compared with the Shapley value from game theory, the experimental results show that the proposed method is more sensitive to both data quantity and data quality while remaining real-time. | [
"Federated Learning",
"Contribution Evaluation",
"Multi-party Participation"
] | Reject | https://openreview.net/pdf?id=MhTgnultR1K | https://openreview.net/forum?id=MhTgnultR1K | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"iqcy594-Hn",
"LdxLRhnQAT",
"9cWjcy_OapH",
"0bzyBCYvgap",
"2h4iP9Z4Lu5",
"80mIhJhBkT-",
"_x8-Dx8deQk",
"3NFR5hFguhG",
"w8_7d-o5qH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512099,
1606138194486,
1606138166243,
1606138129745,
1606138060785,
1603899946993,
1603871807619,
1603758753957,
1603721857878
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3781/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3781/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3781/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3781/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3781/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3781/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3781/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3781/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Although this paper tackles an important problem, all reviewers agree that it requires further work before it can be published. First, the paper would need to be polished in order to be easier to read. Stronger experiments would also be needed in order to support the claims of the paper, e.g. by considering additional datasets and proper baselines. Finally, an important concern about this paper is novelty and originality. It is not clear at this point that the contribution is substantial enough for a conference like ICLR. Addressing these points would significantly improve the paper.\"}",
"{\"title\": \"Reviewer 4 Response\", \"comment\": \"Dear reviewer:\\n\\nFirstly, thank you for your review on our paper, we will accept it with an open mind and continue to improve the paper. We will strengthen the theoretical proof of the paper and provide more experimental data. The polishing of the paper is also one of the tasks we need to carry out, and the revision of the chart annotations. We will continue to enrich the experimental part of the problems on the experimental data set and refer to more recent research work. Thank you for your suggestions for our paper.\"}",
"{\"title\": \"Reviewer 3 Response\", \"comment\": \"Dear reviewer:\\n\\nThanks for your comments on our paper. We will improve the judgment and constraints of attackers, and strengthen the research on the system. At the same time, we will also select more datasets and baseline methods to enrich our experiments. Thanks again for your suggestions and review. We will continue to study related work.\"}",
"{\"title\": \"Reviewer 1 Response\", \"comment\": \"Dear reviewer:\\n\\nThank you for your review on our paper, and thank you for your affirmation of the direction of our paper. We will choose more general algorithms for experimental verification, such as FedAVG and FedSGD. We will conduct theoretical proofs of our experiments based on game theory to enrich our theoretical part. For the problem of insufficient datasets, we will also verify with more datasets. In the meanwhile, we will continue to revise the issues of the paper to meet the requirements. Thank you again for your suggestions and affirmation of our work direction.\"}",
"{\"title\": \"Reviewer 2 Response\", \"comment\": \"Dear reviewer:\\n\\nThank you for your review on our paper, we will accept it with an open mind and continue to improve the paper. In response to the dataset in the experiment, we will also verify it on more comprehensive dataset. Simultaneously, we will polish the sentence of the paper to make it easier to read. For the contribution measurement method proposed in this article, we will also follow strengthen the theoretical support. Based on our measurement method, we will compare it with the more accurate Shapley Value, which will be reflected in the subsequent submissions. Thanks again for your suggestions and review. We will continue to study related work.\"}",
"{\"title\": \"A REAL-TIME CONTRIBUTION MEASUREMENT METHOD FOR PARTICIPANTS IN FEDERATED LEARNING\", \"review\": \"This paper designs an equation, i.e., equation (5) in the paper, to measure the impact or contribution of each participant/agent in federated learning. The designed measurement method is applied to attention aggregation algorithm of federated learning. Few experiments using Penn Treebank are conducted to support its claims.\\n\\nThis paper should be rejected because (1) the paper is unpolished and thus is hard to read, (2) the novelty appears quite weak, and (3) the experiments are difficult to understand and generally do not support its contributions\", \"concerns\": \"The paper is difficult to read due to the poor use of English. Many sentences are incomprehensible. Thus, it was often impossible for me to determine exactly what the authors would like to say or describe. Please have your submission proof-read for English writing style and grammar issues. Moreover, please treat the equations as the parts of sentences and make sure that the caption formats of Figures obey the ICLR format.\\n\\nI also have a serious concern about the novelty of this paper. If my understanding is correct (due to the aforementioned reason), Subsection 3.3 is the only new material proposed by the authors. However, the proposed equation, i.e., equation (5), seems like a design choice without any theoretical justification or providing any intuitive reason, which significantly degrades the novelty of this paper.\\n\\nFinally, the experiments should be refined to support its main claims. As claimed in Section 1, the proposed measurement method is real-time and has low computational complexity. However, no experiment nor quantitative comparison addressing the running time and complexity between the proposed method and Shapley Value. Actually, the authors compared their method with the method of approximating Shapley Value instead of exact Shapley Value. Furthermore, please cite for Shapley Value papers.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The idea lacks novelty and the experiments are not convincing\", \"review\": \"Summary:\\n\\nThe paper proposes a new contribution measurement approach for federated learning. The basic idea is that the agent with a larger model update has a larger contribution. Specifically, based on FedAtt [1], the impact of a client is computed as the local updates plus the impact of the previous round times a decay rate. The experiments on a dataset show that the proposed approach can have a similar contribution measurement compared with Shapley Value.\\n\\n[1] Learning private neural language modeling with attentive aggregation. IJCNN 2019\", \"strengths\": \"(1) The motivation of the paper is clear.\\n\\n(2) The studied area is important. Effective incentive mechanisms in federated learning are still an open challenge.\", \"weakness\": \"(1) The proposed idea lacks novelty and may not be applicable in general federated learning algorithms. The contribution of each client is simply evaluated by its local update in FedAtt. FedAtt is not a widely used federated learning algorithm currently. It is not clear whether the proposed approach is applicable to other standard federated learning algorithms such as FedAvg. Also, I do not understand why the paper focuses on FedAtt instead of FedAvg.\\n\\n(2) The paper lacks reasonable explanations for the proposed approach. A client may have arbitrary bad data and the local updated model may be far from the global optimal model. In such a case, since the distance between the local model and the global model is large, the contribution is also large according to the proposed approach, which is not reasonable. It is not clear how the proposed approach can handle such cases.\\n\\n(3) The experiments are weak and not clear. \\n\\n a) It is not explained how the agent contribution rate is computed. \\n\\n b) The experiments are conducted on a single dataset. More datasets are needed. \\n\\n c) From Figure 2, it is hard to say that the proposed approach has a similar measurement with SV. \\n\\n d) Since the motivation is to reduce the computation overhead, the authors should show compare the computation complexity or the computation time of the proposed approach and SV.\", \"minor_issues\": \"(1) The writing can be improved, e.g., \\u201cSuch\\u201d -> \\u201cFor example,\\u201d\\n\\n(2) Figure 1 is not referred to in the text.\\n\\n(3) Figure2-5: orange and blue colors are not explained.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper needs more thoughtful thinking.\", \"review\": \"The paper is to measure each client\\u2019s contribution to training the federated learning model. In particular, the contribution is measured by the distance between the local model and the global model in each iteration. The targeting problem is interesting, and the use of attention-based model divergence is also an interesting idea to measure the contribution. However, the paper lacks strict theoretical discussion to prove the proposed solution is a reasonable one rather than a heuristic method. Moreover, the experiment is too weak to support the claims. The paper\\u2019s technique contribution and originality are also limited.\\n\\nBelow are some detailed concerns.\\n\\n1) The authors need to make a clear definition of the assumed application scenario so that the below problems can be avoided or solved. \\n\\nIf the client\\u2019s contribution is linked to rewards, it is unavoidable that some clients will produce fake data to gain more contribution to the commercial federation system. Therefore, the paper should discuss the prevention of \\u201cattacking by fake data\\u201d. \\n\\nFor example, if the client randomly shuffles the index of neurons in the trained local model w_k, then the client\\u2019s local model will get a bigger s_k^l calculated by equation 2. Thus, this client is likely to gain a big reward at every iteration.\\n\\nAccording to equation 5, the contribution at the early stage will be discounted. It is unfair for the clients to be selected at an early stage. Therefore, from a systematic perspective, some clients may refuse to contribute to the training process at an early stage. \\n\\n\\n2) Contribution is not enough\\n\\nThe core method comes from the FedAtt algorithm \\u2013 an attention-based federated aggregation method. The paper\\u2019s primary contribution relies on section 3.3 to measure the contribution according to the gradients. \\n\\n\\n3) The experiments are too weak to support their claim. \\n\\nMore datasets and baseline methods are required, for example, the FEMNIST, FeCeleba.\\n\\nIt is unclear how to define an objective metric to measure the quality of the proposed method. The contribution is a subjective feeling that various to different tasks and assessor.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Low computational complexity calculation of weights for federated learning clients\", \"review\": \"The paper proposes a low computational complexity method for weighting contributions of clients in a federated learning setting. The main contributions are to compare the weighting method with Shapley values and their sensitivity to low data volume and quality. The paper is based on the FedAtt paper that calculates weights based on the Euclidean distance between the server model and each client and for each layer.\\n\\nThe experimental setup is well described, including details about the hardware, software, datasets, model, and evaluation criteria. However, the model only specifies a \\\"smaller GRU-based model\\\" without giving any details of what that model is. They do not clearly describe some parameters of the approximation of the Shapley value calculation, reducing the value of the comparison between FedAtt and Shapley values. They could also have taken additional steps to improve the claims' confidence, e.g., only one dataset was used, which is relatively weak compared to the original FedAtt paper. The graphs in the results section could be described with more detail to explain what, e.g., the colors of the \\\"special agents\\\" mean. Also, there are no confidence measures specified, making it hard to evaluate the claims' validity.\\n\\nThe references include essential papers but are missing some core references, such as Federated Learning and Shapley values themselves. Also, related papers such as \\\"Active Federated Learning\\\" by Goetz et al. talk about very similar ideas but lack any mention in the paper. The language and grammar could be improved, and some of the formulations make it hard to read. The comparison to Shapley values is also not motived in any detail, thus further reducing the paper contributions' value.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
j0uePNuoBho | Learned Threshold Pruning | [
"Kambiz Azarian",
"Yash Sanjay Bhalgat",
"Jinwon Lee",
"Tijmen Blankevoort"
] | This paper presents a novel differentiable method for unstructured weight pruning of deep neural networks. Our learned-threshold pruning (LTP) method learns per-layer thresholds via gradient descent, unlike conventional methods where they are set as input. Making thresholds trainable also makes LTP computationally efficient, hence scalable to deeper networks. For example, it takes $30$ epochs for LTP to prune ResNet50 on ImageNet by a factor of $9.1$. This is in contrast to other methods that search for per-layer thresholds via a computationally intensive iterative pruning and fine-tuning process. Additionally, with a novel differentiable $L_0$ regularization, LTP is able to operate effectively on architectures with batch-normalization. This is important since $L_1$ and $L_2$ penalties lose their regularizing effect in networks with batch-normalization. Finally, LTP generates a trail of progressively sparser networks from which the desired pruned network can be picked based on sparsity and performance requirements. These features allow LTP to achieve competitive compression rates on ImageNet networks such as AlexNet ($26.4\times$ compression with $79.1\%$ Top-5 accuracy) and ResNet50 ($9.1\times$ compression with $92.0\%$ Top-5 accuracy). We also show that LTP effectively prunes modern \textit{compact} architectures, such as EfficientNet, MobileNetV2 and MixNet. | [
"Efficiency",
"Model Compression",
"Unstructured Pruning",
"Differentiable Pruning"
] | Reject | https://openreview.net/pdf?id=j0uePNuoBho | https://openreview.net/forum?id=j0uePNuoBho | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"dHfEdzqHCDU",
"JyV0PbgEPms",
"yRt6rn00yXa",
"K4Su6hD2a9",
"eprAOfJRslN",
"hSVwnM8IWs",
"Bbp-JccEu-",
"cPQPVWuQITT",
"AvgrA-0gNoE"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512171,
1605912803168,
1605912692370,
1605912486128,
1605912380346,
1603926580263,
1603877188739,
1603796968192,
1602555111390
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3775/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3775/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3775/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3775/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3775/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3775/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3775/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3775/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a method for differentiable pruning that replaces the hard thresholding of standard pruning, with a soft version that permits taking the gradients of the pruning threshold. The proposed benefits are an accuracy that is better or competitive with alternative methods as well. Moreover, the paper suggests the technique to be efficient.\\n\\nThe pros of this paper are that it is working in an interesting setting of differentiable pruning, with the hope of -- in some sense -- simplifying the pruning process or at least unifying the process with standard training. The technique is plausibly justified in its technical development. The paper also follows with a significant number of experiments. \\n\\nThe cons of this paper are that the conceptual framework -- beyond the initial idea -- is not fully clear. In particular, this paper does not elucidate a clear set of claims and hence, results in the difficulty on the Reviewers part in detangling the claims and identifying the appropriate comparisons.\\n\\nFor example, the paper doesn't take up a simple claim that it is state-of-the-art in accuracy vs parameter measures (and would seem not to given the results of Renda et al. (2020)). It need not necessarily make this claim, but there are suggestions to such a claim early in the paper. If this is not an intended claim, then the paper can remove any suggestions to such (i.e., the claims around new SoTA for networks not evaluated in prior work). \\n\\nThe paper has a somewhat tentative claim that it is more efficient (in the total number of epochs of training) versus other techniques (Table 3). However, the presented results are only at a single-point versus other methods. Renda et al. (2020) directly consider accuracy versus retraining cost trade-offs. Appendix E of that paper provides one-shot pruning results for ResNet-50 showing accuracy on par with that presented here. The number of retraining epochs is also similar to here. This paper, however, only compares against the most expensive iterative pruning data point in the other paper.\\n\\nIn sum, my recommendation is Reject. This is promising work that needs only (1) to include a few testable claims and (2) to re-organize the results (and perhaps run a limited set of new results) to thoroughly explore those claims. For example, if the most important claim is accuracy vs retraining cost, then it needs to show a more complete trade-off curve of the two results. Of course, this, in principle, opens the door to comparisons to many other techniques in the literature.\"}",
"{\"title\": \"Learned Threshold Pruning\", \"comment\": \"We thank you very much for your comments. Please find our responses below.\\n1.\\tWe agree that in its current form LTP\\u2019s final keep-ratio is only indirectly controlled via $\\\\lambda$. However, an explicit pruning schedule can easily be imposed on LTP where the pruning process is split among several rounds, each targeting a preset keep-ratio. In this form, $\\\\lambda$ is started off with a small value, but gradually increased to make sure each round\\u2019s target keep-ratio is reached at the desired point.\\n\\n2.\\tWe will make changes to the experiments section to make each assertion supported by either a graph or a table. Regarding efficacy of our soft $L_0$ compared to $L_2$ & $L_1$ in the presence of batch normalization, please refer to our response #11&12 To reviewer #2. For a different perspective (admittedly not related to batch-normalization), please refer to response #2 to reviewer #1.\\n\\n3.\\tWe intended to use $\\\\delta(.)$ instead of $\\\\sigma(.)$ as it is often used to denote the Dirac \\u201cdelta\\u201d function. We will fix this and include the derivative in its definition.\\n\\n4.\\tWe will use \\u223c and \\u2272, when defining sigmoid function\\u2019s transitional region, and clearly state their definitions.\\n\\n5.\\tWe will define $\\\\eta$, i.e., learning-rate, before using it. We explained equation (8) after equation (9), but we do agree that this part needs to be revised.\\n\\n6.\\tWe will define $\\\\lambda$ after (9), and $\\\\eta_{\\\\tau}$ after (11). There is only one threshold learning-rate, hence we will drop subscript $l$. We will move (9) before (8).\\n\\n7.\\tWe will rewrite the section around equation (12) to make it more readable.\\n\\n8.\\tA weight\\u2019s distance from y=x indicates how much it changed during pruning. Figure 1 (left) shows that many kept and pruned weights changed only slightly, while some kept weights grow significantly (points to the left and above the red lines) and some pruned weights shrunk a lot (points to the right and below the red curves). The Motivation of this plot was to show that, because of the latter fact, LTP is not strictly a magnitude-based pruning scheme. The motivation of the right plot was to show the importance of (8), which ensures that pruning does not stall.\\nThe scale of the two plots are different because pruning in the right plot stalled early on, resulting in a much smaller pruning threshold (3.6 vs. 0.7). A gap around the threshold is undesirable as it indicates that sigmoid function\\u2019s transitional region is depleted of weights, causing $L_0$ not to vary for small changes in the threshold. This stalls the pruning process (c.f., text between (7) and (8)). \\nThe scatter plots are not suitable for inferring proportions as a point may represent a single weight, or multiple ones falling on top of one another. We will add more details to Figure 1 to make it clearer.\\n\\n9.\\tFigure 2 compares $L_0$ vs. no regularization (vs. $L_2$). We will move Figure 1, which compares using (3) instead of (14), to the ablation study and move that section after the ImageNet Pruning. This covers both suggested scenarios. \\n\\n10.\\tTable 2 has three sections; each covering comparison points with the same original model (Caffe/TorchVision/STR). We have repeated LTP with STR model and will update Table 2 to include it. This has the added benefit that LTP will be the last entry of TorchVision and STR sections. 
We will label LTP results as \u201cours\u201d. For comparison to Kusupati, please refer to our response #7 to reviewer #1. We will add the number of epochs to Table 2 to provide a better comparison between LTP and Renda.\n\n11.\tWhile Figure 3 does not report on other methods, it does provide information on LTP\u2019s consistency (please refer to our response #4 to reviewer #2). We will explain Figure 3 in a separate paragraph and swap presentation of Tables 4 and 3.\n\n12.\tTorchVision\u2019s AlexNet is different from that of Caffe, with the former having fewer parameters in its convolutional layers (2.5M vs. 3.75M). We tried to import Caffe\u2019s pre-trained AlexNet to torch (using https://github.com/jiecaoyu/pytorch_imagenet), but did not succeed. Since the two models, despite differences, bear many similarities, we felt comparing results would still be useful, and we have stated the caveats clearly. We will remove the speculative comment \u201c, and we conjecture can be compressed less\u201d from the revised paper.\n\n13.\tWe will present Table 5 first and re-arrange it for enhanced readability. \n\n14.\tWith respect to Kusupati, we have repeated LTP on the STR baseline and show a more comparable result (please refer to our response #7 to reviewer #1). We note that Ye et al. need 730 epochs to compress AlexNet, whereas LTP needs less than 100 epochs for most networks. We will add the suggested comment \u201cOur method needs considerably less hyper-parameter tuning than existing SoTA methods, whilst achieving nearly the same accuracies for similar levels of compression\u201d to Table 3\u2019s text.\"}",
"{\"title\": \"Learned Threshold Pruning\", \"comment\": \"We thank you very much for your comments. Please find our responses below.\\n1.\\tWe appreciate reviewer\\u2019s desire for an overhaul of our results, where SOTA methods are implemented & top1 plots for a range of keep-ratios are given, but we hope following facts demonstrate that our method of identifying relevant SOTA & comparing against them is both effective & practical. First, reproducing SOTA results is non-trivial (Zhang reported 21x compression for AlexNet but using their code we could not go above 10x). Even if code/hyperparameters are given, producing so many points, each requiring 100\\u2019s of epochs, is computationally very intensive. Second, we note that GMP uniformly beats all other baselines in Figure 4 of Kusupati and that Renda, Zhang and Ye (all beating GMP) are absent.\\n2.\\tMany SOTA works (Mao, Zhang, Ye) only present a few comparison points as a table (Kusupati also gives MobileNetV1 results as a table). Tables are effective as they focus on inflection points of tradeoff curves where accuracies are still high & useful; reporting very high or low keep-ratios where accuracy has not dropped at all, or dropped too much, is not useful.\\nTable 3 compares computational efficiency of methods; hence only reports number of epochs needed to achieve a certain compression rate. Tables 2 and 4 report Top1 and Top5 wherever the original paper provided them. Tables 2, 4 and 5 pertain to different DNN\\u2019s, with differing baselines as each paper only considered a subset of them. We will add comments to clarify these points. \\n4.\\tWe agree that unstructured pruning has seen many advances & provide a thorough account in our Related Work. Cognizant of this, we identified SOTA works for each DNN, e.g. Mao, Ye, Renda, Kusupati for ResNet50, and Han, Manessi, Zhang, Ye for AlexNet, and compared to them. For MobileNetV2, EfficientNet-B0 & MixNet-S there are no results in the literature yet, hence we used GMP as baseline. We will augment Table 2 with more comparison methods, e.g., GMP, DNW, etc. \\n5.\\tFigure 3 shows error bars for 10 runs of LTP on ResNet50. The bars give top1s\\u2019 standard deviation, centered at average top1 for these 10 runs. The plot attests to LTP\\u2019s consistency; typical standard deviation is around 0.1%. We will clarify this in Figure 3. \\n6.\\tFor performance comparisons to STR, c.f., response #6&7 to reviewer #1. While STR and LTP\\u2019s soft pruners both allow for learning per-layer thresholds, latter also allows for keeping small but important weights & pruning large but redundant ones, which differentiates LTP from strictly magnitude-based methods (c.f., responses #1 and #2 to reviewer #1). LTP\\u2019s soft pruner also provides a parameter (Sigmoid\\u2019s $T$) as a control knob. Kusupati does not comment on whether theirs allows for such behavior. We will add comments to Figure 1 to further clarify. \\n7.\\tFor hyperparameters, please refer to our response #5 to reviewer #1.\\n8.\\tWe will add number of pruning and finetuning epochs to Table 3. LTP does not have an explicit pruning schedule, instead it exhibits a natural polynomial-like schedule as seen in upper-left plot of Figure 2 (Zhu and Gupta 2017 also reported that a polynomial schedule gave them their best results). $\\\\lambda$ determines how rapidly LTP gets to target keep-ratio. 
MobileNetV2 is architecturally more compact than the over-parameterized ResNet50, therefore we set $\lambda$ such that LTP pruned it more gently, which translated to a larger number of epochs. We will add comments to Table 3 clarifying this. \n9.\t$\sigma_{kl}^2$ is the variance of layer $l$\u2019s empirical weight distribution. \n10.\tLTP operates on pre-trained models, hence it is oblivious to the training parameters of the original papers. We used a learning rate of 1e-3, a batch size of 128 and the SGD optimizer (momentum of 0.9) for all results. We will add these details to the hyperparameter section. \n11.\tExcept for Table 5, all points are from the original papers. \n12.\tThe deficiency of $L_1$ and $L_2$ has been discussed in Laarhoven 2017 & Hoffer 2018, and in ML blogs such as https://blog.janestreet.com/l2-regularization-and-batch-norm/. We have provided evidence in our ablation study. The top two plots of Figure 2 show that LTP with $L_0$ or $L_2$ prunes ResNet20 to a keep ratio of 0.7 in 35 epochs. With $L_0$ the model keeps its original training top1 of 85%, whereas $L_2$ causes it to drop to 73%. The lower-left plot shows that the magnitude of kept weights remains constant under $L_0$ but exponentially drops with $L_2$, in accordance with section 3.2.\n13.\tWe agree that $L_1$ provides some regularization, but as observed, only to the extent that it approximates $L_0$. Equation (6) reveals that our soft $L_0$ is in fact an $L_1$ norm that is applied to thresholded, rather than the original, weights. Hence it is a better approximation to $L_0$ than $L_1$ (see also response #2 to reviewer #1). We will add comments to further clarify this.\n14.\tWe will add more details on GMP in our Related Work & revise section 3.3 to streamline it more.\"}",
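The batch-normalization argument in responses #12 and #13 above rests on scale invariance, which is easy to check directly; a minimal sketch with made-up shapes, computing batch statistics inline rather than via a BatchNorm module:

```python
import torch

torch.manual_seed(0)
x, w = torch.randn(32, 8), torch.randn(8, 4)

def bn(z):  # batch normalization using the batch's own statistics
    return (z - z.mean(0)) / z.std(0)

out1, out2 = bn(x @ w), bn(x @ (10.0 * w))
# True: the output is invariant to rescaling w, so an L2 (or L1) penalty can
# shrink the weights freely without changing the function the network computes.
print(torch.allclose(out1, out2, atol=1e-5))
```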
"{\"title\": \"Learned Threshold Pruning\", \"comment\": \"We thank you very much for your comments.\\n\\nAs the reviewer notes, LTP replaces the sigmoid function by the step function to hard prune the network at the end of the pruning process. As stated in section 4.1, selecting a small enough T (e.g., 1e-3 times the variance of layer\\u2019s empirical weight distribution, c.f., equation (15) and Table 1) ensures that the performance of the soft-pruned and hard-pruned networks are close, and resulting performance loss, if any, would be recovered during the subsequent finetuning of the hard pruned network.\"}",
"{\"title\": \"Learned Threshold Pruning\", \"comment\": \"We thank you very much for your comments. Please find our responses below.\\n1.\\tLTP keeps a small but important weight, $w_{kl}$ by constantly pushing it above the threshold; assume that the threshold grows large enough such that $w_{kl}$\\u2019s mask drops just below $1$, e.g., $0.95$, resulting the soft pruned weight $v_{kl}$ to drop from its locally-optimal value of $w_{kl}$ to $0.95 \\\\times w_{kl}$. Since $v_{kl}$ is important, the backprop restores it to its original value by scaling up $w_{kl}$ by a factor of $1.05 = 1/0.95$. Note that this process of pushing small but important weights above the threshold is due to LTP\\u2019s soft-pruning, not soft regularization ($L_0$ or otherwise). We will add clarifying details to Figure 1. \\n2.\\tLTP prunes a large but redundant weight, $w_{kl}$, by dragging it below the threshold. Assume the threshold grows large enough such that $w_{kl}$ just enters sigmoid\\u2019s transitional region. As such, $w_{kl}$ receives a second gradient with respect to $L_0$, in addition to gradient with respect to classification loss. Since $w_{kl}$ is redundant, $L_0$ gradient is dominant and causes $w_{kl}$ to decay below threshold. While $L_2$ or $L_1$ may have a similar effect, using $L_0$ is advantageous as it targets weights next to pruning thresholds (within the transitional region), whereas $L_2$ or $L_1$ place more importance on larger weights that are further away. This allows $L_0$ to better measures various layers pruning potential. Also, $L_0$ is not affected by batch normalization as detailed in the paper. We will add clarifying details to Figure 1.\\n3.\\tWe have provided a thorough review of related works in our paper. The referenced work, (Louizos et al 2018), has been cited in the first paragraph of Section 2. Also, (Louizos et al 2018) provided some results for MNIST and CIFAR, but none for ImageNet, which is the focus of our paper.\\n4.\\tMost pruning methods, iterative magnitude-based pruning (Han et al 2015a and b), weight & learning-rate rewinding (Renda et al), progressive ADMM (Ye et al 2019) operate according to a pruning schedule where pruning is split among several rounds, each targeting a specific pruning ratio subject to a computational budget (~40 epochs for ImageNet). As such, they do not provide a continuum of pruning ratios; if the accuracy at the end of a pruning round is not acceptable, one needs to repeat the last few pruning rounds (potentially involving tens of epochs) with a less aggressive schedule. This contrasts with LTP where toward the end, the keep ratio and model accuracy change very gently from one epoch to the next. Hence in a single run of LTP, we get a series of checkpoints with different pruning ratio vs accuracy tradeoffs to choose from. \\n5.\\tLTP has 3 hyper parameters: $L_0$ loss multiplier ($\\\\lambda$), sigmoid temperature multiplier ($T_0$) and threshold learning-rate ratio ($\\\\eta_{\\\\tau} / \\\\eta$). $\\\\lambda$ controls the final pruning ratio and is the only parameter that needs per scenario adjustment. The $\\\\lambda$ values in Table 1 provide guidance on how to choose for a new architecture. For all results in the paper we have used a fixed $T_0$ of 1e-3 and although we have used 3 values for $\\\\eta_{\\\\tau} / \\\\eta$, our hyper-parameter search showed that any value between $1e-5$ and $1e-7$ works as fine. Hence $T_0 = 1e-3$ and $\\\\eta_{\\\\tau} / \\\\eta = 1e-5$ provide a reasonable choice for a new architecture. 
LTP is very robust to hyper-parameters. We will add some clarifying comments to the hyperparameter section. \\n6.\\tRegarding novelty, LTP and Kusupati (a contemporaneous work, as they cite LTP) are among the first differentiable methods for unstructured pruning of neural networks. Also, LTP\\u2019s differentiable $L_0$ regularizer is quite novel. In addition, we are the first to report unstructured pruning results (beyond global magnitude-based pruning) for MobileNetV2, EfficientNet-B0 and MixNet-S. We will add comments to better clarify the novel aspects of LTP.\\n7.\\tRegarding Kusupati, we had used TorchVision\\u2019s pretrained model (a standard model) as the baseline. Kusupati used another model with a 0.85% top1 accuracy advantage. Based on the reviewer\\u2019s suggestion, we have repeated LTP with Kusupati\\u2019s baseline and achieved a top1 accuracy of 74.38% at a compression rate of 8.84x (keep percentage of 11.3%) after 28 epochs, compared to Kusupati\\u2019s top1 of 74.31% at a compression rate of 10.24x (keep percentage of 9.8%). We will update Table 2 to include this comparison point & change \\u201cmatching\\u201d to \\u201ccomparable\\u201d in its text.\\n8. Renda requires 900 epochs compared to LTP\\u2019s 30, a significant difference. We agree that weight and learning-rate rewinding are retraining and not pruning schemes (Renda uses global threshold pruning). Hence, weight or learning-rate rewinding can easily be used on top of LTP and would likely improve its performance. We will test this in our future work.\\n9. We will add a new figure showing per-layer keep-ratios & pruning thresholds.\"}",
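To tie responses #1, #2 and #5 together, here is a hedged sketch of how the three hyperparameters could enter an LTP-style objective (our reconstruction from the responses, not the authors' code): the classification loss plus $\lambda$ times a soft $L_0$ that sums the sigmoid masks, i.e., an $L_1$ penalty on the thresholded masks rather than on the raw weights.

```python
import torch
import torch.nn.functional as F

def ltp_objective(logits, targets, layer_weights, thresholds, lam, T0=1e-3):
    """Task loss + lam * soft-L0.  `logits` are assumed to come from a forward
    pass that uses the soft-pruned weights; `layer_weights` and `thresholds`
    are lists of per-layer tensors/scalars.  All names are illustrative."""
    loss = F.cross_entropy(logits, targets)
    for w, tau in zip(layer_weights, thresholds):
        T = T0 * w.detach().abs().var()                    # Eq. (15)-style T
        loss = loss + lam * torch.sigmoid((w * w - tau) / T).sum()
    return loss

# Per response #5, thresholds get a much smaller learning rate than weights
# (eta_tau / eta ~ 1e-5), which can be expressed with optimizer param groups:
# opt = torch.optim.SGD([{"params": weights, "lr": 1e-3},
#                        {"params": thresholds, "lr": 1e-3 * 1e-5}],
#                       momentum=0.9)
```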
"{\"title\": \"A parameter based pruning method based on a relaxation of L0 regularization and a novel per layer threshold learning\", \"review\": \"This paper proposes soft pruning and soft l0 regularization.\\n\\nThe soft pruning learns a binarization function based on a sigmoid. I am curious to understand how the algorithm can let some low magnitude weights not be pruned and some large magnitude weights be pruned in the end. The paper claims the key is updating all the weights while maintaining the masks for inference. That seems to me ineffective compared to L1 or similar methods. Also, even in the case the weights are not masked, eventually, those gradients should become zero (as they are masked in the forward pass) and therefore preventing them from having large values, isn't it? That needs some clarification. \\n\\nAs the paper then focuses on L0 regularization, I missed comparisons to related work on that regard. This is not the first paper aiming at using L0 and proposing a differentiable approach. How this compares to others? For instance, a quick search led to LEARNING SPARSE NEURAL NETWORKS THROUGH L0 REGULARIZATION in ICLR 2018. \\n\\n\\n\\nI am confused with one of the contributions is to \\\"provides a trace of checkpoints with varying pruning ratios and\\naccuracies. Because of this, the user can choose any desired checkpoint based on the sparsity\\nand performance requirements for the desired application\\\"\\nWhy is this different from any other approach? As soon as the code saves the checkpoint (which most do) then, the user has access to the same flexibility, right?\\n\\n\\nThe section about the hyperparameters is confusing. How are the hyperparameters determined? Each architecture is using a different hyperparameter, how a user could set these?\\n\\nThe experimental results do not seem to support the novelty and the text is kind of misleading. LTP is below Renda and Kusupati. Text suggests Kusupati uses a better baseline, would it be possible to show results of LTP on that baseline? if not, why not?\\nFor Renda, seems like the key difference is the training process. Would be good to see the benefits of a longer training process for LTP. It is not clear to me that LTP can get better results even training for longer. \\n\\nResults on more compact networks compared to the self-implementation of Global pruning seem promising for larger compression rates. \\n\\nThe paper also claims the benefit of learning the threshold per layer, however, provides no result on the distribution of those parameters. Would be interesting to see how these values are distributed in each architecture to reinforce the value of not using a single value for all layers and architectures.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review of the paper Learned Threshold Pruning\", \"review\": \"The paper proposed a new method to prune a neural network. The method is interesting, innovative and effective. It makes it possible to learn tunning parameter via back propagation, hence learn together with network's weights.\\nThe work is well motivated. \\nThe paper is well structured, the writing is clear and easy to follow.\\nThe conducted experiments are thorough and clearly show the efficiency of the proposed method. The paper contains enough information to replicate the experiments.\\n\\nThe work would be beneficial for others if the code is published open.\", \"a_question_for_clarification\": \"When hard prunning the network (section 4.1), we just replace sigmoid(x) by step(x)?\", \"post_discussion_update\": \"I have read the updated paper and other reviews, especially from reviewer #2. While I am still positive about the approach/methodology, I am not confident about the technical details of the experiments, without which, it's very hard to justify the effectiveness of the method. I share other reviewers' views regarding inconsistencies, e.g. Tables 2-5, that have not been fixed in the updated paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An exciting new approach to pruning but the execution is flawed\", \"review\": \"## Summary\\n\\nThe paper introduces a new type of soft threshold operator in conjunction with appropriate weight regularization that can be used in the context of neural network pruning to obtain sparse, performant networks from pre-trained, dense networks. The main idea is to replace the Heaviside step function that occurs in \\\"hard threshold\\\" pruning, which is non-differentiable, by a sigmoid function that can be differentiated and thus enables the efficient training/optimization of relevant pruning parameters. Pruning is hereby performed on a per-layer basis by training a regularized per-layer threshold. \\n\\n## Score\\n\\nI am quite intrigued by the method and I think it has potential. It seems easy enough to implement while providing decent improvement over competing methods, at least judging from the sparse experimental results that were presented. This brings me to the big weakness and the reason for my score. The experiment section does not provide enough evidence in my opinion to justify acceptance since I cannot say with full confidence that the method reliably performs well. And even when the approach does not outperform existing methods in all aspects, at least I would like to be able to judge what are the scenarios where the method performs well. More details are provided below. \\n\\n## Ways to Improve My Score\\n\\nMainly, please address the points I mentioned in the \\\"Weaknesses\\\". Some of them can be addressed by updating the writing. However, the major concern of mine is the experiment section. I think it requires a full overhaul including more comparison methods, standardized experiment settings, better organized presentation of the results, and clear description of the hyperparameter choices. \\n\\n## Strengths\\n\\n* The concept of the introduced soft threshold operator is easy to follow and intuitive. I appreciate the detailed description of the resulting derivatives in Section 3 and the provided intuition. As such the method is well-described and the benefits are clear. \\n\\n* The various aspects of their ablation studies are interesting, c.f. Figure 1, Figure 2. It helps better understanding some of their design choices. \\n\\n* The presented experimental evidence seems to hint at very decent performance, especially at the ImageNet scale. This is encouraging to see and underscores the intuition behind the method. \\n\\n## Weaknesses\\n\\n* To me, the biggest weakness is the presented experimental evidence. It seems scattered, the presentation is confusing, and most of all it makes it extremely difficult to assess the performance gains of the presented method in comparison to existing methods. Some points that I would like to list specifically are below: \\n 1. There is no single figure that allows me to assess the prune-accuracy trade-off across a large range of prune ratios and for various comparison methods. Something like Figure 4 in the paper by Kusupati et al. 2020 (https://arxiv.org/pdf/2002.03231.pdf) is, in my opinion, necessary in order to more reliably assess the resulting performance. \\n 2. The authors only present a selected set of prune ratios and resulting accuracies in their Tables 2-5. Also the results are presented inconsistently. Table 3 only reports compression rate. Table 4 reports Top-5 accuracy and compression rate but not Top-1 accuracy. 
Tables 2 and 5 present Top-1 and Top-5 accuracy but only a few selected comparison methods that differ from the ones presented in Table 4. \\n 3. More comparison methods: Unstructured pruning has seen quite a few advancements over the last couple of years and as such I believe it is crucial to compare to many more pruning methods. This is particularly important since standardized comparisons are missing and so simply presenting some results gathered from other papers is not enough in my opinion. \\n 4. Were experiments repeated multiple times? I can see that Figure 3 was based on 10 repeated runs. What do the error bars represent? Also, what about the other experiments? Could the authors clarify how the other numbers were generated and also report mean and standard deviation? The experiments should be repeated at least a couple of times. \\n\\n* A more thorough comparison to STR (Kusupati et al. 2020, https://arxiv.org/pdf/2002.03231.pdf) is needed. STR shares a lot of similarities with this work in the sense that STR also introduces a per-layer threshold for pruning that can be efficiently optimized using a differentiable soft threshold operator. There is only one comparison point in Table 2, which however seems to be based on a different implementation, thus resulting in a different baseline accuracy. There is also no discussion of how the soft threshold operators of STR and LTP differ and what makes one better than the other. \\n\\n* The experimental hyperparameters are not fully listed and the ones that are listed are scattered throughout the paper. Also, the authors did not provide code, so I couldn't check their implementation either. In particular: \\n 1. The contributions list in the introduction mentions the number of pruning and fine-tuning epochs for some experiments but the experiment section doesn't provide a full overview of pruning+fine-tuning epochs. Table 3 provides some of these numbers but not all details. Also, what are the particular reasons for the choices? ResNet50 seems to require 30 total pruning epochs, while MobileNetV2 requires 101 epochs. Why? \\n 2. What does $\\\\sigma^2_{|w_{kl}|}$ in equation 15 refer to? \\n 3. What about training parameters? Are those the same as in the original paper? What about an overview table with all training parameters? \\n 4. How were the comparison methods implemented? Were they even implemented or were the results taken from the respective papers? \\n\\n* I am not convinced that $L_1$ and $L_2$ regularization don't work for pruning in general, as the authors claim in Sections 2 and 3.2. I understand their point, and in their case their approximation to $L_0$-regularization indeed seems to play a central role, but I wonder how true this statement is in general. In particular, while batch normalization (BN) may allow for arbitrary re-scaling of layers, the _relative magnitude_ of weights may still be impacted by $L_1$-regularization. Moreover, after all, they don't use $L_0$-regularization either, just another differentiable approximation to $L_0$-regularization (just like $L_1$ is a differentiable approximation to $L_0$). \\n\\n## Other Minor Feedback\\n\\n* I find the introduction and related work interesting and they serve as an appropriate motivation for the work. However, I find it somewhat disingenuous and/or misleading not to mention global magnitude-based pruning in the first sections. 
The authors cite a lot of related work that requires manual and/or more complicated approaches to identifying per-layer sparsity patterns when the most obvious solution is to perform global magnitude pruning (GMP). Since this usually works quite well as a baseline and the authors also compare to GMP in their experiment section, I believe this merits a longer discussion where the authors compare to GMP. \\n\\n* Section 3.3 could be simplified. I was pretty confused since a lot of prior equations are cited in the explanation between newly introduced equations. That makes it pretty hard to read and I believe the introduced concepts and resulting equations could be streamlined. E.g. you could introduce all relevant equations in a continuous paragraph instead of jumping between equations, which results in the heavy use of equation citations.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Really like the approach, and the exposition; experiments section could be made clearer potentially\", \"review\": \"My overall feeling about this paper is that I really liked it. I felt it was clearly written, nice pacing, didn't bombard us with things we already know, nor did it skip over things we don't know about. I like the approach of using the sigmoids to be able to learn the thresholds and the weight pruning. I really like this approach. In practice, there are still several hyper-parameters to tune, which are apparently model-specific (table 1), and it seems unclear how to directly set the sparsity (we can only influence it using lambda, not set it directly, as far as I can see?). Nevertheless I do really like this approach and direction, ie of learning things as much as possible. The approach seems to me analagous to using eg Adam instead of SGD.\\n\\nI feel the experiments section could be written a bit more clearly so as to make the assertions in the conclusion and introduction stand out as really obvious. For example, the assertion that LTP can be used in the presence of batch normalization was not made apparent in the experiments section I felt, nor was it shown as a benefit compared to other baselines, I felt.\", \"details\": \"'computationally extensive' => 'computationally expensive'\\n\\nfigure 1 really far from where it is first referenced. had to hunt for it....\\n\\nwriting really clear. Good exposition of related papers. Pacing very nice. Very easy to understand.\\n\\nGood choice of which information to present. (cf papers that present lots of well-known knowledge, or skip over some key advanced concepts).\\n\\nKind of a detail, but I like the use of $sigm$ to denote sigmoid, (cf many papers will write the sigmoid out in full, which makes the equations much harder to read)\\n\\nPersonally, I would prefer that $\\\\sigma_T(w_{kl}) = \\\\frac{\\\\partial sigm((w^2_{kl}-\\\\tau_l)/T)}{\\\\partial w_{kl}}$ is defined before equation 3. Otherwise I first read equation 3, wondering what is $\\\\sigma_T$, and then realize it is defined underneath.\\n\\nTo be honest I'm not a fan of the notation $\\\\sigma_T(\\\\cdot)$, since when I first read it I parsed it as $sigm_T(\\\\cdot)$. I would prefer either using some other symbol, or perhaps writing out the full partial derivative, though that seems probably long, without some other symbol, or perhaps adding a $'$, like $\\\\sigma_T'(\\\\cdot)$.\\n\\nI'd also prefer that the definition of $\\\\sigma_T(w_{kl})$ includes the derivative in this definition, ie:\\n\\n$$\\n\\\\sigma_T'(w_{kl}) = \\\\frac{\\\\partial}{\\\\partial w_{kl}} sigm\\\\left(\\\\frac{w^2_{kl} - \\\\tau_l}{T}\\\\right) = ... etc ....\\n$$\\n\\nI like the paragraph that describes the behavior of $\\\\sigma_T'$\\n\\nI like the exposition of the various regularization methods. Concise and yet easy to understand.\\n\\n$\\\\eta$, in equation 8 is not obviously defined anywhere. I went hunting for it, but couldn't find it. Please define it before using it.\\n\\nEquation 8 comes out of nowhere, with no explanation of what it means, or why it is. The rest of the paper explains things very well, but equation 8, I'd have to think a lot, wasn't immediately obvious to me where it comes from, what it means.\\n\\nI'm not sure what the syntax $\\\\sim T$ means here. It normally means 'is distributed as', but that meaning doesn't seem to make sense here? 
It looks like it's being used to mean $\\approx$?\\n\\nOk, I had to go all the way back to equation 2 and the paragraph after it. Looks like $\\sim$ is being used here to mean $\\approx$. I think that using $\\approx$ would be more standard, and easier to understand? Ok, I googled $\\sim$, and it turns out that it can often be used to mean 'is approximately the same order of magnitude as', eg https://math.stackexchange.com/a/2177014/45703 But I personally found it confusing because it is very often used to mean 'is distributed as', so I would prefer to have a short explanation like 'where $\\sim$ means \\\"is approximately the same order of magnitude as\\\"'\\n\\nI guess the other reason I find it confusing though is that this equation implies to me that $|w^2_{kl} - \\tau_l| = 0$ would not be in the region, but in fact the region is, I feel, something like:\\n\\n$$\\n|w^2_{kl} - \\tau_l| \\\\lesssim T\\n$$\\n\\nPersonally I think I would prefer the conditions for the transitional region written in this way; it would be less confusing for me, I think.\\n\\nSimilarly, equation 5 would be:\\n\\n$$\\n\\\\sigma_T(w_{kl}) \\\\approx \\\\frac{1}{T}, \\\\text{ for } |w^2_{kl}-\\\\tau_l| \\\\lesssim T\\n$$\\n\\nAnd then equation 8 becomes:\\n\\n$$\\n\\\\eta \\\\cdot \\\\left| \\\\frac{\\\\partial \\\\mathcal{L}^T}{\\\\partial w_{kl}}\\\\right| \\\\ll T , \\\\text{ for } |w^2_{kl}-\\\\tau_l| \\\\lesssim T\\n$$\\n\\nI'm not sure though why this derivative is in this constraint? Isn't the constraint simply that\\n\\n$$\\n\\\\sum_{l=1}^L \\\\sum_{k=1}^K I\\\\left[ |w^2_{kl} - \\\\tau_l| \\\\lesssim T \\\\right] > m\\n$$\\nwhere $I[\\\\cdot]$ is an indicator function, and $m$ is some positive integer?\\n\\nI.e., at least some points need to be in the transitional region. I'm not sure I follow why we need a derivative in the constraint. Please can you add some description around equation 8 so I can follow what is going on :)\\n\\nAnd also, trying to work through equation 8, it seems like it is saying that we want to make the derivatives as close to zero as possible, relative to T. But isn't this the opposite of what we want? Don't we want to have a reasonable number of derivatives that are not near zero?\\n\\nOk, after equation 9, we get some explanation for equation 8 :) But I think the explanation could be moved forward somewhat :)))\\n\\nOk, based on this explanation, i.e. the one after equation 9, $\\eta$ is probably the learning rate. But please define $\\eta$ near equation 8 :)\\n\\nFrom the explanation, I'm not sure that equation 8 is a condition that actually *prevents* premature pruning, so much as a heuristic to minimize weights moving too quickly out of the transitional region. Preference to be clearer about this, since it would certainly have helped me to understand equation 8 more easily and quickly :)\\n\\nequation 11: the brackets could be nicer looking if you use \\\"\\\\left(\\\" and \\\"\\\\right)\\\", I feel (so they are as large vertically as the derivative fractions they contain)\\n\\n$\\lambda$ in equation 11 is not defined. From section 4.1, we can see it is a hyper-parameter to be tuned, but that is not stated in equation 11. Preference to state at equation 11 that $\\lambda$ is a hyper-parameter, and $\\eta_{\\tau_l}$ is the learning rate for the threshold of layer l. Hmmm, does this mean that each layer has its own learning rate to tune for its threshold? If there is one single learning rate for the thresholds, I think it might be clearer to represent it as $\\eta_\\tau$? 
If there are per-layer learning rates, then this seems to contradict the implied promises in the introduction that we don't have per-layer hyper-parameters to set?\\n\\nOk, looks like $\\lambda$ was first used in equation 9. But it still wasn't defined there, I think?\\n\\nI'd also prefer that equation 9 was defined before presenting equation 8, on the whole. This way I can read sequentially, and not have to skip forwards and backwards.\\n\\nThe sentences just before and after equation 12 are very complex and hard to take in. Please consider breaking them into smaller, simpler sentences, e.g. \\n\\n\\\"$\\partial L_T/\\partial w_{kl}$ is given by 9, 10, 3 and 7 as:\\n\\n(equation goes here)\\n\\nThis includes $\\sigma_T(w_{kl})$, which from equation (5) is $\\approx 1/T$, and will become large for $T \\ll 1$. This means that the gradient will become large, and constraint (8) will be violated.\\n\\\"\\n\\nFigure 1 I feel needs a lot more explanation.\\n- why are they so symmetrical about the y=x line? doesn't this imply that the number of weights less than the threshold before pruning and the number of weights less than the threshold after pruning is similar?\\n- why is the y-axis labeled 'w'? I thought the pruned weights are 'v'?\\n- why make the plot? What is the motivation of this plot?\\n- why is the scale of the axes radically different between left and right (0-16 vs 0-3)?\\n- why is the left hand plot preferable to the right hand plot?\\n- why is the gap around the threshold in the right hand plot a bad thing?\\n- why is the proportion of weights below the threshold similar in both left and right?\\n\\n\\n4. Experiments\\n\\nI like the table of hyper-parameters in table 1. (cf papers that skip over which hyper-parameter settings were used, making reproducing the work challenging)\\n\\nAppreciate the explanation of the significance of $\\lambda$ as the primary hyper-parameter determining the sparsity levels. I feel that this explanation could be moved back to equation 9.\\n\\nAppreciate the observations on how to set $\\lambda$, and $\\eta_{\\tau_l}$.\\n\\nI wouldn't really call figure 2 an 'ablation study'. It's more like a comparison study of various baselines and approaches, I feel? An ablation study I feel would be more like:\\n\\n- no regularization\\n- dropping the second term in 12 (and simply not using any clamping etc. in its place)\\n\\nI think the ablation study should go after the imagenet pruning results. (I mean, I think it's traditional to put ablation studies after the section of results vs other baselines/sota models)\\n\\nTable 2 is very unclear to me\\n- why is your method not at the bottom of the table\\n- I think your own method should be in bold and say \\\"(ours)\\\" after the name\\n- looking at the table, it's not very clear why we should choose LTP?\\n - the highest rate and top1 look to be Kusupati et al?\\n- in the text description, it says that Kusupati uses a stronger baseline, i.e. STR\\n - why don't you use STR too?\\n- in the text it says that Renda et al needs more training\\n - why not put the amount of training required as an additional column in the table?\\n\\nJumping to table 3 mid-paragraph is, I think, jarring. I think first talk about table 2 in one paragraph, then talk about table 3 in the next. Or at least don't mix and match across tables without having presented each table on its own first, I feel. Like the sentence 'In fact as table 3 shows' cannot, I feel, precede the clause 'Finally, figure 3 provides ...', which presents what is table 3. 
Oh, that's figure 3, not table 3. Anyway, I think table 3 needs some introduction, please.\\n\\nYes, so, it seems to me that the resnet50 results from table 3 could be added to table 2 perhaps?\\n\\nFigure 3 should be in a separate paragraph, since it is not a comparison with baselines/other models. It's just an observation about LTP itself. And really, without any comparison with how other models/pruning strategies fare, I'm not sure it is very meaningful to me? Like, for all I know other models have smaller error bars?\\n\\nI think the table 4 presentation should immediately follow the table 2 presentation.\\n\\nThen the table 3 presentation.\\n\\nFigure 3 might be best in a separate 'appendix'-y sub-section at the end of section 4, I feel. Since it's not comparing to other models, like tables 2, 4 and 3 are.\\n\\nTable 4. If torchvision gives worse results than caffenet, then why not use caffenet instead, or re-implement the caffenet version of alexnet in torch? Otherwise, we can see that the LTP results in table 4 don't match the baselines, and we cannot tell if this is because the torchvision baseline is weaker, and LTP is strong, or whether LTP is weaker than e.g. Ye et al.\\n\\nIn fact, Ye et al only drops 0.1% top-5 accuracy compared to the original, whereas LTP drops 0.4% compared to the torchvision original, so I feel that justifying the lower top-5 error rate on the weaker baseline model is not entirely sufficient?\\n\\nTable 5 looks like the strongest table to me. Might be worth putting it first? I feel that it could be useful to highlight the top results in each column in each scenario in bold? I think that ideally each column should be a single scenario (whereas here each column is multiple scenarios); then it is easy to highlight the top in each column. For example you could put the different rates as different columns, and use e.g. top-5 accuracy throughout the table. (or put top 1 and top 5 accuracies as tuples perhaps?)\\n\\nI kind of think that table 3 should be folded into the other tables.\\n\\nI think it's not clear from these tables why we should use e.g. LTP instead of Kusupati et al, or Ye et al. I think you should either make it clear in the table somehow, or perhaps put it in the text. Like e.g. \\\"Our method needs considerably less hyper-parameter tuning than existing SoTA methods, whilst achieving nearly the same accuracies for similar levels of compression.\\\"\\n\\nIn the conclusion, you mention batch normalization, but this was not brought to the fore in the experiments section. Like, I would expect to see some models that can only be pruned using LTP, where other approaches fail to prune, but I don't remember this being shown clearly in the experiment section?\\n\\nBasically, I think the assertions in the conclusion are exciting, but aren't made clearly obvious in the experiments section. 
I think for each assertion in the conclusion there should be a single table or graph that shows this assertion very clearly, in comparison to other possible baselines.\\n\\n[post discussion edit]\\n\\nAfter discussion, I lowered my score to 'marginally above acceptance threshold':\\n- the theory section of the paper looks very interesting to me\\n- I find it hard to see clearly from the experimental section the extent to which the method beats existing methods\\n- I feel that the experiments could be made more rigorous to clearly show the benefit compared to other techniques\\n- concretely, I feel that the tables could be structured in such a way that one can glance at each single table, and see clearly in what way LTP is better than the baselines. Concretely, for the results tables:\\n\\n- table 2: LTP gives worse accuracy than Renda, and worse compression. The text mentions training epochs are fewer for LTP, but the table doesn't show this benefit (there is no column with the number of training epochs)\\n- table 3: this table is a little apples and oranges, I feel. It shows that the number of training epochs is less for LTP than Renda, but the compression ratio is slightly less. I feel that you could have compressed a little more, to make the compression ratios comparable. In addition, I feel it is important to include the accuracy in the table. Without accuracy, I feel it is not possible to compare.\\n- table 4: I feel you could do whatever is needed to ensure that the baseline model you are using matches the baseline that other teams are using. This could mean porting LTP to caffe, or porting the caffe network into torch. Currently, the LTP pruning is on a worse 'parent' model, and performs worse than the other baselines in terms of accuracy. I'm not sure it's sufficient to hand-wavingly just add/subtract the delta in performance between the baselines to the LTP results (which is not explicitly being done, but if one doesn't do that, then one would have to assume that LTP performs worse, I feel)\\n- which only leaves table 5 that plausibly provides an apples-for-apples comparison, but only for a single baseline\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
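For reference, the $\sigma_T$ derivative discussed in the review above works out by the chain rule (assuming the operator $sigm((w_{kl}^2-\tau_l)/T)$ quoted there) to:

```latex
\sigma_T'(w_{kl})
  = \frac{\partial}{\partial w_{kl}}\,
    \mathrm{sigm}\!\left(\frac{w_{kl}^2 - \tau_l}{T}\right)
  = \frac{2\, w_{kl}}{T}\, s\,(1 - s),
\qquad s = \mathrm{sigm}\!\left(\frac{w_{kl}^2 - \tau_l}{T}\right).
```

So $\sigma_T'$ is of order $1/T$ inside the transitional region $|w_{kl}^2 - \tau_l| \lesssim T$ (where $s(1-s) \approx 1/4$) and decays exponentially outside it, which matches the order-of-magnitude claim of the paper's equation (5) as quoted in the review.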
]
} |
oyZxhRI2RiE | SCoRe: Pre-Training for Context Representation in Conversational Semantic Parsing | [
"Tao Yu",
"Rui Zhang",
"Alex Polozov",
"Christopher Meek",
"Ahmed Hassan Awadallah"
] | Conversational Semantic Parsing (CSP) is the task of converting a sequence of natural language queries to formal language (e.g., SQL, SPARQL) that can be executed against a structured ontology (e.g. databases, knowledge bases). To accomplish this task, a CSP system needs to model the relation between the unstructured language utterance and the structured ontology while representing the multi-turn dynamics of the dialog. Pre-trained language models (LMs) are the state-of-the-art for various natural language processing tasks. However, existing pre-trained LMs that use language modeling training objectives over free-form text have limited ability to represent natural language references to contextual structural data. In this work, we present SCORE, a new pre-training approach for CSP tasks designed to induce representations that capture the alignment between the dialogue flow and the structural context. We demonstrate the broad applicability of SCORE to CSP tasks by combining SCORE with strong base systems on four different tasks (SPARC, COSQL, MWOZ, and SQA). We show that SCORE can improve the performance over all these base systems by a significant margin and achieves state-of-the-art results on three of them. | [
"score",
"context representation",
"conversational semantic",
"task",
"structured ontology",
"lms",
"conversational semantic parsing",
"csp",
"sequence",
"natural language queries"
] | Accept (Poster) | https://openreview.net/pdf?id=oyZxhRI2RiE | https://openreview.net/forum?id=oyZxhRI2RiE | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"lminC_fgyqn",
"kVaZMkPsLL",
"Iw_NpknlD-1",
"C-fLJY0xoby",
"ID7b5_t5s4J",
"ELVjltvqMO",
"weXKbX5DfmU",
"zBy-S8rxZ62",
"ESnBW707lSx",
"A0OFsCNHQqu",
"j7lMRGuq-Um",
"iuR6ND4yGcJ",
"1Nfwb7K_fE",
"b_0Q-nmnBt5",
"3dHhpdaYjUU",
"rxQbNWIQsdA",
"5O9wmJydn_",
"1zLXUYZTfFb",
"qDvZFFZi1hR",
"slZZXsyZLa"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040376697,
1606279611304,
1606279288648,
1606279254744,
1605884222367,
1605883900362,
1605608078573,
1605492885472,
1605472619787,
1605333881561,
1605328144634,
1605328015884,
1605327960976,
1605327783010,
1605327560440,
1604468816227,
1604438900882,
1603816709563,
1603793746750,
1603786381253
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3773/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes to pre-train contextual semantic parsing models on synthesized data (using a small amount of additional supervised training data and grammar-based generalizations therefrom) with two new training objectives: Column Contextual Semantics (CCS), mapping text to database columns, and Turn Contextual Switch (TCS), to deal with the update semantics between turns.\\n\\nI thank the reviewers for their detailed engagement with this paper, and thanks the authors for their responsiveness in doing extra experiments and rewriting that made this paper better and the decision clearer.\\n\\nPros\\n\\nThe authors did such a great job of summarizing the pros, that I think I can just copy their summary: \\\"We are glad that the reviewers appreciate the novelty and the effectiveness of our proposed approach (R5), find our experiments to be comprehensive and convincing by achieving SOTA on 3 out of 4 different tasks (R1, R2, R3, R4), ablation studies and analysis to be informative and well done (R2, R4), and think our paper is clearly written and easy to follow (R1, R2, R3, R4).\\\"\\n\\nCons\\n\\n- A somewhat specific and ad hoc data synthesis solution\\n- Stronger pre-trained contextual language models might beat assumed baselines or methods shown here (R4, R5)\\n- The story is weak and should be better motivated through discussion of contextualization of interpretation\\n\\nIn general the reviewers recommend accepting the paper, and I agree. However, it is perhaps not of the novelty, clarity, or impact size to qualify for more than a Poster. R5 has a good point about how strong pre-trained LMs are a general tool and should be preferred to the extent they work in 2020, but I think they are too opinionated to suggest this is a reason for rejection. Along with the other reviewers and the authors, I think it is most reasonable to accept work showing good progress using \\\"medium-sized\\\" pre-trained LMs -- really we thought BERT was big a couple of years ago! -- and this work has comprehensive experiments with good results. I would encourage the authors:\\n\\n- To say more about the alternative strategy of instead using a bigger pre-trained LM, as has come out in the discussion on OpenReview, and the pros and cons of this approach (though maybe the results with BART are the only fairly comparable data point)\\n- To strengthen the presentation by orienting the paper more around the importance of contextualization in interpreting dialog turns in conversational semantic parsing (as opposed to the \\\"one turn\\\" nature of the original famous semantic parsing datasets).\\n\\np.s. One typo I noticed in the revised paper while reading: fours --> four\"}",
"{\"title\": \"Summary Response to All Reviewers\", \"comment\": \"We thank all reviewers for their thoughtful feedback. We are glad that the reviewers appreciate the novelty and the effectiveness of our proposed approach (R5), find our experiments to be comprehensive and convincing by achieving SOTA on 3 out of 4 different tasks (R1, R2, R3, R4), ablation studies and analysis to be informative and well done (R2, R4), and think our paper is clearly written and easy to follow (R1, R2, R3, R4).\", \"we_have_updated_our_paper_with_the_following_changes_to_reflect_the_reviewer_comments\": [\"update ablation study of different objectives - pre-training with CCS+TCS (only synthesized data used) objectives achieves the best performance on three tasks (Table 6), which is very significant compared to MLM. (Section 4 - What is the effect of each pre-training objective?)\", \"Reviewer3 and Reviewer4\", \"clarify the usage of datasets in pre-training with different objectives (MLM vs. CCS+TCS vs. CCS+TCS+MLM). Basically, (CCS+TCS) is pre-trained on only synthesized data (Section 3.3)\", \"Reviewer4 and Reviewer5\", \"BART Encoder+Decoder on Spider (Appendix E) - using BART Encoder+Decoder on Spider underperforms SOTA model (RATSQL+BERT)\", \"Reviewer 5\", \"change \\\"SCoRe\\\" function to \\\"RoBERTa\\\" (Section 2.1)\", \"correct the typo in Eq 1 (Section 2.1)\", \"ToD-BERT (Appendix E) - RAT-SQL + SCoRe outperforms RAT-SQL + ToD-BERT by 7.6%\", \"Reviewer 2\", \"TCS experiments (Appendix E) - TCS only improves 2.4% so far on SParC\", \"comparison with Herzig et al. (2020) on SQA (Appendix C.2)\", \"add analysis of few-shot learning with 10% training data on SQA as a separate paragraph (Section 4)\", \"Reviewer 3\", \"data concatenation or multi-task training results (Appendix E) - incorporating the additional SParC examples does not significantly improve the performance on CoSQL and SQA compared with SCoRe.\", \"update equations 2 and 3 by including the formal representation q (section 2.2)\", \"Reviewer 1\", \"improve discussion on motivation in the introduction (section 1)\", \"comparisons with related work on other contextualization methods (section 5)\", \"clarification on TCS and spacing (section 2.2)\", \"move Table 1 to separate it from Figure 1\", \"Reviewer 4\", \"add more data synthesis details (section 2.3)\", \"add citations in Table 3\", \"correct statistics about MultiWOZ 2.1 (Section 3.1 and caption in Table 1)\", \"add more description of SQA (Section 3.1)\", \"add more details on pre-training steps (section 2.2-Pre-Training Setup and Steps)\"]}",
"{\"title\": \"Author Response\", \"comment\": \"We used about 500 examples from SParC to induce the grammar for data synthesis in pre-training. For a fair comparison, we have incorporated the additional SParC examples in CoSQL and SQA. We found this does not significantly improve the performance on CoSQL and SQA compared with SCoRe. Please check Appendix E for more details.\"}",
"{\"title\": \"Summary of discussions with R5\", \"comment\": \"We summarize the discussion that we had with R5 here for the benefit of the other reviewers and the AC. We would like to thank R5 for the detailed discussion and for suggesting that we share a summary of it with everyone.\\n\\n**Using extremely large pretrained LMs (e.g. T5) as baselines**\\n\\nOur own experiments with BART (Appendix E) and results of using T5 for semantic parsing (Shaw et al.) and question answering (Roberts et al.) show than using large pretrained models like underperforms or yields comparable performance to SOTA models (custom models + BEERT/RoBERTa, e.g. RATSQL+BERT). We opt for using the latter as baselines since it is the SOTA published results and are smaller in size, easier to train, deploy, etc.\\n\\n**Task-specific pre-training v.s. larger LM models**\\n1. We agree that exploring whether larger LMs pre-trained on text only can perform well in CSP or dialog tasks is a very interesting question. This is part of an even bigger question on the role of custom models that aim to build innate priors into the models for solving NLP tasks vs. fully relying on one large pre-trained LM.\\n2. This is a heavily debated topic and the NLP community is far from reaching any conclusive outcome. We believe there is value in pursuing both directions and being on one side of the debate shouldn\\u2019t be grounds for dismissing work on the other side.\\n3. We also argue that accuracy is not the only measure, model size matters and smaller model sizes are of interest to the research community due to economical, environmental and wide accessibility concerns.\\n4. Finally, we note that our model is not specific to one task or one dataset. Our experiments show it improves the performance of different base models on 4 different CSP tasks (fully-sup., weakly-sup, and dialog state tracking, QA, semantic parsing).\\n\\n**Fairness of using 500 examples from SPARC in the synthetic data generation**\\n1. We ran additional experiments as suggested by R3 (see Appendix E). Our results show than adding these 500 examples in the training process yields no or very small gains.\\n2. Note that in our original experiments, we tested the effectiveness of SCoRe on two totally unseen datasets CoSQL and SQA (weakly-supervised SCP without SQL labels) to show that the benefits of the proposed pre-training method is general.\\n\\n**Improvements of SCoRe over the four CSP tasks are substantial**\\n\\nWe agree that the increase in percentage is subjective, but we can still find some objective evidence by looking at prior improvements on these competitive tasks.\\n1. As you can see on [MWoZ public leaderboard](https://github.com/budzianowski/multiwoz#belief-tracking), a significant improvement in SOTA as defined by most recently published works is about 1.5%. SCoRe improves 2.1% over BERT and 1.8% over the very recent prior SOTA.\\n2. On [SParC leaderboard](https://yale-lily.github.io/sparc), a significant improvement is about 2%, and SCoRe outperforms prior SOTA by 3.7% in QM and 4.8% in IM. Similarly, on [CoSQL leaderboard](https://yale-lily.github.io/cosql), a significant improvement is also 2%, and SCoRe outperforms prior SOTA by 4.8% in QM and 4.2% in IM.\\n\\n**Other concerns raised by R5 (that has been addressed has since been deleted from the edited review after the discussion)**\\n1. ToD-BERT: we clarified that ToD Bert does in fact use a response contrastive loss, not just MLM. We added a comparison against ToD Bert in Appendix E.\\n2. 
Ablation study: we pointed to several ablation studies and added more as per the discussion (Section 4 and Appendix E).\\n3. Pre-training dataset: we clarified that we do not use the labeled data from any of the datasets for pre-training.\\n4. We addressed other minor issues (see list of paper edits for details).\"}",
"{\"title\": \"Thanks for your response\", \"comment\": \"Thanks for your response! We are working on the experiments you suggested and try to get at least some of them done by the rebuttal deadline.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks again for your comments.\\n\\n**1. Result interpretation**\\nWe are afraid we do not agree with how the reviewer interpreted the results we shared in the comments. The comments mentioned two methods:\\n- (a) A very large LM finetuned on text-to-SQL (T5-3B, ~ 3000M parameters)\\n- (b) A custom model (RAT-SQL) + BERT-large finetuned on text-to-sql task (~ 500M parameters).\\n\\nBoth (a) and (b) have similar performance (within 0.3 points) as reported in Shaw et al. We use (b) as our baseline and show that replacing BERT/RoBERTa with SCoRe can result in 5.8% improvement on SParC, 2.7% improvement on CoSQL and 4.9% on SQA. \\nWhile we do not directly compare our approach to T5. We show that our approach >> (b) and (a) ~= (b). As such, *we think the statement in the review \\u201cLM trained on text-only can achieve similar or better performance in CSP\\u201d is only justified when comparing T5-3B to BERT* ***but not justified when comparing T5-3B to the proposed model in this paper SCoRe***.\\n\\nWhether going to even larger models (e.g.10s or 100s billion parameters) may erase any gain that a method like SCoRe may achieve is an open research question that we do not have an answer for now. This is also a question that is relevant to pretty much every other NLP task out there. \\n\\n[Another recent EMNLP\\u201920 paper](https://arxiv.org/pdf/2002.08910.pdf), from a subset of the authors of T5, provides more evidence that larger LM is not sufficient for the task of open domain question answering. Their findings are:\\n- An 11B parameter model (T5) underperforms SOTA models (SOTA is a custom model +BERT) on three datasets (Natural Questions: 41.5 vs. 32.8, WebQuestions: 42.4 vs. 42.8, TriviaQA: 57.9 vs. 42.9)\\n- The authors ***add a second-stage of pre-training using a QA specific objective*** (salient span masking) to T5-11B and the results improve but still not SOTA (Natural Questions: 35.2, WebQuestions: 42.8, TriviaQA: 51.9). This further supports our hypothesis that adding a \\u201csecond stage of pre-training to induce inductive bias beyond general LM objectives is useful for many tasks\\u201d. \\n\\n2.Overall, ***we agree that exploring if larger LMs pre-trained on text only can perform well in CSP or dialog tasks is a very interesting question***. This is part of an even bigger question on the role of custom models that aim to build innate priors into the models for solving NLP tasks vs. fully relying on one large pre-trained LM. **This is a heavily debated topic and the NLP community is far from reaching any conclusive outcome. We believe there is value in pursuing both directions and being on one side of the debate shouldn\\u2019t be grounds for dismissing work on the other side**. \\n\\nWe agree with the reviewer that we have shown the benefits of our approach over LMs in the size of 100s M of parameters (BERT/RoBERTa) but did not show their benefit for even larger models with billions of parameters (e.g. T5). We chose to use BERT/RoBERTa because: (1) it is used by all SOTA methods over the 4 tasks we experimented with and (2) evidence from recent work (e.g. Shaw et al.) suggests that much larger models do not outperform custom models + BERT. While we believe the approach will be valuable even for extremely larger models (e.g. 
T5), we acknowledge that this is an empirical question for future work.\\n\\n3.We would also like to point out that we believe that **accuracy is not the only measure, model size matters and is of interest to the research community**. Even if a custom task-specific model (e.g., RAT-SQL+BERT) achieves the same accuracy as much larger LMs (e.g., T5-3B, or T5-11B), the smaller models present significant economical and environmental benefits and will ensure the advances are available to a much wider audience. Using the smaller model will be the clear (and often only) choice available to many companies and universities that cannot afford to use much larger models (e.g. T5) for deployment or research. For practical reasons, and even if a company can afford the more expensive model, they would also favor a smaller model that is as performant to reduce their deployment costs or to serve the model in a resource-constrained setting (e.g. on a mobile device). This is a very actively discussed topic, in NLP and other applications of ML.\\n\\nRegarding the comment on compression, while this is a very promising research direction, compression usually comes at a cost of performance loss. Additionally, compression is not only applied to extremely large models (e.g. T5) but can also be applied to large models like Bert/SCoRe.\\n\\n4.**Fairness of pre-training datasets and comparisons**: That is why we also tested the effectiveness of SCoRe on two totally unseen datasets CoSQL and SQA (weakly-supervised SCP without SQL labels). The improvements of SCoRe on CoSQL and SQA show that the proposed pre-training method is general even when pre-training data is only synthesized by a grammar induced by SParC examples.\"}",
"{\"title\": \"Response to response\", \"comment\": \"I appreciate the response. I think though that the comparison I suggested against data concatenation or multi-task training is something needed. Sure the amount of training data used is small, but could still improve the results but differences obtained by adding SCoRe are not huge either.\"}",
"{\"title\": \"Re: Comments\", \"comment\": \"***Do existing LMs (BART, T5, GPT-2) outperform custom models + BERT on CSP tasks?***\\n\\n1) Thanks for reporting these numbers, this actually confirms the point I was making at the beginning in my review. Larger LMs, e.g. T5, which are pre-trained with ***text only***, not even related to CSP or dialogue, can perform well in this task. And \\\"cannot be used on usual GPUs\\\" is actually not true (https://huggingface.co/t5-3b), for the pre-trained of SCoRe you have been using 8 V100s, with a little engineering T5 can be run in probably two V100, with probably very few finetuning steps needed. Anyhow, the main point is that: independently by the number of parameters, which for instance has never been mentioned in the paper, LM on text-only can achieve similar or better performance in CSP. Especially, given that one of the main claims/motivation of the paper is: \\n\\\"However, existing pre-trained LMs that use language modelling training objectives over free-form text have limited ability to represent natural language references to contextual structural data\\\"\\nWhy shall we prefer a custom architecture or very complex custom pre-training strategy over LM pre-training? notice number of the parameter can be reduced with compression techniques for example, and again the paper never mentions model size. \\n\\n2) I am a bit confused by the table, BART with (No DB used) seems to perform very well. I am wondering if is there a result with BART + DB used.\\n\\n3) From my understanding, ToD-BERT is a classification approach for DST, thus, of course, GPT doesn't perform well. And in Simple ToD, the GPT is trained to generate all the Dialogue-State, so it needs also to learn the slot names. Anyhow, also here, the size of the model matters if we talk about SOTA, and in GPT, the larger the better. \\n\\n***Are our improvements marginal?***\\nThanks for clarifying, the increase in percentage is subjective, and since we are out for subjective comments, I believe that for a much simpler pre-training such as MLM, or a general pre-trained model, the reported improvement is marginal. But as I mentioned this is subjective, depending on what is the experimental setting. \\n\\na) fairness of pre-training datasets and comparisons: synthetic data generation is tricky because, yes you use only 500 samples or the dev set only, but there is a huge human bias in the construction of these datasets, which probably read way more than 500 samples, or even if you read 500, can pretty much guess the distribution of the data, especially for a grammar-based generation as SQL. \\n\\n***Should pre-training be done over only largely available datasets from real data samples?***\\nThanks for this comment, I mostly agree. \\n\\n***Contribution***\\nThanks for your summary, I would suggest to include this more clearly in the paper. \\n\\n\\nThanks again for your comments. I believe we mostly clarify the doubts. \\nI will update my score and my review accordingly, and let the AC and other reviewers decide. I suggest making a new comment summarizing this discussion, for facilitating the AC job and having your paper fairly judge.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks a lot for your clarification and comments!\\n\\n**Do existing LMs (BART, T5, GPT-2) outperform custom models + BERT on CSP tasks?**\\n\\n***Mostly No, based on our experiments and other published results.*** We cannot find any published results on SParC/CoSQL using BART/T5. However, we have found some evidence on Spider. Spider is the single-turn version of SParC and CoSQL. Among >70 submissions, only 3 or 4 of them use BART/T5 (they are not SOTA) and most use BERT/RoBERTa. If BART or T5 can simply be applied to CSP tasks, it is very likely that they already SOTA and would be used more often. Below, we compare T5/BART with RAT-SQL+BERT-Large, which is the same SOTA base system for SParC and CoSQL in our paper.\\n\\n(1) T5. From a recent Google paper [Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both?](https://arxiv.org/abs/2010.12725), in Table 4 they applied T5 as seq2seq to Spider:\\n\\n| Model(parameters) | Spider |\\n|---|---|\\n|T5-Base (220M)| 57.1|\\n| T5-3B (3000M)| 70.0|\\n| RAT-SQL + BERT-Large (~500M) | 69.7|\\n\\nRAT-SQL+BERT-Large outperforms T5-Base by 12.6. ***T5-3B improves only 0.3, but it is 6 times larger*** and cannot be used on usual GPUs (Google can use TPU), so it is prohibitively expensive for us. SCoRe is significantly smaller than T5-3B and hence easier, cheaper, and faster to finetune and deploy.\\n\\n(2) BART. During the beginning of our project, we have performed experiments to compare BERT, BART encoder, and BART encoder+decoder as seq2seq. ***We found BART cannot outperform custom models + BERT***:\\n\\n| Model|Spider|\\n|-|-|\\n|RAT-SQL + BERT|69.7|\\n|RAT-SQL + BART encoder|67.8|\\n|BART encoder + decoder (406M, as a seq2seq task)|62.4|\\n\\nMoreover, from Table 2 and 3 in [SMBOP: Semi-autoregressive Bottom-up Semantic Parsing](https://arxiv.org/abs/2010.12412), BART didn't outperform BERT:\\n\\n|Model|Spider|\\n|-|-|\\n|RAT-SQL + BART-Large (No DB used)|66.0|\\n|RAT-SQL + BERT-Large (DB used)|69.7|\\n|RYANSQL + BERT (No DB used)|60.6|\\n|SMBOP + BART-Large (No DB used)|60.5|\\n\\n(3) GPT-2. From Table 5 in ToD-BERT and Table 1 in SimpleToD, ***GPT-2 does not outperform BERT on MWoZ***:\\n\\n|Model| MWoZ|\\n|-|-|\\n|GPT-2 (ToD-BERT)|46.2|\\n|BERT-Base (ToD-BERT)|45.6|\\n|GPT-2 (SimpleToD)|56.5|\\n|TripPy BERT|58.4|\\n\\n**Are our improvements marginal?**\\n\\n***The improvements of our CCS+TCS without MLM are actually very significant compared to MLM only.*** The best results are achieved using CCS+TCS without MLM on SParC, CoSQL, and SQA. We believe the improvements of ~2-6% on these competitive tasks are not marginal. From Table 6,\\n1. CCS+TCS largely outperforms MLM only (+5.5% on SParC, +1.7% on CoSQL, and +3.4% on SQA)\\n2. Adding MLM to CCS+TCS hurts the performance (-3.9% on SParC, -0.3% on CoSQL, and -4.4% on SQA)\\n3. Using only MLM (SCoRe MLM only vs. RoBERTa) helps a bit (<~1%) compared to RoBERTa\\n\\n\\n**Fairness of pre-training datasets and comparisons**\\n\\nAs mentioned above, the best results are achieved by pre-training SCoRE on ***only synthesized data (using CCS+TCS) without any natural questions (using MLM)*** on SParC, CoSQL, and SQA. We do NOT highlight MLM as our contribution. Actually, we show that adding MLM to TCS+CCS hurts. To clarify our usage of datasets,\\n1. CCS+TCS doesn't use any label data except only ~500 SParC examples to induce synthetic data grammars (Section 2.3).\\n2. 
NO CoSQL or SQA data are seen in our pre-training, and SCoRe with CCS+TCS still outperforms RoBERTa on CoSQL (+2.7%) and SQA (+4.9%).\\n3. For MWoZ, Campagna et al. (2020) use only the *dev* examples to induce the grammar. \\n\\n\\n**Should pre-training be done over only largely available datasets from real data samples?**\\n\\n***The difficulty is that CSP data annotation is VERY expensive***. Semantic parsing tasks are different from text-to-text generation because the model also has to encode other inputs (such as structural database schema and table content) and then decode formal programs (such as SQL). Therefore, using larger models (BART, T5, GPT-2) as seq2seq models for semantic parsing is not a trivial task. We agree that ideally pre-training for CSP should be done over large real CSP datasets. However, CSP annotation requires experts to annotate formal programs such as SQL. Even by combining existing CSP datasets, the size is still limited.\\n\\n***As our contribution, we show how to perform effective pre-training on synthesized data for many CSP tasks*** with conversational and compositional questions. In fact, while data synthesis has been widely applied ([Berant and Liang, 2014], [Wang et al., 2015], [Jia and Liang, 2016], [Andreas, 2020]), how to use it for pre-training is still unclear. We demonstrate the effectiveness of our pre-training objectives (TCS+CCS) over multiple representative CSP tasks, including fully- and weakly-supervised ones (dialog state tracking, context-dependent semantic parsing, SQL-grounded state tracking, sequential question answering).\"}",
"{\"title\": \"Response and Clarification\", \"comment\": \"Thanks for your response, let me clarify my review.\\n\\n***Comparisons with other existing large language models***\\nLet some reason why comparing with different and larger LM is important\\n(1) yes, you can apply this method to any kind of LM, after all, is a pre-training strategy. However, the point here is that if you use large and more powerful LMs (e.g., BART T5 etc.) the advantage of using CCS and TCS may become more and more marginal, and thus a general LM is still preferable over a custom model for this task, which also requires dataset-specific (collected specifically for the task) and synthetic data.\\n(2) The concern I am raising is the effectiveness of the proposed pre-training over different kind of pretraining and larger models, not about SOTA models. Since you are proposing a pre-training strategy, the comparison is among pre-training strategy, and since larger and large LM, especially for Seq2Seq ask, works very well, it is important in my perspective to verify that the give pre-training is really effective. \\n\\n(1 from the list). I mean (1) BART (cast the task as a seq2seq generation task and directly use BART as the task model).\", \"r\": \"\\\"But they cannot directly show the effectiveness of our proposed pre-training strategies\\\", I meant running BART as a baseline even without your pre-training strategy, to compare different pre-training strategies.\\n(2 from the list) That's nice, thanks. \\n(3 from the list) I apologies, I did not notice Table 6, thank for point it out. As we can see also here, the adding CCS and TCS lead to marginal improvements, especially considering the amount of annotation needed for CCS and TCS. PS. I suggest to bold the best in the table, to improve readability. \\n\\n***Fairness of pre-training datasets and comparisons***\\n(1) As you mentioned, MLM sees samples from the evaluated datasets, this will bias the model over these samples, even if it is just the text. For instance, it has at least seen all the entities and can learn a good representation of those. This would not bias too much the model, but it is important to show results for a completely unseen dataset. After the model is released, it will be fine-tuned to other tasks, what are the expected performance? I mean this would mean simply to finetune the proposed model to another dataset, for example, SGD (Rastogi et.al. 2019).\\n\\n(2) thanks, but this is partially not true. Generating synthetic data as in Campagna et al. (2020) uses the inductive bias from the dataset itself (e.g., MWoZ) to generate the data. Else, the synthetic data would be completely random. \\n\\n(3) and indeed, it doesn't perform as well as other baselines\\n\\n(4) I absolutely agree with your statement, but I don't get the novelty of showing that MLM works in CSP. And again, I am not convinced that pre-training model for seq2seq problems (e.g. BART, T5 or even GPT2-XL) cannot work well in these tasks.\\n\\n\\nIn general, two points: \\n- I think pre-training, by definition, should be done over largely available datasets from real data samples. Otherwise, the resulting model will be helpful only in particular domains and problems. \\n- The baselines, although SOTA, are not relevant to the claim, and to it is not convincing a so complex pre-training (CLS and TCS) vs over simple MLM or even MLE as in GPT. 
BERT and RoBERTa are good at classification tasks; CSP is more of a generation task, in my understanding.\\n\\n\\nI hope my comments do not sound rude or aggressive; they are not meant to be. I like the CSP approach to ToDs, and I am happy to see such progress in this field. \\n\\nThanks again for your response; I hope my comments can engage in more discussion.\\n\\nLooking forward to hearing from you.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for your feedback!\\n\\n**Comparisons with other existing large language models**\", \"we_think_our_current_comparison_in_the_paper_is_fair_and_reasonable_for_the_following_reasons\": \"(1) Our pre-training approach can be applied to any existing LMs including BART and we demonstrate that it improves the performance on CSP tasks when using BERT/RoBERTa as our base. (2) The reason why we choose BERT/RoBERTa as our base LM is because almost all of the current SOTA models on CSP tasks actually use them. We are not aware of any published results that use BART, T5, or other large LMs for conversational text-to-SQL.\\n1. Regarding your suggestion on comparison with BART, etc., could you please clarify whether you mean comparing base systems (e.g. RAT-SQL) + BERT/SCoRe with (1) BART (cast the task as a seq2seq generation task and directly use BART as the task model), or (2) with base systems (e.g. RAT-SQL) + BART encoder? We agree that both comparisons are interesting to have. But they cannot directly show the effectiveness of our proposed pre-training strategies (without the same base LM and base systems, and involving other unrelated factors). In particular, the first comparison is more like comparing BART vs. Downstream task-specific models (e.g. RAT-SQL). \\n2. ToD-BERT actually uses both MLM and response contrastive loss, not just MLM. For this comparison, we will try to add the experimental results of RAT-SQL + ToD-BERT vs. RAT-SQL + SCoRe on SParC and Trippy + ToD-BERT vs. Trippy + SCoRe on MWoZ before the rebuttal ends.\\n3. We already provide results of SCoRe pre-trained using only MLM (without the CCS and TCS) on all four tasks in Table 6.\\n\\n**Fairness of pre-training datasets and comparisons**\\n\\nThanks for this point. We believe it is unwarranted, for four reasons:\\n1. Most importantly, we only pretrain with MLM on natural data, and with CCS+TCS only on synthetic data (Eq 5). We don\\u2019t actually use any labeled data from the same datasets as the evaluation for training. The only exception is using ~500 examples from SParC to manually induce synthetic data grammars for CCS and TCS (Section 2.3).\\n2. Table 6 shows an experiment where only CCS and TCS are used, thus no natural data from downstream datasets are leveraged at all. SCoRe still achieves a significant improvement.\\n3. The SQA dataset is entirely separate as it\\u2019s weakly supervised (no SQL labels) yet we pretrain on synthetic text-to-SQL. No SQA data points are seen in any pretraining steps.\\n4. Finally, our comparison with BERT/RoBERTa is fair because it is an important research problem to adapt pretrained language models using in-domain task-related data (e.g., Don\\u2019t Stop Pretraining: Adapt Language Models to Domains and Tasks). In Table 6 we did have a comparison with RoBERTa/BERT using MLM-only pretraining, which shows the benefit of SCoRE on four datasets.\\n\\n**Ablation Study**\\n\\nWe do have an ablation study. In Table 6, we show the effectiveness of three different independent objectives. More discussions are available in the paragraph to answer the research question \\\"What is the effect of each pre-training objective?\\\". In particular, we found SCoRE outperforms RoBERTa/BERT + MLM on additional in-domain data.\\n\\n**Minor-Cons**\\n\\nThanks for catching these! 
We will submit a revision before the rebuttal deadline, changing the \"SCoRe\" function to \"RoBERTa\", correcting the typo in Eq 1, and showing more examples of CCS and TCS.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for your feedback!\\n\\n**Usefulness of TCS objective**\\n\\nThanks for the suggestion! Yes, we'll do that. Also, you can get secondary evidence for it by comparing TCS+CCS with CCS only that are already in Table 6 (the performance is improved on all the three tasks especially SQA by adding TCS.). We'll run the TCS-only experiment during the rebuttal period and will update the PDF (likely for one dataset during the rebuttal period and all others for the camera-ready).\\n\\n**Effectiveness of each grammar**\\n\\nTo clarify this suggestion, did you mean template rules or the whole grammar? For each interpretation:\\n1. Ablation by individual template rules. The set of ablations will be very large, hard to interpret and the computational expense will be prohibitive.\\n2. Ablation by grammars. This type of ablation will not be informative. If we don't use the follow-up grammar, we cannot generate follow-up conversations, which (a) reduces the problem to single-turn semantic parsing, and (b) effectively drops CCS+TCS in pre-training entirely. If we don't use context-independent grammar, we cannot generate the first questions in conversations.\\n\\n**Comparison with Herzig et al. (2020) on SQA**\\n\\nHerzig et al. (2020) outperform Wang et al. (2019) on SQA because (1) they don't generate logic forms but answer questions on tables by selecting table cells and applying aggregation operators. Wang et al. (2019) generate latent programs to answer questions yet the grammar of the latent program can only cover a subset of questions. (2) They reduce the search space by reusing the answer to the previous question to answer the current question. However, we showed that SCoRE can improve Wang et al. (2019) with RoBERTa and achieve a performance of 65.4% very close to SOTA of 67.2%. We will still try to improve our performance, but the current experimental results have already shown the effectiveness of SCoRE.\\nFurthermore, we also believe that generating symbolic programs has many practical advantages (even at a cost of ~1% accuracy drop), such as showing interpretable reasoning steps, enabling formal reasoning, and operationalization without GPU/TPU accelerators. We will add this to our rebuttal revision.\\n\\n**Few-shot learning with 10% training data on SQA**\\n\\nIn this experiment, we want to answer the research question of \\\"Can SCoRE deliver more value when in-domain data is limited (e.g., in a low-resource setting)?\\\" This is similar to experiments done in ToD-BERT paper and other investigations of LMs as few-shot learners. We choose SQA for this experiment because the SQA annotation is most different from the synthetic text-to-SQL dataset we use for pretraining SCoRE. In Table 4, the setting of 10% SQA training data demonstrates that SCoRE delivers more benefits compared to the RoBERTa baseline under a low-resource setting where only a small amount of task-specific annotations are available. When all training data is used, SCoRE of 65.4% outperforms RoBERTa of 62.8% by 2.6% QM accuracy, and when only 10% training data is available to both models, the improvement (57.1% - 53.3% = 3.8%) is even larger. We will add this analysis as a separate paragraph in Section 4 in our rebuttal revision.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for your feedback!\\n\\n**Using labeled data in pre-training method**\\n\\nThanks a lot for your thoughtful comments and suggestions. We would like to first clarify that our pre-training approach *doesn\\u2019t use all labeled training data in the four datasets but only a small set of SParC dataset* (~500 examples, to manually induce the data synthesis grammar). Then we apply the induced grammar to synthesize data for pretraining using CCS and TCS to learn the alignment of utterances and the queries. Therefore, \\n1. In our SParC experiments, both SCoRe (only the ~500 examples) and baseline models only use SParC training data;\\n2. For CoSQL and SQA experiments, SCoRe pre-training doesn\\u2019t have access to any CoSQL and SQA labeled data;\\n3. To show our pre-training method is not tied with a grammar induced from dataset-specific examples, as shown in Table 2 and 4, SCoRe could still improve the performance on CoSQL and SQA even though its data synthesis grammar is only induced from SParC.\\n\\nWe will make this clearer in our rebuttal revision. Please also check out our response to a similar question of Reviewer 5.\\n\\nAdditional comparisons against data concatenation or multi-task training (e.g., training a single model on all concatenated labeled data in the four tasks) would be very interesting, yet since we only use a small amount of labeled SParC data in our SCoRE pretraining, using an equivalent amount of data is unlikely to significantly help.\\n\\n\\n**Technical Clarity**\\n\\nThanks for this suggestion! We will update equations 2 and 3 (e.g.) by including the formal representation q in our rebuttal revision.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for your feedback! We will update our paper in our rebuttal revision to make a more compelling story with comparisons with other contextualization methods as follows. We are happy to incorporate any future comments.\\n\\n**Our motivation and story**\\n\\nWe will work on the introduction and related work sections to clarify the motivation and better position the work w.r.t other contextualization methods. To summarize our motivation and story, we observe that the shared key challenge in several different CSP tasks (dialog state tracking, context-dependent semantic parsing, SQL-grounded state tracking, sequential question answering) is how to jointly represent the natural language utterances and underlying structured ontology while taking into consideration the multi-turn dynamics of the dialog. Questions in these CSP tasks are more compositional than other text since they can be mapped into formal representations, and therefore we would like to design a more effective pre-training approach to inject this kind of compositional bias in LMs. Unlike other prior work on data synthesis using an induced grammar to enlarge data size, we propose to pre-train LMs on synthesized data and design CSP related pre-training objectives (CCS and TCS) that creates representations to improve several CSP tasks.\\n\\n**Comparisons with related work on other contextualization methods**\", \"we_will_update_the_related_work_section_with_an_improved_discussion_on_other_contextualization_methods_and_highlighted_our_contributions_compared_to_previous_work\": \"1. While the previous work has achieved significant progress in different datasets separately, to the best of our knowledge, we are the first to study four different CSP tasks together (sequential text-to-SQL, conversational text-to-SQL, dialog state tracking, and weakly-supervised sequential question answering) by addressing the shared key challenge of learning representations in pre-trained language models that capture the alignment between the dialogue flow and the structural context.\\n2. Our approach is different from previous work because we address the challenge of conversational semantic parsing tasks by learning pre-trained representation for both the multi-turn dynamics of the dialog and the relation between the unstructured language utterance and the structured ontology.\\n3. We introduce a new data synthesize procedure for conversational text-to-SQL dialogues and use it in a different way by pretraining language models to induce better representations for many CSP tasks.\\n4. Our pre-training approach is much more data-efficient than prior LM pre-training work and saves a lot of time and computing resources. Our pre-training step can be done within only one day using 8 V100 GPUs.\\n\\n**Clarification on TCS**\\n\\nTurn Contextual Switch encodes changes between different turns of user queries (the system response is not involved here) since we assume that most turn contextual shifts are from the user.\\n\\n**Paper Format**\\n\\nThanks for pointing this out! Table 1 is pointed in multiple places, so it was hard to move it closer to all the references. We will separate it from the figure below and put it closer to the position where the first reference occurs (like right before Section 2) in the rebuttal version.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for your feedback!\\n\\n**Data synthesis steps**\\n\\nThe reason why we synthesized 435k dialogues is because there are about 400k tables from multiple online resources after filtering and cleaning and then we synthesize about one dialog for each table. We will add more details on data synthesis to the rebuttal version. In Appendix, Table 11 shows an example of synthesized data and corresponding templates in our grammars.\\n\\n**Generalizability concern due to the synthesized data**\\n\\nThanks for raising this point! We also considered this problem when we were working on this project. To show our pre-training method *is actually general* (not tied with a grammar induced from a specific data), we only study a very small set of SParC dataset in order to induce the grammar generation rules. SCoRe is pre-trained on the data synthesized by the grammar (induced from a small subset of SParC), and could still improve the performance on CoSQL and SQA. Also, CSP questions are more compositional than other text since they can be mapped into some formal representation. Data synthesis using induced grammar is widely used to enlarge data size.\\n\\n**SOTA base systems vs. \\u201cstronger\\u201d general pre-trained models**\\n\\nWe agree that these kinds of experiments would be interesting to have. However, the main focus of our paper is to show the effectiveness of the proposed CSP pre-training method for a practical-sized pretrained LM. We are not aware of any published results using large LMs such as BART/T5 for conversational text-to-SQL, as they are challenging to fine-tune and deploy and require access to computational resources that are not readily available. Please also check out our response to a similar question by Reviewer 5.\\n\\nWe believe that no matter what base systems or language models will be used, the pre-training approach is likely to improve their performance on CSP tasks (either incorporated with base systems or directly pre-train the language models), but this will require additional experiments to verify. In Section 4, we show the *effectiveness of SCoRe improving multiple independent base systems*.\\n\\n**Areas for improvement**\\n\\nWe will consider adding more studies on general seq2seq models for small single-turn datasets and sentence classification tasks, yet SCoRe is designed for CSP tasks.\\n\\n\\n**Typos and Clarity**\\n\\nThanks for catching these! In our rebuttal revision, we will add citations in Table 3, correct statistics about MultiWOZ 2.1, add more description of SQA, add more details on pre-training steps.\"}",
"{\"title\": \"Review (Edited after comments)\", \"review\": [\"[Summary]\", \"In this paper, the authors proposed a pre-training strategy for Conversational Semantic Parsing (CSP) tasks. The pre-training is run on top of any existing LM (i.e., in this work RoBERTA has been used), and uses three additional loss functions to inject the CSP inductive bias into the LM: Column Contextual Semantics (CCS), Turn Contextual Switch (TCS) and Masked Language Modeling (MLM). Moreover, the authors proposed to use synthetically generated data in the pretraining. The results are presented in four well-know datasets for CSP: SPARC, COSQL, MWOZ, and SQA.\", \"[Pros]\", \"the proposed pre-training strategy is novel\", \"the performance of the proposed pre-training strategy is effective\", \"[Cons-Edited]\", \"the paper claims that \\\"However, existing pre-trained LMs that use language modelling training objectives over free-form text have limited ability to represent natural language references to contextual structural data.\\\", there the authors have not compared the proposed strategy with large pre-trained LMs, especially for Seq2Seq training (e.g., BART, T5, GPT-2) and larger versions of these models. Independently by the number of parameters, which for instance has never been mentioned in the paper, LM trained on text-only can achieve similar or better performance in CSP (Check Author Response Comments), without the need of task-specific pre-trained (i.e. CCS and TCS).\", \"although few samples, only 500 samples or the dev set only, are used for generating the synthetic data, some of the datasets are used for the pre-training strategy. Moreover, there is a substantial human bias in the construction of the synthetic data, for instance, to create these data probably a human would need to read way more than 500 samples, or even with 500 samples, a human can pretty much guess the distribution of the data, especially for a grammar-based generation as SQL\", \"the comparison made in the paper are over existing model instead over existing pre-training strategy, or larger models.\", \"[Minor-Cons]\", \"In Eq. (1) the \\\"SCORE\\\" function is actually a RoBERTA encoder, if I understood correctly, else, this function is not defined anywhere. Why not using RoBERTA or LM instead?\", \"In Eq. (1) there is a typo I guess $v_t$ should be $h_t$\", \"The explanation of the two pre-training loss CCS and TCS is very hard to understand, and Figure 2 doesn't help. I suggest showing more examples.\", \"[Reason to reject] The claim of the paper is not supported for lack of comparisons with different, larger and using different pre-training strategies, LMs. Moreover, the community should be encouraged to create as general as possible pre-trained models, instead of task-specific ones, and especially pre-trained models that use real and unlabeled data.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"TCS needs further investigation\", \"review\": \"The paper proposes to pretrain contextual semantic parsing models on synthesized data with two new training objectives: Column Contextual Semantics (CCS) and Turn Contextual Switch (TCS). The CCS objective predicts correct database operations based on corresponding columns in tables. The TCS aims to predict the labels of conversational turn switch patterns categorized based on differences in meaning representations between dialogue turns. The synthetic data is generated by apply two utterance-SQL generation grammars. They show that the new approach significantly outperforms te baselines on Sparc, CoSQL, and MultiWOZ.\\n \\nMy decision is just between marginally accepted and marginally rejected. I like the idea of pretraining with CCS and the empirical results show that the proposed approach outperforms all baseline systems on three out of four benchmark datasets. However, my major concern is the usefulness of TCS.\", \"pros\": \"1. The paper finds out that CCS and synthetic data works in pre-training, despite the prior work finds synthetic data is not useful in a standard supervised setting. \\n\\n2. Overall, the paper is well written. I can easily follow the technical details of the proposed methods. \\n\\n3. This paper provides comprehensive experiments to justify the key contributions of this paper. The ablation study helps understand which technique works on the selected datasets.\", \"cons\": \"1. The usefulness of TCS objective needs further justification. I suggest adding experiments of TCS only to Table 6. The TCS objective needs further investigation to understand in which cases it works.\\n\\n2. It is desirable to have an ablation study to investigate how effective is each grammar (with/without follow-up context-free grammar) for pretraining.\", \"questions\": \"1. Could you explain why the proposed method works worse than herzig et al. (2020b) on SOA?\\n \\n2. Why does the system compare with the baselines also trained on 10\\\\% training data of SOA?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3 (Edited post author response period)\", \"review\": \"This paper proposes a pre-training approach to improve the performance in conversational semantic parsing. The idea is to use the training data to learn how to generate contextual representations by combining the now commonly used masked language modelling pretraining objective (MLM) with two additional objectives, named column contextual semantics and turn contextual switch. Furthermore, additional synthetic data was generated.\\n\\nI found the paper easy to follow and the results convincing on the whole. 4 datasets and different parsers were used and in each case the proposed method improved the results, and in 3 out of 4 a new SOTA was reached. \\n\\nHowever, it is important to note that the objective propose needs labeled training data in order to learn the alignment between the utterances and the queries. Thus they allow us to exploit labeled data for other versions of the task, rather than exploit unlabelled data for the semantic parsing. Therefore, while I like the paper, I think a key comparison missing is to compare against training existing models using combinations of the datasets considered, to allow the models access to the same training data. This could be easy incases where the output is of the same form, e.g. SQL for SPARC and COSQL, or it could be multi-task training to combine it with the datasets fo the other two tasks. As things stand, the approach proposed indeed brings benefits, but it is only compared against methods using no additional labeled data beyond what is built for the task at hand. Having said this, I am not arguing that training data concatenation or multi-task training will work better; but I think such a comparison is needed.\\n\\nBeyond this, the other aspect of the paper that needs to be improved is the technical clarity. While I found the intuitive descriptions of equations 2 and 3 easy to understand, the equations themselves were not. In both cases the formal representation q (e.g. the SQL query) while it is needed to calculate the objective, it is not in the equations. Thus it would be very easy to have it interpreted differently and thus lead to ambiguity and people not being able to reproduce it. This is particularly important as how one does the decomposition of q is likely to matter to the results.\", \"post_author_response\": \"I appreciate that the extra experiment I asked for was conducted and the equations were fixed. While I think Review5 raised some interesting discussion points which should be included in the final version, I still think the paper has merit, even if larger-scale pre-training would have improved the results. Thus I raised my score to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting work, would benefit from improved discusison\", \"review\": \"Overview: the authors propose a new pre-training method for grounding LM representations in both structural/schematic information and also dialogue context. To explore the efficacy of this method, they experiment in 4 different conversational semantic parsing tasks, which are each different enough to demonstrate the usefuleness of their approach.\", \"contribution\": \"modified pre-training method\", \"the_good\": \"My overall impression is that this work makes sense, the paper is clearly written and flows just fine, and that the authors demonstrated the efficacy of their proposed method. The fact that the 4 baselines are different enough makes your claim convincing.\", \"the_bad\": \"I feel that this paper is really lacking a driving motivation and the cohesive story is a bit weak/lacking. For example, I felt the Related Works section was very superficial, rather than contributing. Shouldn't the discussion/conversation be more about contextualization methods? You are not the first to try contextualizing schema elements, for example. I think the paper could be significantly improved if the conversation is shifted towards discussion of contextualization, rather than \\\"CSP is a task. People use pre-training. You can generate some data and use it\\\". You have an interesting story in your hands, but you are not using it! How is your contextualization method different from other attempts? In what ways is it better, in what ways is it worse? That said, I still think the paper has good and interesting work, and I do think it should be accepted, but I would really encourage the authors to consider making some changes on this note because it will make the work much more interesting, thought provoking, and impactful.\", \"small_things\": [\"unless i click on the box to jump to Table 1 on the PDF, it took me some time to find Table 1 when I printed it out. Is there anyway to move this closer to where you point to it, or sepperate it from the figure below?\"], \"clarifying_question\": [\"With regards to encoding the Turn Contextual Switch -- are you encoding the changes from user to agent, or just the changes from user to user, or both? Is it always user to user query changes, because some datasets have no system response? This is the only section in the paper that I found to be lacking a bit, and maybe a sentence or two more could add clarification for me.\"], \"overall\": \"\", \"good_work\": \"-)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well written paper introducing a pre-training objective specifically for semantic parsing.\", \"review\": \"Summary:\\nThis paper introduces a semantic-parsing specific pretraining objective. The authors argue that general pre-training methods such as BERT do not have enough inductive bias for semantic parsing.\\nSince a lot of data doesn\\u2019t exist for semantic parsing datasets, the authors use synthetic data to better adapt to the ontology of the task. They use this pre-trained model on 4 semantic parsing tasks and show that their pre-training is indeed helping the SOTA models by establishing new SOTA for 3 of the 4 datasets.\", \"reasons_for_score\": \"This is a clearly written paper with the objective of making conversational semantic parsing better. The authors present a reasonable idea, do extensive experiments to show its merit on variety of tasks. The pre-trained checkpoints released by this paper will be useful to the community as a whole.\", \"pros\": \"1. 2 new objective functions (TCS and CCS) specific to the task of semantic parsing. Authors clearly show that both those objectives help downstream tasks\\n2. The paper was easy to follow with clear objectives.\\n3. Strong performance of their pre-trained model which improves over previous SOTA\\n4. Well done ablation studies and analysis.\", \"cons\": \"1. Data synthesis steps are ad-hoc. Why were only 435k dialogues synthesized? It would be great to have a more detailed study of how the synthetic data looks, what is the effect on model performance.\\n2. Due to synthetic data the method is not very general. Training larger models with more data will be non trivial to do.\\n3. While the authors chose SOTA baselines for their task, stronger general pre-trained models (BART, T5 etc) might beat this method easily.\\n\\nPlease address and clarify the cons above \\n\\nTypos/Areas for improvement:\\n1. Citations in table 3\\n2. Mistake in Table 1 Multiwoz2.1 is multi domain.\\n3. The number of utterances are not clear: 3.1 -> vanilla multiwoz2.1 does not have over 100k task oriented dialogues\\n4. 3.1 SQA description not clear.\\n5. In section 2.2 it would be great to have a clearer description of the steps for pre-training, it's hard to tease out the exact steps taken\\n6. For table 7, what happens when you use the synthetic tuning data to first train and then fine-tune on the original task.\\n7. For table 8: More studies on the generalizability would be good\\n 1. If you simply drop in the network to something small like geo-query, does it help a general seq2seq model?\\n 2. Beyond semantic parsing tasks - does this help in sentence classification too?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
C4-QQ1EHNcI | Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference | [
"Erik Daxberger",
"Eric Nalisnick",
"James Allingham",
"Javier Antoran",
"José Miguel Hernández-Lobato"
] | The Bayesian paradigm has the potential to solve some of the core issues in modern deep learning, such as poor calibration, data inefficiency, and catastrophic forgetting. However, scaling Bayesian inference to the high-dimensional parameter spaces of deep neural networks requires restrictive approximations. In this paper, we propose performing inference over only a small subset of the model parameters while keeping all others as point estimates. This enables us to use expressive posterior approximations that would otherwise be intractable for the full model. In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork. We propose a subnetwork selection procedure which aims to maximally preserve posterior uncertainty. We empirically demonstrate the effectiveness of our approach compared to point-estimated networks and methods that use less expressive posterior approximations over the full network. | [
"expressive posterior approximations",
"expressive",
"subnetwork inference expressive",
"subnetwork inference",
"bayesian paradigm",
"potential",
"core issues",
"modern deep learning",
"poor calibration"
] | Reject | https://openreview.net/pdf?id=C4-QQ1EHNcI | https://openreview.net/forum?id=C4-QQ1EHNcI | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"iVwwjAuOa3T",
"wJpLDFMTPCX",
"DLL2ZuzZ21e",
"uOb_FDh4Qhz",
"rqcMhCHK7aw",
"KBMXrxa8J7",
"dnuTnUwRkin",
"s2PYE0iJSN",
"xrYSjH7Nol5",
"sjop_3593Hz",
"f-8Qd2qUUCi",
"gErsWdf0h8c"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040351322,
1606305445496,
1606305276748,
1606305048977,
1606304967240,
1606304637577,
1606304568338,
1606304313485,
1603926164691,
1603855184992,
1603375540140,
1603128928914
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3771/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3771/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3771/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3771/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper propose an approach to efficient Bayesian deep learning by applying Laplace approximations to sub-structures within a larger network architecture. In terms of strengths, scalable approximate Bayesian inference methods for deep learning models are an important and timely topic. The paper includes an extensive set of experiments with promising results.\\n\\nIn terms of issues, the reviewers originally raised many concerns and the authors provided a large update to the paper. However, following that update and the discussion, several concerns remain. First, the reviewers noted that the originally submitted draft made claims about the optimality of the sub-network selection procedure that were incorrect due to the use of a diagonal approximation. The authors subsequently retracted these claims and re-focused on the idea that the subset selection approach is theoretically well-motivated heuristic that performs well empirically. Following the discussion, the reviewers continued to express concerns about the heuristic nature of this procedure. \\n\\nA second point has to do with scalability. The reviewers noted that the authors had only evaluated their approach on small data sets, leaving open the question of how scalable the method is. The authors responded by adding experiments on the same data sets using larger models, which does not squarely address the issue raised. Third, an additional point was raised regarding the lack of control of resource use in the experiments. The authors note that their approach can use more resources when available while many other methods can not. However, some methods including deep ensembles can also expand to use more resources, as can posterior ensembles produced using MCMC methods like SGLD and SGHMC. The authors need to consider quantifying space-performance and time-performance trade-offs in the same units for different approaches to satisfactorily address this issue. While the authors added one set of experiments looking at deep ensembles in isolation, their conclusions that performance saturates for these models at low ensembles sizes seems to be hasty in some cases (e.g., deep ensembles show continued improvement for large corruptions in Figure 5(right) despite the claim by the authors that the models saturate after 15 epochs). \\n\\nIn summary, this appears to be a promising approach. While the authors made significant efforts to correct issues and address questions with the original draft, the majority view of the reviewers following discussion is that this paper requires additional work to more carefully expand on the revised results and to address the heuristic status of the sub-network selection approach.\"}",
"{\"title\": \"Thank you for the feedback!\", \"comment\": \"Thank you very much for your kind words and helpful feedback! We address individual points below:\\n\\n\\n### Scaling to larger datasets\\n\\nYou bring up a good point. Our approach is scalable in the size of the weights as it decouples network size from the dimensionality of the space over which inference is performed. Therefore, we would argue that our validation in terms of scalability is provided by the size of the network we use. **In the appendix (see Figure 8 in Appendix B), we provide additional results on MNIST image classification with ResNet50 (as opposed to ResNet18 in the original version of the paper) to strengthen our claim.** In principle, the cost of subnetwork inference should scale linearly with the number of training samples, as it just requires summing more Jacobian outer products. The memory consumption is constant in the dataset size. **We have adjusted the claims in our paper to prevent confusion regarding this.**\\n\\n### MFVI and other baselines\\n\\n**We added mean-field VI BNNs to our image classification experiments in the revised manuscript.** In particular, we implemented VOGN, a natural gradient version of MFVI that scales well to large networks (https://arxiv.org/abs/1906.02506). We use the hyperparameter settings provided by the authors. We find that VOGN tends to underfit both datasets, although the issue is larger for CIFAR10. Despite this, similarly to Ovadia et. al. (2019), we find MFVI to provide large uncertainties for rotated MNIST digits. Here it performs similarly to subnetwork inference. However, this is not the case for CIFAR10 corruptions, where MFVI fares poorly both in terms of accuracy and uncertainty estimation.\\n\\nNote that last-layer approaches performed poorly in Ovadia et al. (2019), which is why we considered these to be less interesting/important to include. By \\u201cmore recent results from ICML [...] (as referenced in the related work section)\\u201d, we believe you are referring to Dusenberry et al. (2020) and Swiatkowski et al. (2020)? We did not compare to Swiatkowski et al. (2020), as they propose a more efficient parameterization of mean-field Gaussian VI (i.e. an approximation that is less expressive than mean-field), which is not expected to perform better than the mean-field Gaussian methods we assessed (i.e. diagonal Laplace and VOGN). Similarly, Dusenberry et al. (2020) also focus on efficiency, and as a result, their experimental results show that their approach performs consistently worse than deep ensembles. We focus on increased expressivity as opposed to increased efficiency.\\n\\n### NLL, ECE, accuracy tables\\n\\nWe agree that this would be useful, but unfortunately the space constraints prevent us from moving the tables to the main text. They can all be found in Appendix B.\\n\\n### Minor suggestions / typos\\n\\n**Thanks a lot for all these suggestions, which we incorporated into the revised paper!**\"}",
"{\"title\": \"Thank you for the feedback!\", \"comment\": \"We thank the reviewer for their positive comments and constructive suggestions. We address both individual points below:\\n\\n### On the our subnetwork selection strategy\\n\\nWe agree that our phrasing of our subnetwork selection procedure was not the best. **We have revised our claims about subnetwork selection optimality and now simply present our approach as an empirically well-performing subnetwork selection strategy (with theoretical motivation).**\", \"regarding_our_approximations\": \"1) We employ the Laplace approximation as it is a post-hoc method. It allows us to perform post-hoc subnetwork selection.\\n\\n2) The linearized approximation goes hand in hand with the Laplace approximation. The full Hessian is almost always intractable. Instead it is common to resort to the outer product (GGN) approximation. The implied posterior corresponds to that of a linear model and indeed linearisation has been found to perform better for prediction than regular MCMC estimation of the BNN predictive posterior (https://arxiv.org/pdf/1906.11537.pdf).\\n\\n3) The diagonal assumption for subnetwork selection is indeed unrealistic. Due to the large size of NN weight spaces, considering the full covariance matrix is always intractable. One of the main findings of this paper is that it is better to perform a cruder approximation when choosing a subnetwork and then perform full covariance inference over that subnetwork than it is to directly use crude factorised approximations over the whole network (see experiments in Sections 5.1 and 5.3).\\n\\n4) Concerning your point regarding regression: The GGN approximation is indeed only exact for the linearised model in the regression setting. However it is an accurate approximation for generalised linear models that employ non-linear linking functions, like is the case for classification; see the updated Section 4 (step \\u201c1. Point Estimation\\u201d) for more justification and details.\\n\\n**We have adjusted our claims throughout the paper and re-written section 4 in a way that clarifies the points above.** \\n\\n### On the computational requirements of subnetwork inference and comparison to baselines\\n\\nYou are correct in that our approach uses more memory than baselines. **We provide additional experiments, as you suggested, showing our method makes an efficient use of parameters:**\\n\\nSeveral works have shown that performance of deep ensembles saturates after around a dozen ensemble members, i.e. there are diminishing returns (see e.g. https://arxiv.org/pdf/2007.08483.pdf, or https://arxiv.org/abs/2006.08437 ). We evaluate large deep ensembles (of up to 50+ elements) on our MNIST and CIFAR tasks and replicate this finding (see Figure 5 in Appendix B). We find the performance of subnetwork inference is better than a deep ensemble even for large numbers of ensemble members, in particular for high degrees of dataset shift. \\n\\nOn a separate note, with subnetwork inference, we can choose to use as much compute as we have available. This means that, unlike deep ensembles, our method can be scaled down when on a budget and could potentially improve with more capable hardware. Our experiments are limited by our available hardware: We choose a subnetwork size that allows us to comfortably make predictions with resnet18 on a single p100 GPU (16GB). I.e., while our approach is not explicitly aimed towards memory efficiency, we still do not use excessive amounts of memory. 
For severely memory-constrained settings (e.g. embedded systems), other approaches might indeed be more viable, as that is not the focus of our work.\\n\\nWe understand that our comparison might be \\\"fairer\\\" if all methods used the exact same memory and compute resources, but we do not think that this is the most useful comparison, as some methods can significantly improve in performance when more memory/compute is available (i.e. ours), while other baselines cannot benefit from having more resources (e.g. deep ensembles, MAP, Dropout, diagonal Laplace) and quickly saturate in their performance.\\n\\nFinally, this work aims to introduce a first viable method to effectively perform inference over subnetworks. We do not claim that our approach solves every issue, but believe that it is an important first step in a promising direction to make Bayesian deep learning more effective. There are many possibilities for making our method more (memory) efficient, which we believe to be beyond the scope of this work, but are certainly keen to explore in future work.\"}",
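As background for the linearization discussion above: under the linearized Laplace view referenced in these responses, predictions come from a first-order expansion around the MAP weights. A minimal sketch under our own naming assumptions (not the authors' code):

```python
import numpy as np

def linearized_predictive(f_map, J_x, Sigma):
    """Predictive moments of the linearized model
    f(x, w) ~= f(x, w_MAP) + J_x @ (w - w_MAP), with subnetwork weights
    w ~ N(w_MAP, Sigma). The predictive mean is exactly the MAP output
    (linearization leaves it unchanged); uncertainty enters only through
    the Jacobian and the posterior covariance.
    f_map: (O,), J_x: (O, D_s), Sigma: (D_s, D_s)."""
    return f_map, J_x @ Sigma @ J_x.T

# Toy usage with illustrative shapes.
rng = np.random.default_rng(1)
f_map = rng.normal(size=3)
J_x = rng.normal(size=(3, 50))
Sigma = 0.1 * np.eye(50)
mean, cov = linearized_predictive(f_map, J_x, Sigma)  # cov is (3, 3)
```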
"{\"title\": \"Thank you for the feedback! (2/2)\", \"comment\": \"**20**\\n\\nThanks, fixed that!\\n\\n**21**\", \"it_is_simply_classification_error\": \"$\\\\frac{\\\\text{N incorrect}}{Total}$. **We clarify this in the updated manuscript.**\\n\\n**22**\\n\\nimproved \\\"robustness\\\" also appears to be equally unfounded on these grounds.\\nWe believe there may have been some miscommunication with regards to the term \\u201ccalibration\\u201d. We take calibration to refer to a model placing more uncertainty on points on which it is more likely to make wrong predictions (http://www.gatsby.ucl.ac.uk/~balaji/ensembles_nipsbdl16.pdf). We don\\u2019t think that the likelihood obtained by subnetwork inference are just \\u201cmarginally higher\\u201d, but significantly higher. The predictive error is similar, so any improvement in log-likelihood must be due to better uncertainty calibration. **We furthermore provide Brier score and ECE results in the appendix, which are widely used measures for calibration; these also support our claims that our method is well-calibrated, especially for increasing amounts of dataset shift (see Figure 7 in Appendix B).**\"}",
"{\"title\": \"Thank you for the feedback! (1/2)\", \"comment\": \"We thank the reviewer for their thorough feedback and constructive suggestions. We address your individual points below:\\n\\n**1-2**\\n**Agreed; we rephrased and clarified in the updated text.**\\n\\n**3**\", \"that_is_a_good_question\": \"we could indeed take the square of each weight\\u2019s gradients to fill the diagonal of our covariance matrix.\", \"we_did_not_do_this_due_to_additional_implementation_complications\": \"having larger matrices filled with zeros would force us to use sparse matrix optimised operations to retain computational tractability. We do not see this additional complication as necessary considering the poor performance of factorised posterior approximations in neural networks (https://proceedings.neurips.cc/paper/2020/hash/b6dfd41875bc090bd31d0b1740eb5b1b-Abstract.html). In practise, our main focus is on capturing correlation among weights. Indeed, in our experiments we consistently find full covariance subnetwork inference to handily outperform diagonal covariance approaches.\\n\\n**4-6**\\n\\n**We have significantly revised Section 4 for clarity.** In the updated version we focus on a generalised linear model where the MAP setting is found via gradient descent. This matches the setting used in all our experiments. Note that step 2 merely defines a general mask that can be produced by any pruning method; we then later describe how to choose that mask.\\n\\n**7**\\n\\n**We agree. The change has been implemented.**\\n\\n**8**\\n\\nYes, that is correct. We believe this is decently standard nomenclature for predictive models.\\n\\n\\n**9**\\n\\nResults for toy 1d regression are mostly qualitative. As such, it is hard to come up with a rigorous way of selecting hyperparameters. We choose to select the prior precision such that the full covariance approach (optimistic baseline) presented reasonable results and used the same value for all other methods. We first tried a precision of 1 and found the full covariance approach to produce excessively large errorbars (covering the whole plot). We then tried a value of 3 and found the result to look reasonable. **We describe this in our new implementation detail appendix.**\\n\\n**11**\\n\\n**Thanks, Done!**\\n\\n**12**\\n\\nIndeed Kin8nm is not a UCI dataset. Good catch!. **We added dataset descriptions and locations in the implementation detail appendix.**\\n\\n**14**\\n\\nWe are a bit confused by your suggestion. We present test data log-likelihoods to estimate the predictive performance of different methods. It is unclear to us how a prior would come into play here. \\n\\n**15**\\n\\nWe would like to clarify that the experiments in section 5.2 focus on regression tasks. We are not aware of any widespread (within the ML community) uncertainty calibration metrics for regression apart from LL. Indeed, most other paper that use the datasets we consider provide results in terms of LL (https://arxiv.org/abs/1502.05336, https://papers.nips.cc/paper/2017/file/9ef2ed4b7fd2c810847ffa5fa85bce38-Paper.pdf https://arxiv.org/abs/1506.02142 https://arxiv.org/abs/1811.09385 ). The papers you reference only address classification settings. **For our image classification experiments (Sec 5.3), we do provide ECE and Brier score plots in the appendix (see Figure 7 in Appendix B).**\\n\\n**16-18**\\n\\nIn two gap datasets you mention, the predictions from the MAP estimated single hidden layer networks are superior to those from the larger models. 
In linearized NNs, Bayesian inference does not alter the predicted mean. This means that, on the aforementioned datasets, small networks will have more accurate mean predictions when performing subnetwork inference. Gap datasets are specifically designed to make models prone to overfitting, which is how we explain the smaller networks being more accurate. Having said this, even on these datasets, larger models see a larger increase in LL than smaller ones when performing inference over an equal number of weights. This is consistent with the results on our other datasets. Given the popularisation of large NNs in the past few years, and their strong empirical performance on real world data, we consider that it is indeed generally better to use a large model and perform subnetwork inference than to perform full network inference on a small NN. **After reading your comments we acknowledge that this claim may be too aggressive and have relaxed it to: \u201cGiven the same amount of compute, larger models benefit more from subnetwork inference than small ones.\u201d**\\n\\n**19**\\n\\nAgreed! We first performed a grid search from 1e-4 to 1e4 in logarithmic steps, and then a second, finer-grained grid search between the two best performing values. Following Ritter et al. (2018) and Kristiadi et al. (2020), we perform this search after training, using a validation set. Similarly to those previous works, we found tuning the prior precision to empirically improve results. **Full descriptions are in the new implementation detail appendix.**\"}",
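The two-stage prior-precision search described under point 19 can be sketched as follows. Here `val_loglik` stands in for re-fitting the Laplace posterior with a given precision and scoring it on the validation set; its internals are model-specific, and all names are our own illustrative assumptions:

```python
import numpy as np

def tune_prior_precision(val_loglik, num_fine=10):
    """Coarse grid over 1e-4 ... 1e4 in logarithmic steps, then a finer
    (geometric) grid between the two best coarse values."""
    coarse = 10.0 ** np.arange(-4.0, 5.0)
    scores = np.array([val_loglik(p) for p in coarse])
    i, j = np.argsort(scores)[-2:]               # the two best coarse values
    lo, hi = sorted((coarse[i], coarse[j]))
    fine = np.geomspace(lo, hi, num_fine)        # finer search between them
    fine_scores = np.array([val_loglik(p) for p in fine])
    return fine[fine_scores.argmax()]

# Toy usage with a synthetic validation objective peaking near 10**1.5.
best = tune_prior_precision(lambda p: -(np.log10(p) - 1.5) ** 2)
```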
"{\"title\": \"Thank you for the feedback! (2/2)\", \"comment\": \"### Details on our prior precision grid searches\\n\\nWe first performed a grid search from 1e-4 to 1e4 in logarithmic steps, and then a second, finer-grained grid search between the two best performing values. Following Ritter et al. 2018, Kristiadi et al. 2020, we perform this search after training, using a validation set. Similarly to those previous works, we found tuning the prior precision to empirically improve results.\\n\\nWith regards to the values found. We acknowledge that they might seem large. However it is difficult to gain intuition about the implications of prior variance specification in the high-dimensional weight space of a NN. Some possible explanations 1) In very large covariance matrices large jitter values can be needed to ensure positive semidefiniteness. 2) Large overparameterized NNs are underspecified by the data and require strong prior constraints to produce reasonably bounded uncertainty estimates.\\n\\nWe also note that, unlike Ritter et al. 2018, who scale their Hessian matrix to obtain reasonable results, we do not need to do this. We attribute this to our use of the linearised predictive function, where there is no mismatch between GGN the model and posterior. \\n\\n### Relation to sparse Bayesian deep learning\\n\\nIndeed subnetwork inference is related to more traditional sparse Bayesian methods in deep learning. **As such, we added a discussion of such approaches in the related work section.** \\n\\nWe did not include them initially, since the end goal is different. Generally speaking, the papers you cited use Bayesian reasoning to perform model selection. On the other hand, we aim to make inference tractable in a fixed model. \\nIt is unclear to us how we would perform a fair comparison with the methods you mention. We do not actually compress or prune weights. However, we estimate (co-)variances over only a subset of the weights. Importantly, this means that our approach retains the full predictive power of the full network to retain high predictive accuracy. \\n\\n### Moving ECE and Brier to the main text\\n\\nThanks for the suggestion. Indeed, subnetwork inference delivers strong calibration performance. Unfortunately we could not fit the results in the main text. They are in the appendix for now (see Figure 7 in Appendix B) but we will try to move them for the camera ready version.\"}",
"{\"title\": \"Thank you for the feedback! (1/2)\", \"comment\": \"Dear AnonReviewer1,\\n\\nWe thank you for your helpful feedback! It has helped us make our paper stronger and more clear. We hope that our updated draft alleviates your concerns with our work. We address individual points below:\\n\\n### On the diagonal approximation for subnetwork selection\", \"you_bring_up_a_good_point\": \"we estimate a full covariance posterior over a subset of weights but the subset is chosen by making a diagonal approximation. Due to the large parameter space of BNNs, it is necessary to use a crude approximation somewhere. Capturing all weight correlations is simply intractable. Our paper proposes to not use a crude approximation for inference, but instead use a crude approximation for subnetwork choice. We then do expressive inference over that subnetwork. In practice we find this to significantly outperform a direct diagonal posterior approximation (See Sections 5.1, 5.3).\\n**We have revised our claims about subnetwork selection optimality and now simply present our approach as an empirically well-performing subnetwork selection strategy (with theoretical motivation).**\\n\\nWe appreciate your concerns regarding the faithfulness of diagonal subnetwork selection. We conduct the experiment you suggest, comparing our approach to random selection on MNIST rotations and CIFAR corruptions. Empirically, we find random subnetwork selection to perform poorly (similar to MAP) on ResNet18. It performs much worse than on the toy data in Section 5.1. We hypothesise this is because it is substantially more difficult to randomly find a good subnetwork in a large model such as a ResNet18, than it is in a small fully-connected network. **We have added the random baseline to the updated manuscript (see \\u201cOurs (Rand)\\u201d in Figure 4).**\\n\\n### Diagonal or Full Approach to subnetwork selection in toy experiments\\n\\nWe in fact always consider a diagonal approximation when selecting subnetworks, even in the 1D toy regression example where the full GGN is tractable. Thus all sections (including the toy experiments) use diagonal subnetwork selection. **We have updated our prose to clarify this.**\", \"following_from_our_previous_answer\": \"our toy experiments aim to show that making a diagonal approximation for subnetwork selection, but then using a full-covariance approach for inference (\\u201cWass\\u201d in 5.1) is much better than using a diagonal approximation directly for inference (\\u201cDiag\\u201d in 5.1). An important conclusion from our work is that using crude approximations for subnetwork selection in combination with expressive approximations for inference is significantly better than using crude approximations for inference directly (Sections 5.1, 5.3). **We have more strongly emphasised this point.**\\n\\n### On our choice of priors for the Image experiments (Sec 5.3)\\n\\nWe would like to clarify that all models (including deep ensembles and SWAG) use the same prior precision during training (i.e. the standard weight decay of 1e-4 for ResNet18) for comparability, e.g. all trained models are identical. 
In fact, for each experiment repetition, we only trained 6 different models: the first is used for the results of \\u201cMAP\\u201d, \\u201cOurs\\u201d, \\u201cSWAG\\u201d, \\u201cDiag-Laplace\\u201d and as the first element of \\u201cEnsemble.\\u201d We trained additional \\u201cEnsemble\\u201d elements and, finally, 1 network with \\u201cDropout\\u201d.\\n\\nAt test time, we tune the prior precision used for the Laplace approximation on a validation set for each approach individually, as done in Ritter et al. (2018) and Kristiadi et al. (2020). We use a grid search approach. Although this results in \\u201cOurs\\u201d, \\u201cDiag-Laplace\\u201d and \\u201cSWAG\\u201d using different priors, we believe this is the best setting for comparison as we present each method in its strongest configuration. \\n**Our updated text includes an additional appendix on our experimental setup to clarify points like these and make our results easier to reproduce.**\"}",
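To make the selection-then-inference split from this rebuttal concrete, here is a hedged sketch: subnetwork selection from a cheap diagonal posterior, followed by a full-covariance Laplace step over only the selected weights. The function names and the regression-style GGN (J^T J with unit noise) are assumptions for illustration, not the paper's actual implementation:

```python
import torch

def select_subnetwork(diag_variances: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k weights with the largest marginal variances under a
    *diagonal* posterior approximation (crude, but used only for selection)."""
    return torch.topk(diag_variances, k).indices

def subnetwork_laplace_covariance(jacobians: torch.Tensor, idx: torch.Tensor,
                                  prior_precision: float) -> torch.Tensor:
    """Full-covariance Laplace posterior over the selected weights only.

    `jacobians`: (N, D) per-example output Jacobians w.r.t. all weights;
    in practice only the k selected columns would be materialised."""
    J_s = jacobians[:, idx]                    # (N, k) sub-Jacobians
    ggn = J_s.T @ J_s                          # (k, k) GGN block
    precision = ggn + prior_precision * torch.eye(len(idx))
    return torch.linalg.inv(precision)         # stays tractable since k << D
```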
"{\"title\": \"Paper update overview\", \"comment\": [\"We thank the reviewers for their time in reading our paper, insightful comments and helpful suggestions. We apologise for the late response. We have taken this time to implement your suggestions into our manuscript, resulting in the following **significant changes:**\", \"We have added an additional experiment comparing our method to deep ensembles with a much larger number of ensemble members as previously (i.e. 50+ instead of 5), and studying how the number of ensemble members affects performance (see Figure 5 in Appendix B).\", \"We have added image classification results for VOGN, a scalable, state-of-the-art approach for mean-field variational inference in DNNs (Osawa et al., 2019). See Figure 4.\", \"We have added image classification results with ResNet50 (as opposed to ResNet18) to strengthen our claim about scalability in the number of model parameters (see Figure 8 in Appendix B).\", \"We have added subnetwork inference with random subnetwork selection as a baseline to our image classification experiments in Section 5.3; see baseline \\u201cOurs (Rand)\\u201d in Fig. 4.\", \"We have significantly revised Section 4. We have generalized our theory to generalized linear models (i.e. including both regression and classification) learnt by gradient descent (as used in our experiments), and better justified why the Laplace approximation is a faithful approximation to the true posterior for classification. We have also clarified the subnetwork selection procedure.\", \"We have revised our claims about subnetwork selection optimality and now instead present our approach as an empirically well-performing subnetwork selection strategy (with theoretically grounded motivation).\", \"We rephrased any references to our approach as a \\u201cpruning\\u201d method to instead a \\u201csubnetwork selection\\u201d method for clarity (as we do not do weight pruning in the classical sense). This involved updating Fig. 2 to state the percentage of weights retained instead of those \\u201cpruned\\u201d.\", \"We have added a section describing our experimental setup and the datasets we use in detail; see Appendix C.\", \"We have included a discussion on sparse Bayesian Deep Learning methods in the related work (see end of \\u201cInference over Subspaces\\u201d paragraph in Section 6).\", \"We have relaxed the claim from experimental section 5.2. It now reads: \\u201cGiven the same amount of compute, larger models benefit more from subnetwork inference than small ones.\\u201d\", \"We re-made Figure 1 to be more legible and visually appealing.\", \"We have added significantly more detail to the explanation of the linearized Laplace approximation in Section 3, Step #3, in order to improve clarity and intuition.\", \"We have done a number of updates to phrasing to enhance clarity and language error corrections throughout the paper as described in individual reviewer responses below.\", \"Please refer to our individual responses to all reviewers for further clarifications.\"]}",
"{\"title\": \"Review\", \"review\": [\"#### Summary\", \"The authors focus on the important problem of scalable approximate inference in Bayesian NNs. More specifically, they propose a method for scalable BNNs via a (full-covariance Gaussian) Laplace approximation on a (Wasserstein-based) pruned subnetwork within a deterministically-trained model. They include a theoretical analysis for a simple generalized linear model, and experiments on 1D regression, tabular regression, and larger-scale image classification with CIFAR-10 (using the dataset shift setup from Ovadia et al., (2019)). From the experiments, they show that their method generally outperforms comparable methods (including deep ensembles) on metric performance and on the ability to capture in-between uncertainty.\", \"#### Strengths\", \"Scalable approximate inference for Bayesian models is an important research area.\", \"Expressive, diverse approximate posteriors is also an important area of research, especially given the limitations of mean-field VI and recent literature.\", \"The proposed method demonstrates better results for robustness to dataset shifts, as well as in-between uncertainty that mean-field VI misses.\", \"In general, the paper is well-written, clearly-motivated, includes both theoretical and empirical results, and adequately compares to, or discusses, relevant literature in the space.\", \"#### Weaknesses\", \"The authors push on the idea of *scalable* approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can a high bar from a resources standpoint).\", \"As I noted down below, the experiments currently lack results for the standard variational BNN with mean-field Gaussians. More generally, I think it would be great to include the remaining models from Ovadia et al. (2019). More recent results from ICML could also useful to include (as referenced in the related works sections).\", \"#### Recommendation\", \"Overall, I believe this is a good paper, but the current lack of experiments on a dataset larger than CIFAR-10, while also focusing on scalability, make it somewhat difficult to fully recommend acceptance. Therefore, I am currently recommending marginal acceptance for this paper.\", \"#### Additional comments\", \"p. 5-7: Including tables of results for each experiment (containing NLL, ECE, accuracy, etc.) in the main text would be helpful to more easily assess\", \"p. 7: For the MNIST experiments, in Ovadia et al. (2019) they found that variational BNNs (SVI) outperformed all other methods (including deep ensembles) on all shifted and OOD experiments. How does your proposed method compare? I think this would be an interesting experiment to include, especially since the consensus in Ovadia et al. (2019) (and other related literature) is that full variational BNNs are quite promising but generally methodologically difficult to scale to large problems, with relative performance degrading even on CIFAR-10.\", \"##### Minor\", \"p. 6: In the phrase \\\"for 'in-between' uncertainty\\\", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., $`in-between'$).\", \"p. 7: s/out of sitribution/out of distribution/\", \"p. 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/\", \"p. 
8: s/estimates 3) is/estimates, and 3) is/\", \"In the references:\", \"Various words in many of the references need capitalization, such as \\\"ai\\\" in Amodei et al. (2016), \\\"bayesian\\\" in many of the papers, and \\\"Advances in neural information processing systems\\\" in several of the papers.\", \"Dusenberry et al. (2020) was published in ICML 2020\", \"Osawa et al. (2019) was published in NeurIPS 2019\", \"Swiatkowski et al. (2020) was published in ICML 2020\", \"p. 13, supplement, Fig. 5: error bar regions should be upper and lower bounded by [0, 1] for accuracy.\", \"p. 13, Table 2: Splitting this into two tables, one for MNIST and one for CIFAR-10, would be easier to read.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting ideas but various clarifications are needed\", \"review\": \"The paper proposes to approximate the posterior distribution of a Bayesian neural network by an approximation that consists of a deterministic component. The authors select a sub network and infer approximate posterior distributions over the weights in the sub network. All other weights are estimated via MAP point estimation. A sufficiently small sub-network allows high fidelity posterior approximations that do not make restrictive mean field assumptions to be tractable.\\n\\nThe paper is generally well written and easy to follow. The idea that BNNs have too many parameters for reliably inferring posterior distributions over all of them is a reasonable one. Splitting the posterior approximation into a deterministic and stochastic component to deal with this issue is interesting. The experiments do indicate improvements over fully factorized posterior approximations.\", \"concerns\": [\"The process of selecting the sub-network over which the full posterior distribution is inferred is crucial and herein lies my main concern with the approach presented in the paper. Minimizing the Wasserstein distance between the subnetwork posterior p(W_s | data) and the true posterior over all weights p(W | data) is sensible. However, since the true posterior is intractable, the authors instead appear to minimize the distance to an Laplace approximated posterior q(W | data). While this is fine for smaller models when the approximation can use a full covariance, why does it make sense for larger models where the authors use diagonal approximations to the generalized Gauss Newton matrix (Section 5.3)? If the goal is to do better than such diagonal approximations then why treat the subnetwork with the lowest Wasserstein distance to such crude approximations as optimal? How much worse is the random selection strategy on the tasks listed in section 5.3?\", \"In the toy experiments, when the full covariance posteriors approximations are available, using the deterministic + stochastic sub-network approximation does a little bit worse than the full covariance approximation but better than the diagonal approximation. If instead the diagonal posterior approximation is used to select the subnetwork in these models, does the resulting approximation still improve upon the diagonal approximation (as seems to be happening in the experiments in 5.3)?\", \"The experiment in section 5.3 has other curious issues. Different methods appear to be using different priors (Gaussians with different precisions). How do we then know that the benefits reported stem from the proposed approximation rather than the differences in the model, especially since diagonal Laplace is using a Gaussian prior with a precision that is 80 times smaller? What priors were used for deep ensembles and SWAG?\", \"The grid search by which the prior precisions were selected needs more details. The presented numbers are surprisingly small suggesting that a-priori with high probability we expect all weights to be zero, and hence the prior predictive functions to be all zero as well. 
This does not seem like a sensible prior.\", \"At the very least there needs to be a discussion about sparse Bayesian deep learning techniques [1, 2, 3] (and preferably an empirical comparison) that use sparsity-inducing priors to prune away weights and nodes from a larger network instead of the approach presented here.\", \"(Minor) The conclusion of 5.3 that subnetwork posteriors are better calibrated is not supported by Figure 4. I would suggest moving the ECE and Brier score plots from the appendix to Figure 4.\", \"Overall, although I have several concerns, they primarily stem from experimental issues in 5.3. Assuming that the authors are able to sufficiently address these in the rebuttal, I am leaning towards a borderline accept.\", \"[1] https://www.jmlr.org/papers/v20/19-236.html\", \"[2] https://papers.nips.cc/paper/7372-posterior-concentration-for-sparse-deep-learning.pdf\", \"[3] https://arxiv.org/pdf/2002.10243.pdf\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good idea, solid theoretical foundation, but empirically weak\", \"review\": \"The authors present a new method for Bayesian deep learning motivated by the difficulty of posterior inference in the \\\"overparameterized\\\" regime of deep neural network models. The proposed method provides a principled strategy for selecting a subset of the neural network's parameters (forming a so-called \\\"subnetwork\\\") for which a full-covariance approximate posterior can be computed. The authors use the well-studied Laplace approximation with the generalized Gauss-Newton Hessian approximation for the covariance. An empirical analysis is presented which attempts to assess the efficacy of the proposed method in prediction accuracy and uncertainty quantification.\\n\\nThe presented approach is novel and appears to be a promising contribution to the study of Bayesian neural networks. As a fellow Bayesian, I applaud the authors' efforts. Unfortunately, the paper has a number of significant weaknesses which I detail below. The authors' experimental results appear to me to not sufficiently support some of their claims. There are also a number of formatting issues. As such, I cannot recommend accepting this work in its current state. I would be happy to revisit my rating after revision and discussion.\", \"pros\": [\"The basic idea behind the method is very interesting and constructive for the field\", \"Theoretical justification is solid\", \"Overall well written with only minor clarity issues\"], \"cons\": \"- Experimental results lack thoroughness\\n- Results do not seem to adequately support authors' (somewhat bold) claims\\n- Lacking detail in certain areas of the method description\\n\\n\\nI will organize my comments for the authors by section.\\n\\n#### Section 1\\n1. \\\"In turn, we can apply more expensive, but more faithful, posterior approximations to just that subnetwork to achieve better uncertainty quantification than if we apply cheaper, but more crude, approximations to the full network.\\\"\\nThis sentence is too long and too difficult to read. Please try rephrasing to make it more clear.\\n\\n#### Section 2\\n\\n2. It would be nice to see the subnetwork posterior fully defined in terms of the fixed weights, e.g:\\n$$p(W_s | y,X,W^*) \\\\propto p(y|X,W_s,W^*)p(W_s)p(W^*) = p(W_s|y,X)p(W^*)$$\\nI don't meant to be pendantic here. It wasn't obvious to me (at first) that the delta functions were, in fact, a degenerate prior over the fixed weights, $W^*$; this would make it more clear.\\n\\n3. It might be worth discussing here or later what implications the degenerate prior has for the subnetwork posterior. This seems to, at least, preclude any ideas about possibly applying gradient-optimized MAP directly to the subnetwork posterior.\\nFurthermore, what are the benefits of this approach as opposed to assigning a non-degenerate Gaussian prior with fixed variance to the remaining weights?\\n\\n#### Section 4\\n\\n4. \\\"We now analyze the following procedure...\\\"\\nAre you analyzing the procedure for selecting a subnetwork? Or the entire procedure described previously? This is not made clear.\\n\\n5. In step 1, you specify the analytical solution for w_MAP, but in section 3 you describe MAP as being performed using stochastic gradient optimization. Which one are you using? Related to previous point, is this the same step as outlined in section 3? Or a separate step?\\n\\n6. In general, this section needs work. 
It is not clear to me where you actually detail how the subnetwork is selected. You discuss approximating the optimal subnetwork w.r.t. the Wasserstein distance in equation 11, but this equation requires M_S to be already available. M_S is generated by \\\"a (one-shot) procedure of choice\\\"; I expected this choice to be clearly explained in this section, but this is not the case.\", \"#### Section 5.1\", \"7. It's a bit confusing that you say \\\"50%, 97%, and 99% of model parameters\\\" when really you mean that this percentage of weights were pruned. It would be visually more intuitive as well in Figure 2 if you named these 50%, 3%, 1%, since the posterior size is getting *smaller* with each one.\", \"8. Just to clarify, homoscedastic here is w.r.t. the sequence of data points? i.e. you have one variance rather than one per data point?\", \"9. Please provide some explanation for the prior precision being set to 3.\", \"10. Bolding most of the last sentence in section 5.1 is unnecessary and looks weird. I would suggest bolding individual words or phrases in the sentence, or just not bolding at all.\", \"#### Section 5.2\", \"11. 1e4 ->$10^4$\", \"12. It would be helpful to provide a very brief summary of the three datasets as well as what the \\\"gap variants\\\" are (it's fairly straightforward, just creating \\\"gaps\\\" in the training data). I cannot find the \\\"kin8nm\\\" dataset on UCI, so at a bare minimum this needs to be clarified.\", \"13. Figure 3 is quite difficult to interpret at first glance. You should at least use a discretized perceptually uniform colormap to make more visually apparent the pattern in increasing network size. I would also consider using a line plot rather than a scatter plot here, but this is a matter of taste.\", \"14. Why are you using the log likelihood as opposed to the full posterior probability? Since we're in a Bayesian setting here, it seems worth considering the priors.\", \"15. While log likelihood can, in principle, serve as an indirect proxy for uncertainty calibration, there are important caveats to consider which make it unconvincing as a standalone measure (see \\\"Pitfalls of In-Domain Uncertainty Estimation...\\\", Ashukha et al. 2020). Furthermore, a quick scan of recent literature (cited in this work) confirms that most authors tend to use multiple methods of assessment (e.g. Brier score, accuracy calibration curves, etc), not just log likelihood. The raw likelihood scores also don't provide any interpretability in *how much better* one model is than another, i.e. what concrete impact does the difference have on uncertainty quantification? Thus, **it's unconvincing that here (and elsewhere) you rely entirely on log likelihood** to demonstrate the alleged superiority of your method.\", \"16. It's noteworthy that 2/3 of the \\\"gap\\\" datasets produce inconsistent results, particularly considering that they are designed specifically to test out-of-sample uncertainty. This needs to be discussed.\", \"17. typo: modelling -> modeling\", \"18. In light of points 15 and 16, I do not think you have sufficient empirical basis to make the claim that \\\"...it is better to perform subnetwork inference in a large model than full network inference in a small one\\\". The results are simply too weak to support this. I suggest either running a more comprehensive experiment or dialing back the confidence of this claim.\", \"#### Section 5.3\", \"19. 
You mention \\\"grid search\\\" multiple times but do not provide any indication of the grid you searched over.\\n\\n20. typo: sitribution -> distribution\\n\\n21. Please specify the error metrics being used in figure 4 (and discussed in the text).\\n\\n22. Once again, marginally higher likelihood scores are not, in my view, sufficient basis to claim that your method assigns \\\"high uncertainty to out of distribution points\\\" or that it is better calibrated. This is especially the case considering that there seems to be little to no improvement in the error. The case for improved \\\"robustness\\\" also appears to be equally unfounded on these grounds.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good PoC but the main experiment raises questions\", \"review\": \"The paper presents a new way for approximating posteriors in Bayesian DNN. The network is split into two subnets. One uses only point estimates while another one uses full (non-diagonal) Gaussian approximation. The structure of that subnet is found by taking largest second derivatives of Hessian of linearized DNN (the authors call it generalized Gauss-Newton (GGN) matrix). The authors show that under very specific conditions such choice correposnds to minimization of Wasserstein-2 distance between their approximation and the true posterior. In the experimental part they provide a set of explorative experiments showing that it may be better to use their approximation for inference in large newtork than both using standard (simple) approximations in large network and full Bayesian inference in small network. This is very nice methodologically and I welcome such demonstration but this can only be considered as a (good) proof of concept. The flagship experiment however looks very inconvincing (see below).\\n\\nPros.\\n1. Interesting idea of finding a better approximation for the posterior on subset of parameters.\\n2. Methodologically nice PoC.\\n3. Thorough comparison with alternative similar techniques such as SWAG.\\n\\nCons.\\n1. The authors claim that they theoretically characterize the descrepancy and derive optimal strategy (see contribution 3) but they (0) consider linearized approximation of DNN (they admit this); (1) do this ONLY for regression problem although their flagship experiment is on classification problems; (2) the method they derive is based on un-natural assumption that covariance matrix is diagonal (it the matrix is diagonal there is no way to approximate with a full submatrix anyway). I would not call it an optimal strategy - it rather looks like a reasonalbe heuristic.\\n2. My major concern is section 5.3. The authors claim that their method estimates uncertainty better than all baselines including deep ensembles (DE). I am afraid in its current form the comparison is not fair. They use only DE of size 5 while their method requires approx. (42K)*(42K) = 1756B of parameters that is enough to keep in memory about 160 initial networks. So it would be fair to compare aganist DE of size 160 since we know that larger DE estimate uncertainties better. It is not that surprizing that the suggested method outperforms other baselines since all of them require much less memory. So I would recomend the authors to compare (1) their current model aganist DE that requires similar amount of memory; (2) their reduced model (that requires approximately the same amount of memory as baselines) aganist other baselines to check wether the proposed algorithm may still estimate uncertainty better given the same memory budget.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
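For reference, the memory estimate in the last review above ("Good PoC but the main experiment raises questions") can be checked with back-of-the-envelope arithmetic; the ResNet-18 parameter count of roughly $1.1\times10^{7}$ is our assumption:

$$(4.2\times10^{4})^{2} \approx 1.76\times10^{9}\ \text{covariance entries}, \qquad \frac{1.76\times10^{9}}{1.1\times10^{7}\ \text{parameters per ResNet-18}} \approx 160,$$

which matches the reviewer's figure of roughly 160 networks' worth of memory.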
7Yhok3vJpU | High-Likelihood Area Matters --- Rewarding Correct, Rare Predictions Under Imbalanced Distributions | [
"guangxiang zhao",
"Lei Li",
"Xuancheng Ren",
"Xu Sun",
"Bin He"
] | Learning from natural datasets poses significant challenges for traditional classification methods based on the cross-entropy objective due to imbalanced class distributions. It is intuitive to assume that the examples from rare classes are harder to learn so that the classifier is uncertain of the prediction, which establishes the low-likelihood area. Based on this, existing approaches drive the classifier actively to correctly predict those incorrect, rare examples. However, this assumption is one-sided and could be misleading. We find in practice that the high-likelihood area contains correct predictions for rare-class examples and it plays a vital role in learning imbalanced class distributions. In light of this finding, we propose the Eureka Loss, which rewards the classifier when examples belonging to rare classes in the high-likelihood area are correctly predicted. Experiments on the large-scale long-tailed iNaturalist 2018 classification dataset and the ImageNet-LT benchmark both validate the proposed approach. We further analyze the influence of the Eureka Loss in detail on diverse data distributions. | [
"classification",
"imbalance",
"long-tailed",
"likelihood",
"focal loss"
] | Reject | https://openreview.net/pdf?id=7Yhok3vJpU | https://openreview.net/forum?id=7Yhok3vJpU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"YRjw_z79vfh",
"0YjSrnCxtZ2",
"Y4JMBmx-y_s",
"HYso_InX05",
"ng8ix4KoOT-",
"w5Y6Y9KDXDk",
"mInilwyz0eT",
"BEAS7F3O-4",
"QRQRZxS-51N",
"3WcGM2zccmw",
"opqKNRDTtVc"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040357374,
1606306306022,
1606305784194,
1606305373447,
1606305194442,
1606304316271,
1606302158838,
1603955749659,
1603896380064,
1603861146865,
1603477132848
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3770/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3770/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3770/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3770/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3770/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3770/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3770/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3770/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3770/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3770/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This submission got 1 reject and 3 marginally below the threshold. The concerns in the original reviews include (1) lack of theoretical justification. The motivation and claim are from empirical observation; (2) the performance improvement is minor compared with the existing methods; (3) some experiment settings and details are not explained clearly. Though the authors provide some additional experiments to the questions about the experiments, reviewers still keep their ratings. The rebuttal did not address their questions. AC has read the paper and all the reviews/discussions. AC has the same recommendation as the reviewers. The major concerns are (1) the theoretical justification is not clear. The additional explanation given by the authors in their rebuttal, i.e., the prediction becomes sharper and thus the model generalization ability can be improved, is not justified. (2) the experiments are not very convincing and can be further improved in the following two aspects: (1) the motivation experiments should be conducted in a consistent manner, instead of using simplified EL in some cases; (2) the effectiveness of EL should be more significant otherwise it is not clear whether the claim is true or not. At the current status of this submission, AC cannot recommend acceptance for the submission.\"}",
"{\"title\": \"We will release the code and a example of implementation for Eureka Loss is in the Supplemetary Material\", \"comment\": \"The pytorch version of Eureka Loss is available in the Supplemetary Material, this new criterion is easy to plug in your code.\"}",
"{\"title\": \"Part2: Details about baselines and comparisons\", \"comment\": \"Response:\\n\\nDue to the space limit, we did not explain baselines and comparisons in detail in the initial submission. Now we have included more details about baselines and comparisons in Section 2 and Section 5.2 of the revision. \\n\\na) Are the baselines augmented with CB? BBN and Decoupling-LWS do not use In the old version, we described BBN (Zhou et al., CVPR20) and Decoupling-LWS (Kang et al., ICLR20) in the second paragraph in the Section \\u2018Related Work\\u2019, they are both state-of-the-art class-balanced methods (CB) and use CE at the beginning of the training, and they belong to deferred CB in general. Therefore, the comparisons between deferred EL, deferred CB+ EL and deferred CB are fair, and the augmented EL are better than these advanced class-balanced methods. We report the performance of deferred CB+EL to show that our method is additive with the deferred class-balanced training.\\n\\nThe comparison between CB and EL as well as deferred CB and deferred EL show that rewarding correct predictions for tail classes not only less impair the learning of head classes but also learn better for tail classes compared to penalizing more for tail classes. MBJ and FSA are recently proposed state-of-the-art feature transferring method in this field and they are less related to our work, we take them into comparison to keep completeness of comparisons, in this comparison our method deferred EL and deferred CB+EL surpasses them by a large margin. By the way, MBJ is augmented with class re-balancing strategy and FSA is a two-stage method in which the standard CE training is adopted in the first phase. \\n\\nb) Architectural and training details are described in C 3.2 \\u2018Training settings\\u2019, we fix these variables in our experiments.\\n\\n c) The standard deviation In this paper, we have run several experiments and reported the mean value. The average standard deviations for the main resutls on Cifar10, Cifar100, Coco Detection, ImageNet LT and iNaturalist 2018 are about 0.6, 0.6, 0.2, 0.3, 0.7, 0.3, they are relatively small. We accept your suggestion, and we have reported some of results including standard deviation in Table 9 and table 10 in the rebuttal revision. We will report the detailed deviation for each result in the final version. Besides, although the improvements of HFL on FL are relatively small, the improvements of Eureka Loss on Cross Entropy and other baselines are big.\"}",
"{\"title\": \"Explain the idea and the transition from HFL to Eureka Loss\", \"comment\": \"Explain the idea:\\n\\nAs shown in Figure 5, the example of inaccurate penalty prediction is also rewarded with accurate prediction. The loss in the high likelihood area becomes steeper. This induces model gives prediction near 0 or 1, and the decision becomes clearer, which may increase the generalization ability of the model. The reward gives rare classes more encouragement, which makes the model get rid of the learning dilemma and be encouraged to learn more difficult patterns.\", \"explain_the__transition_from_hfl_to_eureka_loss\": \"Eureka Loss is steeper in high-likelihood area than HFL, it rewards more than HFL for correct predictions and achieves better performance. We have added a subsection B.4 to expain the motivation for using EL.\\n\\nIn section 2.2, we propose Halted Focal Loss(HFL) and compare it to Focal Loss(FL) to illustrate the potential of the high-likelihood area. However, its loss is no steeper than Cross Entropy (CE). Moreover, Focal Loss does not beat CE in the setting of multi-class classification. In order to bridge the gap between the possibly weak motivation experiment of Halted Focal Loss and the proposed method Eureka Loss. We perform the experiments of HFL on large-scale long tailed image classification task iNaturalist 2018 and propose simplified Eureka Loss in the rebuttal revision (subsection B.4 \\u2018Complementary experiment to the Motivation Experiment\\u2019) \\n\\nIn the simplified Eureka Loss, the encouragement is removed, and to keep the low-likelihood area unchanged, a new bonus term starts rewarding the model from p=0.5, so it is a piece-wise function like HFL, but the loss in high-likelihood area is even steeper than CE. As is shown in Table 10, HFL(t=0.5) is also better than HFL and FL in terms accuracy of tail classes on the large scale long-tailed classification dataset iNaturalist 2018, This result once again shows that high-likelihood area matters and near-correct predictions of rare classes play a major role. \\n\\nBut HFL is the combination of Focal Loss (in the low likelihood area) and Cross Entropy (in the high-likelihood area) and the performance is constrained. Unlike HFL, the loss of Simplified Eureka Loss is built on CE and the loss is much steeper than Cross Entropy in the high-likelihood area, it outperforms Cross Entropy(CE) and HFL in terms of all metrics, especially on the subset of tail classes. Eureka Loss reported in Table 2 is a continuous version of simplified Eureka Loss with an additional encouragement for rare classes, similar to HFL(t=0.5), this setting which rewards more for rare classes achieves the best overall performance. We hope the results of simplified Eureka Loss could explain the transition from HFL to Eureka Loss in the paper.\\n\\n\\n\\nWe have updated the Figure 5 the test likelihood distribution after training with each method, and we have included HFL in it.\"}",
"{\"title\": \"Explain the idea behind Eureka Loss and clarify the issue about hyper-parameters.\", \"comment\": \"Explain the idea: As shown in Figure 5, the example of inaccurate penalty prediction is also rewarded with accurate prediction. The loss in the high likelihood area becomes steeper. This induces model gives prediction near 0 or 1, and the decision becomes clearer, which may increase the generalization ability of the model. The reward gives rare classes more encouragement, which makes the model get rid of the learning dilemma and be encouraged to learn more difficult patterns.\", \"hyperparameter\": \"The selection of hyperparameter is much easier than related methods like CB, e.g. the beta is set to 0.9999 for all long-tailed image classifcations but it should be tuned for every distribtion in their paper.\\n\\nNegative loss is not Wrong, loss can be negative and has been variously called a reward function, a profit function, a utility function, a fitness function in previous work. Moreover, we can also add a constant to keep it postive, the sign does not matter but the monotonicity and the Steepness matter. We introduce the bonus item to encourage the model instead of penalizing the model for rare classes, and the bonus proves effective in our paper, either for Eureka Loss in Table 2 or Simplified Eureka Loss in Table 10. Moreover, we can see from the Figure 4 and Figure 5 that models trained with bonus for correct predictions make clearer decisions.\"}",
"{\"title\": \"Part1: Response to the questions about comparisons and we have included a complemetary experiment with bigger improvements and large-scale dataset to stenghen the motivation.\", \"comment\": \"1.\\tHyper-parameters of Focal Loss:\", \"response\": \"We reported mean performance of several runs in the Section2.2 using the optimal setting discussed in the response 1, in section 2.2, we propose Halted Focal Loss(HFL) and compare it to Focal Loss(FL) to illustrate the potential of the high-likelihood area. However, its loss is no steeper than Cross Entropy (CE). Moreover, Focal Loss does not beat CE in the setting of multi-class classification.\\n\\nIn order to bridge the gap between the possibly weak motivation experiment of Halted Focal Loss and the proposed method Eureka Loss. We perform the experiments of HFL on large-scale long tailed image classification task iNaturalist 2018 and propose simplified Eureka Loss in the rebuttal revision (subsection B.4 \\u2018Complementary experiment to the Motivation Experiment\\u2019)\\nIn the simplified Eureka Loss, the encouragement is removed, and to keep the low-likelihood area unchanged, a new bonus term starts rewarding the model from p=0.5, so it is a piece-wise function like HFL, but the loss in high-likelihood area is even steeper than CE.\\n\\nAs is shown in Table 10, HFL(t=0.5) is also better than HFL and FL in terms accuracy of tail classes on the large scale long-tailed classification dataset iNaturalist 2018, This result once again shows that high-likelihood area matters and near-correct predictions of rare classes play a major role. But HFL is the combination of Focal Loss (in the low likelihood area) and Cross Entropy (in the high-likelihood area) and the performance is constrained. Unlike HFL, the loss of Simplified Eureka Loss is built on CE and the loss is much steeper than Cross Entropy in the high-likelihood area, it outperforms Cross Entropy(CE) and HFL in terms of all metrics, especially on the subset of tail classes. \\n\\nEureka Loss reported in Table 2 is a continuous version of simplified Eureka Loss with an additional encouragement for rare classes, similar to HFL(t=0.5), this setting which rewards more for rare classes achieves the best overall performance. \\nWe hope the results of HFL on iNaturalist 2018 and the results of simplified Eureka Loss make the motivation stronger\"}",
"{\"title\": \"The first concern may originate from the misunderstanding and we have included more details about baselines and comparisons in the rebuttal revision to address the second concern.\", \"comment\": \"1.\\tAs to our motivation, we do not claim that not to penalizing the incorrect predictions, our motivation is that the role of high-likelihood area is overlooked and we should increase their relative importance to the low-likelihood area by rewarding the correct predictions in the meanwhile. Both HFL and EL include this idea, and neither reduces the penalty for inaccurate predictions.\\nAs shown in the left subfigure in the Figure 2 and defined in the Formula 6, we compare the Halted Focal Loss (HFL) to the Focal Loss (FL) to demonstrate the relative importance of high-likelihood area, but we do not change the loss landscape in the low-likelihood area and thus do not stop penalizing incorrect predictions. We plot Eureka Loss (EL) in the right subfigure of Figure 1 and its variants in the left subfigure of Figure 3, and define EL generally in the Formula 7, the additional bonus in the EL does not alleviate the penalization for incorrect predictions, and it only strengthens the relative importance of optimization in the high-likelihood area.\\n2.\\tDue to the space limit, we did not explain baselines and comparisons in detail in the initial submission. Now we have included more details about baselines and comparisons in Section 2 and Section 5.2 of the rebuttal revision. \\nIn the old version, we described BBN (Zhou et al., CVPR20) and Decoupling-LWS (Kang et al., ICLR20) in the second paragraph in the Section of Related Work, they are both state-of-the-art class-balanced methods (CB) and use CE at the beginning of the training, and they belong to deferred CB in general. \\nTherefore, the comparisons between deferred EL, deferred CB+ EL and deferred CB are fair, and the augmented EL are better than these advanced class-balanced methods. We report the performance of deferred CB+EL to show that our method is additive with the deferred class-balanced training. \\nThe comparison between CB and EL as well as deferred CB and deferred EL show that rewarding correct predictions for tail classes not only less impair the learning of head classes but also learn better for tail classes compared to penalizing more for tail classes.\\nMBJ and FSA are recently proposed state-of-the-art feature transferring method in this field, we take them into comparison to keep completeness of comparisons, in this comparison our method deferred EL and deferred CB+EL surpasses them by a large margin.\"}",
"{\"title\": \"The motivation is not convincing enough.\", \"review\": \"These are several concerns:\\n1. In the view of motivation, I don't think the motivation is strong enough and is convincing. Also, I don't think rewarding correct predictions but not penalizing incorrect ones is a reasonable way. In my opinion, rewarding the correct predictions may be a good way, but penalizing the incorrect ones should also be important. \\n2. In the view of experiments, though the authors add Table 7 in the appendix, which is the result for training 90 epochs, I still doubt why Eureka Loss does not work better than recent works when training 200 epochs (which is also a common setting recently). And it seems that using CE at the beginning of training is important, and +CB$^+$ works the best. Moreover, In table 2, the results on \\\"few\\\" are especially not very good comparing with others, which makes it harder for me to believe that rewarding the high-likelihood area really matters a lot for tail classes. It seems that the experiment results are not strong enough to support the proposed opinion.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Intriguing submission; experiments might be improved\", \"review\": \"The submission makes an intriguing claim that retaining focus on correctly predicted rare classes can improve performance for training with class-imbalanced datasets.\\n\\nTo illustrate this claim, the paper shows that one can find improvements at overall accuracy if a combination of the Focal Loss (which weights down examples with high predictive likelihood) and the cross-entropy loss is used such that the loss transitions from the Focal Loss to the CE for examples with predictive confidence above a threshold, for examples belonging to the top rarest classes. On long-tailed CIFAR-10, this produces a mild improvement at overall accuracy (around 0.5%) when the top 40% rarest classes receive this mixed loss. Further experiments with COCO-detection finds sparse improvements (around 0.4%) when applying the mixed loss to the tail classes. \\n\\nBased on the above findings, the paper argues for not weighting down confident predictions, especially if these belong to rare classes. However, perhaps these experiments are insufficient to arrive at such a conclusion? To ensure that the minor improvements in CIFAR-10 are in fact due to the claimed reasoning, one could also look at other combinations of the losses that do not conform to the claim. For example, apply the loss to the top k% most confident examples (without stratifying by rare classes), randomly select k% of images, etc. For COCO-detection, apply HFL to the head classes, and FL to the tail classes. Since improvements are so small, it would also be nice to see some standard deviation bars over multiple trials. Also, were the choices of Focal Loss hyper-parameters made to elicit their best performance? From Figure 2, it looks like it underperforms the cross-entropy loss.\\n\\nThe paper proposes a new loss meant to \\\"reward the well-classified rare examples\\u201d. This augments the cross-entropy loss with a log(1-p_y) term scaled with a number that reflects the frequency of class y, such that rarer classes are scaled higher. \\n\\nExperiments have been conducted on 2 image classification datasets and 1 dialogue dataset. In all cases, the proposed loss appears to result in improvements over baselines. \\n\\nSome questions/comments about the experiments:\\n - It appears that the proposed loss performs particularly well when combined with CB. Are the competing methods also similarly augmented? \\n - For Table 4, why is the Focal Loss only evaluated for 2 settings of gamma? Shouldn\\u2019t there be a hyper-parameter search and the best gamma used?\\n - There are a lot of comparisons, with a lot of numbers being taken from past reported results. For all such comparisons, has it been ensured that the architectural and training details are fixed across comparisons? 
Otherwise the comparisons might not be fair, especially given that reported improvements are minor.\\n - Especially when improvements are minor, it becomes important to look at aggregate numbers, so I\\u2019d suggest reporting standard deviations over multiple trials for all experiments.\", \"some_typos\": \"\\u201cdown applications\\u201d \\u2014> \\u201cdownstream applications\\u201d\\n\\u201ca effective number\\u201d \\u2014> \\u201can effective number\\u201d\\n\\u201cthus the likelihood\\u201d \\u2014> \\u201cso that the likelihood\\u201d\\n\\u201cdeferred courage\\u201d \\u2014> \\u201cdeferred encouragement\\\"\\n\\nOverall, the paper is clearly written and reports exhaustive experiments (with the caveats/questions above). While the motivating experiments in Section 2.2 are not very compelling, in part due to the very minor improvements, the key intuition that the classification of rare-class hard examples should continue to be encouraged (so that their predictive confidence doesn\\u2019t drop as these examples are weighted down by some of the other methods) sounds interesting, although some of the phrasing about \\u201crewarding well-classified examples\\u201d can be a bit awkward. My main concerns as of now are about experimental details, which are described above in the questions.\", \"post_rebuttal\": \"Thanks to the authors for responding. I'm still not sure if the experiments are particularly compelling. There appear to be differences amongst the baselines with regard to class balancing, and the motivating section is still weak; there are new experiments on a larger dataset, but now with a different loss (simplified EL) which is close enough to the proposed loss that this does not work very well as a motivation anymore. Apart from this, taking some of the comments from the other reviewers and the authors' responses into account, I am retaining my initial rating.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The experimental setting needs to be clarified\", \"review\": \"This paper deals with learning imbalanced class distributions. First, it empirically finds that the high-likelihood area for the rare classes benefits classification. Then, based on the findings, it proposes a new learning objective called Eureka Loss, which can be viewed as a combination of the frequency-based and likelihood-based methods to reward the classifier when examples belong to rare classes in the high-likelihood area are correctly predicted. Empirical results on two typical tasks (i.e. image classification and language generation tasks) illustrate its superiority compared with other baselines.\\n\\n\\n###########################################################################################\", \"pros\": \"1. Overall, it is well-written. \\n2. It clearly discusses the existing two methods (i.e. frequency-based methods and likelihood-based methods). Furthermore, it highlights the limitation of likelihood-based methods that they neglect the correctly-predicted tail class examples.\\n3. The motivation for the design of the new learning objective(i.e., Eureka Loss) is based on the empirical finding that the high-likelihood area of the rare examples is important to improve the performance.\\n\\n###########################################################################################\", \"cons\": \"1. The finding is mainly on empirical observations, which may lack theoretical support. Why is the high-likelihood area of the rare examples is important for generalization?\\n2. For the experimental settings, e.g. iNaturalist 2018, the i.i.d. assumption does not hold for the training and test set.\\n3. For the experimental results, how to tune the hyperparameter of the Eureka Loss, in validation set or test set? Since the reason in 2, I guess the hyper-parameter selection becomes difficult.\", \"minor_comments\": \"For the last subfigure in Figure 1, the ordinate value for the loss is negative, which is wrong.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting finding but lacking explanation/understanding\", \"review\": \"Summary:\\n\\n- This paper made a finding that weighting up correct predictions for rare class examples also can help to improve the performance of imbalanced classification. In light of this finding, it proposes the Eureka Loss to add additional gradients for examples belong to rare classes in the high-likelihood area when correctly predicted. Experiments on several large-scale benchmarks demonstrate its effectiveness.\", \"pros\": [\"The paper is clearly written and easy to follow.\", \"The experiments are thorough and demonstrate the effectiveness.\"], \"cons\": [\"While the finding is quite interesting, I think the design of the proposed algorithm is quite arbitrary. It's not clear to me why the authors choose to add a term for rare classes rather than changing the weights directly. Why don't the authors use HFL in the end?\", \"Currently it seems that there lacks complementary theory/intuition that could explain why weighting up the already correctly classified rare examples help with the performance.\"], \"additional_questions\": \"- Figure 4 seems quite interesting. It seems that the functionality of Eureka Loss is quite different from HFL. I could intuitively understand that the Eureka loss function would encourage the examples to have likelihood of either 1 or 0. Have the authors visually checked the examples with a likelihood of 0? Does that mean training on a carefully selected subset gives better performance?\\n\\n----\\npost-rebuttal update \\n\\nI thank the authors for the responses. While I still think the idea is potentially interesting and original, I could not increase the score given the fact that this manuscript is naturally incremental without theoretical justifications.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |