forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---|
B1eXbn05t7 | Open-Ended Content-Style Recombination Via Leakage Filtering | [
"Karl Ridgeway",
"Michael C. Mozer"
] | We consider visual domains in which a class label specifies the content of an image, and class-irrelevant properties that differentiate instances constitute the style. We present a domain-independent method that permits the open-ended recombination of the style of one image with the content of another. Open-ended simply means that the method generalizes to style and content not present in the training data. The method starts by constructing a content embedding using an existing deep metric-learning technique. This trained content encoder is incorporated into a variational autoencoder (VAE), paired with a to-be-trained style encoder. The VAE reconstruction loss alone is inadequate to ensure a decomposition of the latent representation into style and content. Our method thus includes an auxiliary loss, leakage filtering, which ensures that no style information remaining in the content representation is used for reconstruction, and vice versa. We synthesize novel images by decoding the style representation obtained from one image with the content representation from another. Using this method for dataset augmentation, we obtain state-of-the-art performance on few-shot learning tasks. | [
"content",
"style",
"recombination",
"image",
"leakage filtering",
"content representation",
"leakage",
"visual domains",
"class label",
"properties"
] | https://openreview.net/pdf?id=B1eXbn05t7 | https://openreview.net/forum?id=B1eXbn05t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HylFQRVTkV",
"B1gOw6nOCX",
"Bkld7a3OCX",
"BJxFbahuRm",
"SJxASsu527",
"ByxmcFO527",
"rJlBWXdUh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544535585125,
1543191904450,
1543191839767,
1543191809086,
1541208902436,
1541208459040,
1540944637051
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1157/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1157/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1157/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1157/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1157/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1157/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1157/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper is on the borderline. From my reading, the paper presents a reasonable idea with quite good results on novel image generation and one-shot learning. On the other hand, the comparison against the prior work (both the generation task and the one-shot classification task) is not convincing. I also feel that there are many works with similar ideas (I listed some below, but this is not an exhaustive or comprehensive list), but they are not cited or compared, so I am not sure whether the proposed concept is novel at a high level. Although some implementation details of this method may provide advantages over other related work, such a comparison is not clear to me.\\n\\nDisentangling factors of variation in deep representations using adversarial training\\nhttps://arxiv.org/pdf/1711.06454.pdf\\n\\nhttp://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Disentangling_Factors_of_CVPR_2018_paper.pdf\\nCVPR 2018\\n\\nSeparating Style and Content for Generalized Style Transfer\\n\\nFinally, I feel that the writing needs improvement. Although the method is intuitive and the idea is simple, the paper seems to lack full details (e.g., a principled derivation of the model as a variant of the VAE formulation) and precise definitions of terms (e.g., the second term of the LF loss).\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"decision\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their careful evaluation and feedback. Regarding the related work you cite, your references [1] and [2] are variations on the neural style transfer method of Gatys et al. [2016]. In this approach, content is defined as the image itself, and style is defined in terms of spatially invariant image statistics (the Gram matrix). This model is often described as a texture-transfer method, although the notion of texture can be quite abstract. Although this method does perform open-ended recombination (it can work for any new pair of images defining content and style), it is limited to transferring texture and not arbitrary style. For example, it could not re-render faces in different poses as in our Figure 1. It\u2019s therefore highly unlikely to work well for tasks such as face or character style transfer. The structured GAN--your reference [3]--is shown to work only with a fixed set of classes. We appreciate that the method could be extended to use an embedding representation to work with open-ended content, but such an extension is beyond the scope of this work. We have incorporated a discussion of the related work [1,2,3] in our manuscript.\\n\\nRegarding the U-Net skip connections and their effect on the leakage filtering objective: leakage filtering places constraints on the recombined images, rather than on the latent representations. It is therefore compatible with architectures using skip connections. 
Predictability minimization is a regularizer on the latent representation, and would be incompatible with skip connections, but we do not explore that case due to the poor performance of predictability minimization.\\n\\nRegarding how the final term of the leakage filtering loss, $L_{LF}$, is computed: the histogram loss [Ustinova et al., 2016] is used for evaluating the content in a reconstruction, exactly as we used the histogram loss for determining the content embedding in the first place. It is a simple and elegant approach.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their thorough evaluation and comments. In response to suggested comparisons with prior work, we have updated our paper with additional citations. However, the work cited by the reviewer does not address the problem we tackle. Our goal is to transfer style with open-ended content; Siddharth et al. [2017] and Tzeng et al. [2017] are concerned with a fixed set of content classes. Our goal is to perform content-style recombination, whereas Sohn et al. [2017] is primarily concerned with image completion and segmentation. While Tzeng et al. [2017] uses adversarial training via the GAN objective, it is otherwise not related to predictability minimization [Schmidhuber, 1992], which is a distinct adversarial method.\\n\\nRegarding the trade-off of different losses: we explore varying the loss coefficients for CE, PM, and LF. These results are shown in Figure 8 in the Appendix. All components of STOC are needed to attain maximal performance.\\n\\nRegarding low-shot learning on Omniglot: we ran new simulations, included in the updated paper, showing the boost due to traditional data augmentation approaches (rigid image transformations), and showing a significant additional benefit of augmentation by synthetic examples. This puts our work on the same footing as the (still unpublished) DAGAN model by Antoniou et al. [2017], which includes some (undescribed) form of traditional data augmentation. With matched baselines, we perform comparably to DAGAN. DAGAN is designed specifically to perform data augmentation, whereas we are using data augmentation as a quantitative evaluation metric. DAGAN is unable to obtain the other results we report, such as recombination of content and style (Figures 1, 4, and 5), and does not explicitly perform content-style decomposition. 
Although DAGAN is also unpublished at present, we believe it\u2019s valuable to have these two quite distinct methods in the literature as evidence suggesting intrinsic limitations to the benefit that can be obtained from synthetically generated examples.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the thoughtful evaluation and feedback. Regarding the investigation of different model configurations, in the Appendix (Figure 8) we vary the coefficients of the various costs in the training objective function on the naturally-evaluated, synthetically-trained (NEST) task. We show that each component cost contributes to the model\\u2019s overall performance. Regarding time complexity, predictability minimization incorporates a GAN-like adversarial objective, which makes it strictly inferior to STOC in time complexity and--as we show in the paper--in the quality of synthesized images. The penalty in the STOC leakage filtering loss is proportional to the number of within- and between-class pairs that are drawn from P^+ and P^- in a minibatch.\"}",
"{\"title\": \"A novel method for content and style recombination in open domains\", \"review\": \"In this paper, the authors study an interesting problem called open-ended content-style recombination, i.e., recombining the style of one image with the content of another image. In particular, the authors propose a VAE (variational autoencoder) based method (i.e., Style Transfer onto Open-Ended Content, STOC), which is optimized over a VAE reconstruction loss and/or a leakage filtering (LF) loss. More specifically, there are four variants of STOC, including CC (content classifier), CE (content encoding), PM (predictability minimization, Section 2.1) and LF (leakage filtering, Section 2.2). The main advantage of STOC is its ability to handle novel content from open domains. Experimental results on image synthesis and data set augmentation show the effectiveness of the proposed method in comparison with the state-of-the-art methods. The authors also study the comparative performance of the four variants, i.e., CC, CE, PM and LF.\\n\\nOverall, the paper is well presented.\\n\\nSome comments/suggestions:\\n\\n(i) The authors are suggested to include an analysis of the time complexity of the proposed method (including the four variants).\\n\\n(ii) The authors are suggested to include more results with different configurations such as that in Table 1 in order to make the results more convincing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well-written, but incomplete comparisons to prior work\", \"review\": \"SUMMARY\\nThe paper considers several methods for building generative models that disentangle image content (category label) and style (within-category variation). Experiments on MNIST, Omniglot, and VGG-Faces demonstrate that the proposed methods can learn to generate images combining the style of one image and the content of another. The proposed method is also used as a form of learned data augmentation, where it improves one-shot and low-shot learning on Omniglot.\", \"pros\": [\"The paper is well-written and easy to follow\", \"The proposed methods CC, CE, PM, and LF are all simple and intuitive\", \"Improving low-shot learning via generative models is an interesting and important direction\"], \"cons\": \"- No comparison to prior work on generation results\\n- Limited discussion comparing the proposed methods to other published alternatives\\n- No ablations on Omniglot or VGG-Faces generation\\n- Low-shot results are not very convincing\\n\\nCOMPARISON WITH PRIOR WORK\\nThere have been many methods that propose various forms of conditional image generation in generative models, such as conditional VAEs in [Sohn et al, 2015]; there have also been previous methods such as [Siddharth et al, 2017] which disentangle style and content using the same sort of supervision as in this paper. Given the extensive prior work on generative models I was a bit surprised to see no comparisons of images generated with the proposed method against those generated by previously proposed methods. Without such comparisons it is difficult to judge the significance of the qualitative results in Figures 3, 5, and 6. 
In Figure 3 I also find it difficult to tell whether there are any substantial differences between the four proposed methods.\\n\\nThe proposed predictability minimization is very related to some recent approaches for domain transfer such as [Tzeng et al, 2017]; I would have liked to see a more detailed discussion of how the proposed methods relate to others.\\n\\nOMNIGLOT / VGG-FACES ABLATIONS\\nThe final model includes several components - the KL divergence term from the VAE, two terms from LF, and a WGAN-GP adversarial loss. How much does each of these terms contribute to the quality of the generated results?\\n\\nLOW-SHOT RESULTS\\nI appreciate low-shot learning as a testbed for this sort of disentangled image generation, but unfortunately the experimental results are not very convincing. For one-shot performance on Omniglot, the baseline Histogram Embedding method achieves 0.974 accuracy which improves to 0.975 using STOC. Is such a small improvement significant, or can it be explained by variance in other factors (random initializations, hyperparameters, etc)?\\n\\nFor low-shot learning on Omniglot, the proposed method is outperformed by [Antoniou et al, 2017] at all values of k. More importantly, I\u2019m concerned that the comparison between the two methods is unfair due to the use of different dataset splits, as demonstrated by the drastically different baseline accuracies. Although it\u2019s true that the proposed method achieves a proportionally larger improvement over the baseline compared with [Antoniou et al, 2017], the differences in experimental setup may be too large to draw a conclusion one way or the other about which method is better.\\n\\nOVERALL\\nAlthough the paper is well-written and presents several intuitive methods for content/style decomposition with generative models, it\u2019s hard to tell whether the results are significant due to incomplete comparison with prior work. 
On the generation side I would like to see a comparison especially with [Siddharth et al, 2017]. For low shot learning I think that the proposed method shows some promise, but it is difficult to draw hard conclusions from the experiments. For these reasons I lean slightly toward rejection.\\n\\nMISSING REFERENCES\\nSiddharth et al, \\u201cLearning Disentangled Representations with Semi-Supervised Deep Generative Models\\u201d, NIPS 2017\\n\\nSohn, Lee, and Yan, \\u201cLearning structured output representation using deep conditional generative models\\u201d, NIPS 2015\\n\\nTzeng, Hoffman, Darrell, and Saenko, \\u201cAdversarial Discriminative Domain Adaptation\\u201d, CVPR 2017\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"UPDATE:\\nThanks for your response. As you mentioned, methods like [1] and [2] do perform open-ended recombination. Note that these methods perform not only texture transfer but also color transfer, while the proposed method seems to perform mostly only color transfer. As shown in Figure 6, essentially what the method does is transfer the color of the style image to the content image, sometimes with a little tweak, making the image distorted. One could say that in terms of image style transfer, the proposed method actually underperforms [1] and [2]. \\n\\nHence I agree with R2 that comparison is still necessary for the submission to be more convincing and complete.\\n\\n------------------------------\\n\\nThis paper proposes to use a mechanism of leakage filtering to separate styles and content in the VAE encoding, and consequently enable open-ended content-style recombination. Essentially the model tries to maximize the similarity between images in S^+ and minimize the similarity between those in S^-.\\n\\nI have several questions:\\n\\nOne concern that I have is the relationship/difference between this work and previous work on style transfer, especially universal/zero-shot style transfer as in [1,2]. In the introduction and related work sections, the authors argue that most previous work assumes that content classes in testing are the same as those in training, and that they are not general purpose. Note that various works on style transfer already address this issue, for example in [1, 2]. For those models, content is represented by high-level feature maps in neural networks, and style is represented by the Gram matrix of the feature maps. The trained model is actually universal (invariant to content and styles). Actually these methods use even less supervision than STOC since they do not require labels (e.g., digit labels in MNIST).\\n\\nThis brings me to my second concern on proper baselines. 
Given the fact that previous universal/zero-shot style transfer models focus on similar tasks, it seems necessary to compare STOC to them and see what the advantages of STOC are. Similar experiments can be conducted for the data augmentation tasks.\\n\\nIn Sec. 4, the authors mentioned that a U-Net skip connection is used. Does it affect the effectiveness of the content/style separation, since the LF objective function is mostly based on the encoding z, which is supposed to be \u2018skipped\u2019 in STOC? Will this lead to additional information leakage?\\n\\nIt is not clear how the last term of L_{LF} is computed. Could you provide more details?\\n\\nThe organization and layout of figures could be improved. The title/number for the first section is missing.\\n\\nMissing references:\\n[1] Universal style transfer via feature transforms, 2017\\n[2] ZM-Net: Real-time zero-shot image manipulation network, 2017\\n[3] Structured GAN, 2017\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkeX-3Rqtm | Training Hard-Threshold Networks with Combinatorial Search in a Discrete Target Propagation Setting | [
"Lukas Nabergall",
"Justin Toth",
"Leah Cousins"
] | Learning deep neural networks with hard-threshold activation has recently become an important problem due to the proliferation of resource-constrained computing devices. In order to circumvent the inability to train with backpropagation in the presence of hard-threshold activations, \cite{friesen2017} introduced a discrete target propagation framework for training hard-threshold networks in a layer-by-layer fashion. Rather than using a gradient-based target heuristic, we explore the use of search methods for solving the target setting problem. Building on both traditional combinatorial optimization algorithms and gradient-based techniques, we develop a novel search algorithm, Guided Random Local Search (GRLS). We demonstrate the effectiveness of our algorithm in training small networks on several datasets and evaluate our target-setting algorithm compared to simpler search methods and gradient-based techniques. Our results indicate that combinatorial optimization is a viable method for training hard-threshold networks that may have the potential to eventually surpass gradient-based methods in many settings. | [
"hard-threshold network",
"combinatorial optimization",
"search",
"target propagation"
] | https://openreview.net/pdf?id=rkeX-3Rqtm | https://openreview.net/forum?id=rkeX-3Rqtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lh6DVweV",
"H1eV7W9nT7",
"SJgYmJCF6X",
"rJlwCC6Kp7",
"HylcG0aYaX",
"SkxCpX2yaQ",
"B1eTCTj9n7",
"rylmu9RdnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545189315739,
1542394140400,
1542213408770,
1542213327483,
1542213137786,
1541551046150,
1541221845007,
1541102186994
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1156/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1156/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1156/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1156/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1156/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1156/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1156/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1156/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Good idea, but research not yet ripe. Missing extensive comparison with alternative approaches.\", \"metareview\": \"The paper proposes a novel local combinatorial search algorithm for the discrete target propagation framework of Friesen & Domingos 2018, and shows a few promising empirical results.\\n\\nReviewers found the paper well written and clear, and two of them were enthusiastic about the direction of this research. \\nBut all reviewers agreed that the paper is too preliminary, particularly in its empirical coverage. More extensive experiments are needed to compare with competitive approaches from the literature, for the task of training hard-threshold networks. Experiments would need to evaluate the algorithms on larger models and data more representative of the field, to measure how the approach can scale, and to demonstrate the superiority or advantage of the proposed method.\", \"recommendation\": \"Reject\", \"confidence\": \"5: The area chair is absolutely certain\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"I look forward to the next version of this work.\"}",
"{\"title\": \"Looking forward to the next version\", \"comment\": \"Great, I wish you the best of luck!\"}",
"{\"title\": \"Thank you for the thorough review. We\u2019ve addressed some of your points in the comment body\", \"comment\": \"1. We began with the most basic and general approach to local search for the target setting algorithm, which we present as the \u201cnaive\u201d approach in the paper. We went through many possible improvements to the method and ultimately presented, as GRLS, the best alternative we could develop. In the future we\u2019ll expand on this point in the paper, continue to improve our algorithm (especially to address scalability concerns), and compare with alternative search methods attempted along the way (e.g. a genetic algorithm, and others mentioned by Reviewer #3).\\n\\n2. Perhaps it was unclear, but we did not intend to offer a justification for the straight-through estimator, so much as make it clear where exactly the performance improvements offered by their method originate. In particular, that their method is ultimately just an improved gradient estimator. As you mentioned, this serves mainly to motivate the consideration of search methods, rather than going a different direction, and, for example, further improving the gradient estimation approach (although that may still be of interest to others).\\n\\n3. Agreed\\n\\n4. This is a typo from an earlier iteration of the paper and will be fixed in revision. With respect to the local nature of the search: when the probability of flipping an entry is low, the expected number of entries in a target vector to be flipped is low, and this induces in expectation a local neighborhood around a candidate target vector. In our experiments we set the flip probabilities to achieve an expectation of about 5% of the target vector flipped.\\n\\n6. a) We will make this simplification. b) This is quite a serious typo. Algorithm 1 should read \u201c=\u201d, as dictated by the reasoning in the \u201cSetting the probabilities\u201d section.\\n\\n7. 
We will likely need to address scalability issues of GRLS (as noted by other reviewers) before we can consider how GRLS could overcome this difficulty. With respect to the claim you mentioned, we offered that claim under the supposition that a search-based method such as GRLS would have more difficulty solving the credit assignment problem at lower layers than FTPROP as the number of layers increases, but you are right that more evidence is needed and this claim could be false.\\n\\n8. Missing entries are caused by parameter-experiment combinations that were not tested, but would be included in a revised paper. Bolded entries indicate the best performing parameter set for the given method on the dataset corresponding to the column the entry lies in. \\n\\n9. Indeed we will provide definitions in revisions.\\n\\n10. Algorithm 1 is a proposed alternative to solve the target setting subproblem which occurs at each layer during a single pass of Friesen and Domingos\u2019 target propagation method.\\n\\n11. This was a chart for MNIST. We will make that clear. With respect to providing the chart for CIFAR10, our efforts were severely compute-constrained and it would have taken us about 3-4 weeks to generate a comparable chart for CIFAR10. Surely it is something we can include in the future though.\\n\\n12. Quite possibly. It is outside the scope of this particular paper, but we are actively thinking about other applications.\\n\\n13. Yes, we will address these.\\n\\n14. a) The datasets are likely not separable. The claim was more to indicate that 0 is the best analytical lower bound we have on the loss. Indeed the loss on a non-binarized network would provide a better bound.\\nb) Yes, we certainly hope to do that in the future.\\n\\n15. In the appendix we discuss the additional costs to training time of using GRLS. Perhaps we could afford to specifically measure these during experiments though. 
We will make reference to what is contained in the appendices in the main body.\\n\\n18. The quoted statement is not false --- but yes, that was a mistake and we will delete that reference.\"}",
"{\"title\": \"Thank you for the detailed comments\", \"comment\": \"Thank you for the detailed comments. We will certainly refer to [1] as we brainstorm approaches to address scalability and further improve our method, as advised by both you and Reviewer #1. Thank you also for the survey of literature on approaches different from FTPROP and GRLS for training neural networks with binary activation functions [2]-[10]. We were not aware of these works. Indeed a proper justification for target propagation methods in general (Friesen and Domingos, as well as ours) should compare against the state of the art in the area. Our group is just as surprised as you are that the FTPROP paper did not do this, and we will be trying to find the subset of [2]-[10] (and the broader literature) which represents leading alternative techniques to test against in a future iteration of this paper.\"}",
"{\"title\": \"Good idea but far from a proper publication\", \"review\": \"TargetProp\\n\\nThis paper addresses the problem of training neural networks with the sign activation function. A recent method for training such non-differentiable networks is target propagation: starting at the last layer, a target (-1 or +1) is assigned to each neuron in the layer; then, for each neuron in the layer, a separate optimization problem is solved, where the weights into that neuron are updated to achieve the target value. This procedure is iterated until convergence, as is typical for regular networks. Within the target propagation algorithm, the target assignment problem asks: how do we assign the targets at layer i, given fixed targets and weights at layer i+1? The FTPROP algorithm solves this problem by simply using the sign of the corresponding gradient. Alternatively, this paper attempts to assign targets by solving a combinatorial problem. The authors propose a stochastic local search method which leverages gradient information for initialization and improvement steps, but is essentially combinatorial. Experimentally, the proposed algorithm, GRLS, is sometimes competitive with the original FTPROP, and is substantially better than the pure gradient approximation method that uses the straight-through estimator.\\n\\nOverall, I do like the paper and the general approach. However, I think the technical contribution is thin at the moment, and there is no discussion or comparison with a number of methods from multiple papers. I look forward to discussing my concerns with the authors during the rebuttal period. However, I strongly believe that the authors should spend some time improving the method before submitting to the next major conference. I am confident they will have a strong paper if they do so.\", \"strengths\": [\"Clarity: a well-written paper, easy to read and clear w.r.t. 
the limitations of the proposed method.\", \"Approach: I really like the combinatorial angle on this problem, and strongly believe this is the way forward for discrete neural nets.\"], \"weaknesses\": \"- Algorithm: GRLS, in its current form, is quite basic. The Stochastic Local Search (SLS) literature (e.g. [1]) is quite rich and deep. Your algorithm can be seen as a first try, but it is really far from being a powerful, reliable algorithm for your problem. I do appreciate your analysis of the assignment rule in FTPROP, and how it is a very reasonable one. However, a proper combinatorial method should do better given a sufficient amount of time.\\n- Related work: references [2-10] herein are all relevant to your work at different degrees. Overall, the FTPROP paper does not discuss or compare to any of these, which is really shocking. I urge the authors to implement some or all of these methods, and compare fairly against them. Even if your modified target assignment were to strictly improve over FTPROP, this would only be meaningful if the general target propagation procedure is actually better than [2-10] (or the most relevant subset).\\n- Scalability: I realize that this is a huge challenge, but it is important to address it or at least show potential techniques for speeding up the algorithm. Please refer to classical SLS work [1] or other papers and try to get some guidance for the next iteration of your paper.\\n\\nGood luck!\\n\\n[1] Hoos, Holger H., and Thomas St\\u00fctzle. Stochastic local search: Foundations and applications. 
Elsevier, 2004.\\n[2] Stochastic local search for direct training of threshold networks\\n[3] Training Neural Nets with the Reactive Tabu Search\\n[4] Using random weights to train multilayer networks of hard-limiting units\\n[5] Can threshold networks be trained directly?\\n[6] The geometrical learning of binary neural networks\\n[7] An iterative method for training multilayer networks with threshold functions\\n[8] Backpropagation Learning for Systems with Discrete-Valued Functions\\n[9] Training Multilayer Networks with Discrete Activation Functions\\n[10] A Max-Sum algorithm for training discrete neural networks\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Contribution incremental and limited\", \"review\": \"The paper discusses a method for learning neural networks with hard-threshold activation. The classic perceptron network is a one-layer special case of this. This paper discusses a method called guided random local search for optimizing such hard-threshold networks. The work is based on prior work by Friesen & Domingos (2018) on a discrete target propagation approach that separates the network into a series of Perceptron problems (not Perception problems, as written in the paper).\\nI feel the proposed method is mainly formulated in Equation (3), which makes sense but is not very surprising. The proposed random local search is not very exciting either. Finally, the empirical results presented do not seem to justify the superiority of the proposed method over existing methods. Overall, the paper is too preliminary.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Borderline paper; interesting approach but insufficient contribution to warrant acceptance\", \"review\": \"Summary:\\nThe paper presents a novel combinatorial search algorithm for the discrete target propagation framework developed in Friesen & Domingos (2018). Experiments on small datasets with small models demonstrate some potential for the proposed approach; however, scalability remains a concern.\", \"pros\": [\"I like the goal of the work and think that if the targeted problem were to be solved it would be an interesting contribution to the field.\", \"The proposed search algorithm is reasonable and works OK.\", \"The paper is mostly well written and clear.\", \"The experiments are reasonably thorough.\"], \"cons\": [\"The paper states that it is a feasibility study on search methods for learning hard-threshold networks, however, it only evaluates the feasibility of one combinatorial search method.\", \"It\\u2019s not made clear whether other approaches were also investigated or what the authors learned from their exploration of this approach.\", \"The actual algorithm is not very well explained, despite being the main contribution of the paper.\", \"The datasets and models are small and not necessarily representative of the requirements of the field.\", \"Scalability remains a serious concern with the proposed approach.\", \"It\\u2019s not clear to me that the paper presents a sufficient enough contribution to warrant acceptance.\", \"Overall, I like the direction but do not feel that the paper has contributed enough to warrant acceptance. The authors should use the experiments they\\u2019ve run and also run more experiments in order to fully analyze their method and use this analysis to improve their proposed approach.\"], \"questions_and_comments\": \"1.\\tDid you try alternative local search algorithms or did you just come up with a single approach and evaluate it? 
What did you learn from the experiments and the development of this algorithm that will let you create a better algorithm in the next iteration?\\n\\n2.\\tI think that it is unfair to say that \\u201cit suggests a simpler, independent justification for the performance improvements obtained by their method.\\u201d in reference to the work of Friesen & Domingos (2018), given that the straight-through estimator is not well justified to begin with and their work in fact provides a justification for it. I do agree that it is important to investigate alternative heuristics and approaches within the discrete target propagation framework, however.\\n\\n3.\\tSections 2 and 3 do not clearly define what L_i is and where it comes from. Since these do not normally exist in a deep network they should be clearly defined.\\n\\n4.\\t\\u201cstep 2(b)\\u201d is not well defined in section 3.1.1. I assume that this refers to lines 4-8 of Algorithm 1? The paper should explain this procedure more clearly in the text. Further, I question the locality of this method, as it seems capable of generating any possible target setting as a neighbor, with no guarantee that the generated neighbors are within any particular distance of the uniform random seed candidate. Please clarify this.\\n\\n5.\\tI believe that a negative sign is missing in the equation for T_i in \\u2018Generating a seed candidate\\u2019. For example, in the case where |N| = 1, then T_i = sign(dL/dT_i) would set the targets to attain a higher loss, not lower. Further, for |N|=1, this seems to essentially reduce to the heuristic method of Friesen & Domingos (2018). \\n\\n6.\\tIn the \\u2018Setting the probabilities\\u2019 section:\\n(a) All uses of sign(h) can be rewritten as h (since h \\\\in {-1, +1}), which would be simpler.\\n(b) The paper contradicts itself: it says here \\u2018flip entries only when sign(dL/dh) = sign(h)\\u2019 but Algorithm 1 says \\u2018flip entries only when sign(dL/dh) != sign(h)\\u2019. 
Which is it?\\n(c) What is the value of a_h in the pseudocode? (i.e., how is this computed in the experiments)\\n\\n7.\\tIn the experiments, the paper says that \\u2018[this indicates] that the higher dimensionality of the CIFAR-10 data manifold compared to MNIST may play a much larger role in inhibiting the performance of GRLS.\\u2019 How could GRLS overcome this? Also, I don\\u2019t agree with the claim made in the next sentence \\u2013 there\\u2019s not enough evidence to support this claim as the extra depth of the 4-layer network may also be the contributing factor.\\n\\n8.\\tIn Table 2, why are some numbers missing? The paper should explain what this means in the caption and why it occurs. Same for the bolded numbers.\\n\\n9.\\tThe Loss Weighting, Gradient Guiding, Gradient Seeding, and Criterion Weighting conditions are not clearly defined but need to be to understand the ablation experiments. Please define these properly.\\n\\n10.\\tThe overall structure of the algorithm is not stated. Algorithm 1 shows how to compute the targets for one particular layer but how are the targets for all layers computed? What is the algorithm that uses Algorithm 1 to set the targets and then set the weights? Do you use a recursive approach as in Friesen & Domingos (2018)?\\n\\n11.\\tIn Figure 2, what dataset is this evaluation performed on? It should say in the caption. It looks like this is for MNIST, which is a dataset that GRLS performs well on. What does this figure look like for CIFAR-10? Does increasing the computation for the heuristic improve performance or is it also flat for a harder dataset? This might indicate that the initial targets computed are useful but that the local search is not helping. 
It would be helpful to better understand (via more experiments) why this is and use that information to develop a better heuristic.\\n\\n12.\\tIt would be interesting to see how GRLS performs on other combinatorial search tasks, to see if it is a useful approach beyond this particular problem.\\n\\n13.\\tIn the third paragraph of Section 4.2, it says \\u2018The results are presented in Table 3.\\u2019 I believe this should say Figure 3. Also, the ordering of Figure 3 and Table 3 should be swapped to align with the order they are discussed in the text. Finally, the caption for Table 3 is insufficiently explanatory, as are most other captions; please make these more informative.\\n\\n14.\\tIn Section 4.3:\\n(a), the paper refers to Friesen & Domingos (2018) indicating that zero loss is possible if the dataset is separable. However, what leads you to believe that these datasets are separable? A more accurate comparison would be the loss for a powerful non-binarized baseline network. \\n(b) Further, given the standard error of GRLS, it\\u2019s possible that its loss could be substantially higher than that of FTPROP as well. It would be interesting to investigate the cases where it does much better and the cases where it does much worse to see if these cases are informative for improving the method.\\n\\n15.\\tWhy is there no discussion of training time in the experiments? While it is not surprising that GRLS is significantly slower, it should not be ignored either. The existence of the Appendix should also be mentioned in the main paper with a brief mention of what information can be found in it.\\n\\n16.\\tIn Algorithm 1, line 2 is confusingly written. 
Also, notationally, it\\u2019s a bit odd to use h both as an element and as an index into T.\\n\\n17.\\tThere are a number of capitalization issues in the references.\\n\\n18.\\tThe Appendix refers to itself (\\u201cadditional hyperparameter details can be found in the appendices\\u201d).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
ryeX-nC9YQ | Dimension-Free Bounds for Low-Precision Training | [
"Zheng Li",
"Christopher De Sa"
] | Low-precision training is a promising way of decreasing the time and energy cost of training machine learning models.
Previous work has analyzed low-precision training algorithms, such as low-precision stochastic gradient descent, and derived theoretical bounds on their convergence rates.
These bounds tend to depend on the dimension of the model $d$ in that the number of bits needed to achieve a particular error bound increases as $d$ increases.
This is undesirable because a motivating application for low-precision training is large-scale models, such as deep learning, where $d$ can be huge.
In this paper, we prove dimension-independent bounds for low-precision training algorithms that use fixed-point arithmetic, which lets us better understand what affects the convergence of these algorithms as parameters scale.
Our methods also generalize naturally to let us prove new convergence bounds on low-precision training with other quantization schemes, such as low-precision floating-point computation and logarithmic quantization. | [
"low precision",
"stochastic gradient descent"
] | https://openreview.net/pdf?id=ryeX-nC9YQ | https://openreview.net/forum?id=ryeX-nC9YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1lQKImreN",
"BklWPAQcR7",
"Bkg0g0WqAQ",
"Byx19PntCm",
"H1xqG42YA7",
"ryg6W3EaaX",
"H1gEemfTn7",
"SygLsgMVnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545053819046,
1543286360986,
1543278070185,
1543255943035,
1543255058296,
1542437893257,
1541378796479,
1540788382252
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1155/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1155/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1155/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1155/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1155/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1155/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1155/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1155/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"As the reviewers pointed out, the strength of the paper mostly comes from the analysis of the non-linear quantization which depends on the double log of the Lipschitz constants and other parameters. The AC and reviewers agree with the dimension-independent nature of the bounds, but also note that a dimension-independent bound may not necessarily be significantly stronger than the dimension-dependent bounds, as the metric of measuring the difficulty of the problem also matters. The paper also seems to lack results that show the empirical benefit of the non-linear quantization. In considering the author response and reviewer comments, the AC decided that this comparison was indeed important for understanding the contribution in this work, and it is difficult to assess the scope of the contribution without such a comparison.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Revision updated\", \"comment\": \"Dear readers and reviewers, we have uploaded the revised version of our paper, and we made the following changes:\\n\\nWe fixed some typos and font problems.\\nWe removed some confusing mentions of SVRG [1] and HALP [1] in the main body of the paper, which should have been moved to the appendix before this revision.\\nWe had included an extension of our work to SVRG and HALP in the original appendix, but after feedback from the reviewers it seems that this part of our results caused confusion and did not add much to our claims. We\\u2019ve cut the extension to SVRG and HALP from the appendix so that our work is now focused entirely on the analysis of LP-SGD.\\n\\n\\n[1] Christopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R Aberger, Kunle Olukotun, and Christopher R\\u00e9. High-accuracy low-precision training. arXiv preprint arXiv:1803.03383, 2018.\"}",
"{\"title\": \"Response to reviewer 4\", \"comment\": \"We thank you for your positive feedback and pertinent questions. Your review summarizes the key points of our paper, including the study of LP-SGD bounds in terms of L1 gradient Lipschitz continuity, nonlinear quantization schemes, and the assignment of exponent bits in floating-point quantization.\\n\\nRegarding the cons you point out, we agree with you that our paper is based on a loss-unaware analysis, which may be limited. Our theory in this work gives us an idea of how dimension affects the performance of quantized SGD, by providing conditions under which its performance does not vary with dimension. Nevertheless, it is possible that a more loss-aware analysis could produce a tighter result (or even allow us to extend to a non-convex setting). Analyzing loss-aware explanations for the success of low-precision training in theory and in practice is a future direction of our work.\\n\\nRegarding the technical problem, when using logarithmic quantization, our theory for the convergence bound (Theorem 2) holds when we set \\\\zeta to be less than 1 / \\\\kappa, which is \\\\mu / L. The big-O expression for the number of bits presented after Theorem 2 is an asymptotic analysis that assumes sufficiently small values of epsilon, for which the chosen value of \\\\zeta satisfies \\\\zeta < 1 / \\\\kappa. Even without this simplifying asymptotic analysis, all the parameters that depend on \\\\ell_1 are still within the double-logarithm. We revised the proof in B.3 to make this clearer and more explicit.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank you for your careful reading and detailed review. Your review brings out the important parts of our work including the advantage of dimension-independence, the extension to nonlinear quantization, and the empirical validation.\\n\\nIn our work, we use a different tight bound to show that the performance of low-precision training is not dependent on dimensionality inherently, but instead can be bound in terms of parameters such as \\\\ell_1 and \\\\sigma_1, which, in some cases, can be dependent on dimensionality but are not always. Thus under conditions where these parameters are fixed, we get a dimension-free result.\\n\\nWe have two responses to your concerns.\\n\\nFirst, as shown in Fig 1(a), the standard dimension-dependent bound is in some sense tight, so we should expect to see classes of problems for which the performance depends strongly on $d$. For these classes of problems, our parameters L_1 and \\\\sigma_1 will also increase strongly with $d$, as you point out. However, there are classes of problems for which this does not happen, and for the class we study in Figure 2(a), the performance does not depend on $d$ either, which is what our theory predicts. Our theory provides dimension-independent rates for low-precision SGD only when our assumptions (our bounds on the \\\\ell_1 parameters) hold, not for all optimization problems in general.\\n\\nSecond, even in the worst-case scenario when the parameters L_1 and \\\\sigma_1 do depend strongly on dimension, our results in Table 1 show that, by using non-linear quantization, we can actually put those terms inside double \\\\log and get a O(\\\\log\\\\log d) upper bound when it comes to the number of bits required. 
Although it cannot be said to be fully dimension-independent, this is substantially better than the O(\\\\log d) bound from previous work on linear quantization.\\n\\nRegarding the minor issues: LP-SVRG and HALP are two algorithms for low-precision training proposed in [De Sa et al., 2018], and we had extended our result to the analysis of these two algorithms, but we moved this part to the appendix due to the space constraint, which caused this confusion. As our main contributions, as pointed out by the reviewers, do not depend on this analysis, we have decided to cut it from the appendix in our revised manuscript. This should help avoid confusion by allowing the paper to focus solely on the main-body claims about low-precision SGD. (And \\\\tilde{w} in Theorem 2 is actually a typo, which should be \\\\bar w_T, same as what we wrote in Theorem 1 as well as the theorems we added in the appendix.)\\n\\n\\nChristopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R Aberger, Kunle Olukotun, and Christopher R\\u00e9. High-accuracy low-precision training. arXiv preprint arXiv:1803.03383, 2018.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank you for a particularly detailed review and constructive feedback. Your review summarizes the highlights of our work and helps us understand what parts of our work are not explained well enough and may cause confusion.\\n\\nIn our work, we use a different tight bound to show the dimension-independence of low-precision training under particular conditions, which are identified as Assumptions 1-4. While Assumptions 1 and 3 are standard assumptions on Lipschitz continuity and strong convexity, we added Assumptions 2 and 4 to achieve a stronger result. Assumption 2 is analogous to Assumption 1 but uses the \\\\ell_1 norm instead of \\\\ell_2. This is motivated by both theoretical analysis from Lemma 4, i.e. the bound we showed in Fig 1(a), and empirical results where we observed the dimension-independence. And in Assumption 4, we bound the gradients at the optimum point both in \\\\ell_1 and \\\\ell_2 norm. The intuition for Assumption 4 is that, since the average gradient \\\\nabla f = 1 / n \\\\sum_i \\\\nabla f_i is 0 at the optimum, the gradient samples \\\\nabla f_i should not be too large and thus can be bounded by some constants \\\\sigma and \\\\sigma_1. This sort of assumption is actually necessary for the analysis of low-precision training, since otherwise we have no way to bound the variance of the gradient samples. For example, in a previous work on quantized nets [Li et al., 2017], they assumed a bound on the global gradient $G$, which is a stronger assumption than the \\\\ell_2 part of our Assumption 4 since ours only requires the bound at the optimum point. These assumptions, though somewhat nonstandard due to their \\\\ell_1 dependence, are natural to consider from a theoretical perspective, and are commonly observed in experiments, such as networks with sparse entries.\\n\\nIn response to the smaller comments/questions, in order:\\n\\n1.
Our main contribution is identifying the conditions under which we can provide a convergence bound for low-precision training in which the dimension $d$ does not appear. We also introduced an analysis of non-linear quantization which strongly weakens the effect of the dimension term (put in a double \\\\log) even without assuming any extra \\\\ell_1 bounds. We used \\u201cdimension-independent\\u201d in this sense.\\n\\n2. In some empirical settings, such as those with sparse entries, our assumptions do hold with good constants. Our assumptions do not hold for training neural networks, since that is a non-convex problem.\\n\\n3. It seems very likely that we could prove dimension-independent bounds for methods using quantized gradients under the same assumptions. Basically the same analysis should work. \\n\\n4 & 5. We have fixed the formatting errors, and we thank the reviewer for these detailed comments.\\n\\n\\nHao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein. Training quantized nets: A deeper understanding. In Advances in Neural Information Processing Systems, pages 5813\\u20135823, 2017.\"}",
"{\"title\": \"An in-depth study of quantization errors and quantized convex optimization in low-precision training\", \"review\": \"This paper provides an in-depth study of the quantization error in low-precision training and gives consequent bounds on the low-precision SGD (LP-SGD) algorithm for convex problems under various generic quantization schemes.\\n\\n[pros]\\nThis paper provides a lot of novel insights in low-precision training; for example, a convergence bound in terms of the L1 gradient Lipschitzness can potentially be better than its L2 counterpart (which is experimentally verified on specially designed problems). \\n\\nI also liked the discussions about non-linear quantization, how they can give a convergence bound, and even how one could optimally choose the quantization parameters, or the number of {exponent, significance} bits in floating-point style quantization, in order to minimize the convergence bound.\\n\\nThe restriction to convex problems is fine for me, because otherwise essentially there are not a lot of interesting things to say (for quantized problems it does not make sense to talk about \\u201cstationary points\\u201d as points are isolated.)\\n\\nThis paper is very well-written and I enjoyed reading it.
The authors are very precise and unpretentious about their contributions and have insightful discussions throughout the entire paper.\\n\\n[cons]\\nMy main concern is that of the significance: while it is certainly of interest to minimize the quantization error with a given number of bits as the budget (and that\\u2019s very important for the deployment side), it is unclear if such a *loss-unaware* theory really helps explain the success of low-precision training in practice.\\n\\nAn alternative belief is that the success comes in a *loss-aware* fashion, that is, efficient feature extraction and supervised learning in general can be achieved by low-precision models, but the good quantization scheme comes in a way that depends on the particular problem, which varies case by case. Admittedly, this is a vaguer statement which may be harder to analyze or empirically study, but it sounds to me more reasonable for explaining successful low-precision training than the fact that we have certain tight bounds for quantized convex optimization. \\n\\n[a technical question]\\nIn the discussions following Theorem 2, the authors claim that the quantization parameters can be optimized to push the dependence on \\\\sigma_1 into a log term -- this sounds a bit magical to me, because there is the assumption that \\\\zeta < 1/\\\\kappa, which prevents \\\\zeta from being set too large (and thus restricts the \\u201cacceleration\\u201d of strides from being too large). I imagine the optimal bound only holds when the optimal choice of \\\\zeta is indeed below 1/\\\\kappa?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A solid contribution to understanding quantization for SGD\", \"review\": \"The paper considers the problem of low precision stochastic gradient descent. Specifically, they study updates of the form x_{t + 1} = Q (x_t - alpha * g_t), where g_t is a stochastic gradient, and Q is a quantization function. The goal is to produce quantization functions that simultaneously increase the convergence rate as little as possible, while also requiring few bits to represent. This is motivated by the desire to perform SGD on low precision machines.\\n\\nThe paper shows that under a set of somewhat nonstandard assumptions, previously studied quantization functions as well as other low precision training algorithms are able to match the performance of non-quantized SGD, specifically, losing no additional dimension factors. Previous papers, to the best of my knowledge, did not prove such bounds, except under strong sparsity conditions on the gradients. I did not check their proofs line-by-line however they seem correct at a high level.\\n\\nI think the main discussion about the paper should be about the assumptions made in the analysis. As the authors point out, besides the standard smoothness and variance conditions on the functions, some additional assumptions about the function must be made for such dimension independent bounds to hold. Therefore I believe the main contribution of this paper is to identify a set of conditions under which these sorts of bounds can be proven. \\n\\nSpecifically, I wish to highlight Assumption 2, namely, that the ell_1 smoothness of the gradients can be controlled by the ell_2 difference between the points, and Assumption 4, which states that each individual function (not just the overall average), has gradients with bounded ell_2 and ell_1 norm at the optimal point. I believe that Assumption 2 is a natural condition to consider, although it does already pose some limitations on the applicability of the analysis. 
I am less sold on Assumption 4; it is unclear how natural this bound is, or how necessary it is to the analysis. \\n\\nThe main pros of these assumptions are that they are quite natural conditions from a theoretical perspective (at least, Assumption 2 is). For instance, as the authors point out, this gives very good results for sparse updates. Given these assumptions, I don\\u2019t think it\\u2019s surprising that such bounds can be proven, although it appears somewhat nontrivial. The main downside is that these assumptions are somewhat limiting, and don\\u2019t seem to be able to explain why quantization works well for neural network training. If I understand Figure 4b correctly, the bound is quite loose for even logistic regression on MNIST. However, despite this, I think formalizing these assumptions is a solid contribution.\\n\\nThe paper is generally well-written (at least the first 8 pages) but the supplementary material has various minor issues.\\n\\nSmaller comments / questions:\\n\\n- While I understand it is somewhat standard in optimization, I find the term \\u201cdimension-independent\\u201c here somewhat misleading, as in many cases in practice (for instance, vanilla SGD on deep nets), the parameters L and kappa (not to mention L_1 and kappa_1) will grow with the dimension.\\n\\n- Do these assumptions hold with good constants for training neural networks? I would be somewhat surprised if they did.\\n\\n- Can one get dimension independent bounds for quantized gradients under these assumptions?\\n\\n- The proofs after page 22 are all italicized.\\n\\n- The brackets around expectations are too small in comparison to the rest of the expressions.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Misleading title\", \"review\": \"This paper discusses conditions under which the convergence of training models with low-precision weights does not rely on model dimension. Extensions to two kinds of non-linear quantization methods are also provided. The dimension-free bound of this paper is achieved through a tighter bound on the variance of the quantized gradients. Experiments are performed on synthetic sparse data and the small-scale image classification dataset MNIST.\\n\\nThe paper is generally well-written and clearly structured. However, the bound for linear quantization is not fundamentally superior to previous bounds, as the \\\"dimension-free\\\" bound in this paper is achieved by replacing the l2 norm used in other papers' bounds with the l1 norm. Since the l1 norm is related to the l2 norm as \\\\|v\\\\|_1 <= \\\\sqrt{d}\\\\|v\\\\|_2, the bound can still be dependent on dimension; thus the title may be misleading. Moreover, Assumptions 1 and 2 are much stronger than in previous works, making the universality of the theory limited. The analysis on non-linear quantization is interesting, which can really theoretically improve the bound. It would be nice to see some more empirical results on substantial networks and larger datasets which can better illustrate the efficacy of the proposed non-linear quantization.\\n\\nSome minor issues:\\n1. What is HALP in the second contribution before Section 2?\\n2. What is LP-SVRG in Theorem 1?\\n3. What is \\\\tilde{w} in Theorem 2?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1eX-nA5KX | VHEGAN: Variational Hetero-Encoder Randomized GAN for Zero-Shot Learning | [
"Hao Zhang",
"Bo Chen",
"Long Tian",
"Zhengjue Wang",
"Mingyuan Zhou"
] | To extract and relate visual and linguistic concepts from images and textual descriptions for text-based zero-shot learning (ZSL), we develop variational hetero-encoder (VHE) that decodes text via a deep probabilistic topic model, the variational posterior of whose local latent variables is encoded from an image via a Weibull distribution based inference network. To further improve VHE and add an image generator, we propose VHE randomized generative adversarial net (VHEGAN) that exploits the synergy between VHE and GAN through their shared latent space. After training with a hybrid stochastic-gradient MCMC/variational inference/stochastic gradient descent inference algorithm, VHEGAN can be used in a variety of settings, such as text generation/retrieval conditioning on an image, image generation/retrieval conditioning on a document/image, and generation of text-image pairs. The efficacy of VHEGAN is demonstrated quantitatively with experiments on both conventional and generalized ZSL tasks, and qualitatively on (conditional) image and/or text generation/retrieval. | [
"Deep generative models",
"deep topic modeling",
"generative adversarial learning",
"variational encoder",
"zero-shot learning"
] | https://openreview.net/pdf?id=S1eX-nA5KX | https://openreview.net/forum?id=S1eX-nA5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rylNUeHakV",
"HkxBMZfiAX",
"BkgfOBcOAQ",
"HkgzAN5u0m",
"SJed3mcuC7",
"Hyg7NsYJaX",
"rkeaSSZc3X",
"Syl6BrmO2Q",
"S1lGVauM9X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1544536140093,
1543344397341,
1543181674268,
1543181513752,
1543181231884,
1541540651206,
1541178692990,
1541055812818,
1538587945790
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1154/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1154/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1154/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1154/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1154/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1154/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1154/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1154/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1154/Authors"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper received borderline ratings due to concerns regarding novelty and experimental results/settings (e.g. zero shot learning). On my side, I believe that the proposed method would need more evaluations on other benchmarks (e.g., SUN, AWA1 and AWA2) for both ZSL and GZSL settings to make the results more convincing. Overall, none of the reviewers championed this paper and I would recommend weak rejection.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"Clarifications of our main contributions\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your constructive feedback. We have added more discussions in our revision to explain why we choose certain components to construct the proposed VHEGAN.\\n\\nAs also noted in your reviews, VHEGAN is not limited to the ZSL application. We choose to focus on ZSL mainly because 1) it is the original motivation for us to develop VHEGAN, and 2) it allows us to provide not only interpretable visualization of the inferred latent space, but also objective quantitative comparison to verify its effectiveness in a concrete setting. Having excellent performance in the challenging text-based ZSL tasks, we believe VHEGAN can also be generalized to many other machine learning tasks, and we are working on these extensions. \\n\\nWhile there are several concerns/suggestions on the choices of the modeling components of VHEGAN, we'd like to emphasize that the VHEGAN framework and the idea of randomizing the noise of the GAN generator with another deep generative model are our key contributions. \\n\\nThe VHEGAN framework is very flexible. As also noted in your reviews, one may easily substitute its current VHE decoder (Poisson GBN), VHE encoder (Weibull upward-downward variational autoencoder), DCGAN discriminator, and DCGAN generator with different modules, either for improved ZSL or image/text generation performance, or for other applications different from the ZSL task focused on in this paper. We are incorporating your suggestions to run more experiments (e.g., replacing DCGAN with StackGAN/AttnGAN), and will report updated experimental results in our next revision if the paper gets accepted.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Q1: It is doubtful that this corresponds to the term zero-shot learning; dealing with the case that the unseen class and the seen class are notably different from each other\", \"a1\": \"Most existing ZSL algorithms try to learn a mapping from images to texts or attributes on seen classes, and fix the mapping for the unseen class to obtain the textual descriptions, which implies the assumptions that the mapping extracts similar features in both seen and unseen images, and that the mapping approximates similar features in both the seen and unseen textual descriptions. In other words, they treat ZSL as the case that the seen and unseen classes are related in some feature space to some extent, though they have not specified the meaning of the shared features explicitly. In our work, we give the ZSL task an interpretable shared latent space, connecting the relationships between the image structures and the key words, whose effectiveness is validated on benchmark datasets following the widely used experimental protocols for text-based ZSL. Just as you mentioned, the less the relationship between the seen and unseen classes, the more challenging the task is. Compared with the CUB2011-easy dataset, the CUB2011-hard and Flower datasets are more challenging, where the proposed VHEGAN remains the best.\", \"q2\": \"The problem is only valid when the unseen class distribution is very similar to the given classes. For example, the text description of unseen classes should be well represented to the topics from seen classes.\", \"a2\": \"Although the seen and unseen classes are related in some feature space, which is hard to define, as we mentioned in A1, the entire class distributions are not \\u2018very\\u2019 similar, since they are different classes after all. 
Certainly, as discussed before, this type of similar image-text relationships is a basic assumption in ZSL and we follow the widely used experimental protocols. What\\u2019s more, besides representing every specific text description of each class, the more important effect of explicitly learned topics is to well express the shared information between the seen and unseen classes, making the knowledge efficiently transfer from the seen to unseen classes.\", \"q3\": \"Similarly, GAN learns images from the seen classes, and by nature, GAN would not generate the proper images of the unseen class if the image distribution of the unseen class is different to the already seen class. In the paper, the classes are very similar to each other (birds, flowers) and that would be the reason GAN worked in this model.\", \"a3\": \"According to the available ZSL protocols, the seen and unseen classes are related more or less. Besides, compared with the original GAN with an uninformative noise as the source distribution, in our model, the source distribution is related to the image distribution in semantic, which makes it possible to generate different images distributions from that of the seen classes.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for his/her comments and suggestions.\", \"q1\": \"The text representation in the paper is simply bag-of-words which limits the application to some extent. In a broader context, image captioning using generative model seems quite relevant.\", \"a1\": \"Image captioning often aims at generating a sequential text describing a certain image, which, however, might not be suitable for the ZSL classification task. First, a text representation is used to define a class of images (not just a single image). In such circumstance, compared with the sequential information, the key words are more general and effective to define a certain image class. Besides, the text representation for a class of images varies from a few words (e.g., a sentence or several tags) to hundreds of sentences (e.g., in encyclopedia), whose key words can be captured more robustly by PGBN than sequential models. The experimental results also illustrate that VHEGAN is able to capture semantically important words from both a long document and a short sentence, as illustrated in Figs. 2-6 in the revised manuscript.\", \"q2\": \"In Table 1, is it possible to report the top-5 accuracy on CUB-easy and top-1 accuracy on Oxford-Flower dataset? Otherwise, it is not very convincing that proposed approach is better than the state-of-the-art approach GAZSL. Reviewer feels that the zero-shot classification result is weak. In Table 1 and Table 2, it seems that GAZSL (Zhu et al. 2018) outperforms the proposed approach.\", \"a2\": \"We only report top-1 accuracy on CUB-easy for GAZSL in Table 1 since the authors in Zhu et al. (2018) only list top-1 accuracy. With the code provided for Zhu et. 
(2018), we run it by ourselves and achieve 65.24% accuracy, a little lower than VHEGAN-layer3, as shown in the revised manuscript.\\n\\nFrom Table 1, we can see that GAZSL performs better only on CUB-easy and worse on both CUB-hard and Flower than VHEGAN does. Besides, as discussed in the manuscript, CUB-hard and Flower are more challenging datasets, which illustrates the generalization of VHEGAN. What\\u2019s more, lower error bars are achieved by our proposed models, demonstrating the robustness and effectiveness of the proposed posterior representation at multiple stochastic layers. \\n\\nIn addition, although GAZSL exhibits higher accuracy on some metrics, it relies on an extra visual part detection that needs additional resources and elaborate tuning for different classes. As a result, as the original GAZSL only designs part detector for Birds, it is not suitable to perform ZSL on Flower. This problem may limit GAZSL in practical applications which contain many different types of objects.\", \"q3\": \"the text-to-image generation results look reasonably good. But the resolution and quality of generated images are far from state-of-the-art. One suggestion is to train the VHE model with an improved image generator.\", \"a3\": \"Thank you for your suggestion. The resolution and quality of the generated images are affected by the type of GAN used in VHEGAN. Compared with generating high-quality images, for the ZSL task the paper is focused on, extracting image features to help the PGBN to learn a better latent space for the classification task is more important. Thus, we propose a framework to combine VAE-like model with GAN through a shared latent space, in which many types of GAN can be selected. According to the experiments, using DCGAN as image generator can realize a satisfactory classification results, which validates the effectiveness of our combination of PGBN and GAN for ZSL task. 
We leave the use of more sophisticated GAN in VHEGAN for future research.\", \"q4\": \"One suggestion is to train the VHE model with an improved image generator.\", \"stackgan\": \"Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, Zhang et al. In CVPR 2017.\", \"attngan\": \"Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks, Xu et al. In CVPR 2018.\\nAlso, reviewer would expect to see an improved image generator can lead to a better ZSL performance.\", \"a4\": \"Thank you for suggesting these two relevant papers. Due to the time constraint, we were not able to update our experimental results by replacing DCGAN with more sophisticated GANs. If the paper gets accepted, we will cite these two papers and try to use StackGAN and AttnGAN to replace DCGAN and see whether we can get improved results.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for his/her comments and suggestions.\\n\\nFor the text-based ZSL task, the key to success is to explore the relationship between all the images in a class (not a single image) and its class-specific textual description. For this purpose, a Variational Hetero-Encoder is proposed for this task, exhibiting good performance with a shared latent space extracted from these two modes. Though PGBN and GAN have been used to represent text and image, respectively, they have not been successfully combined to find the relationship between these two modes for the ZSL task. Compared with PGBN, an encoder network is used to perform end-to-end inference, serving as the source distribution for GAN, which further reinforces the effect of text on image generation.\", \"using_pgbn_instead_of_recurrent_vae_and_using_gan_instead_of_vae_are_due_to_the_following_reasons\": \"First, it has been observed that the fully-connected VAE is not expressive enough for 64*64 RGB images and the deconvolutional VAE is hard to converge, which can be solved by GAN. \\n\\nSecond, our task focuses on ZSL classification. Although a sequential description could be excellent at defining a specific individual image, the key words are more effective for defining a class of images (not a specific image). In addition, recurrent VAE often fails for long sequential texts, e.g., the description of an image class in an encyclopedia often consists of thousands of words. Therefore, we extract bag-of-words features and use PGBN for textual generation, which is more suitable for the ZSL task. Indeed, we have clearly identified semantic connections between the images and some key words in the class-specific textual descriptions, which have been effectively captured by our model, as illustrated in Figs. 2-6 in the revised manuscript. 
Following your suggestion, we have added more discussions and illustrations (highlighted with italic text style).\\n\\nThank you for your suggestion that we can position this work as a text/image embedding/generation work, and then use ZSL as one of the applications. One reason we are focusing on ZSL in this paper is because ZSL is an application that we can provide rigorous quantitative comparison with previous work on this task. We are extending the proposed work to more applications and will report our findings in our future work. \\n\\nAs for the writing, in particular the first paragraph, we will make careful changes if the paper gets accepted.\"}",
"{\"title\": \"Extensive experiments but limited novelty\", \"review\": \"This paper developed a generative model to perform simultaneous embedding/generation of images/texts, with application to zero-shot learning. The experiments are extensive.\\n\\nThe novelty of this work is lacking.\\nThe proposed method consists of a bag of existing models proposed by previous works.\\nBut why a certain model is used is not justified or explained.\\nFor example, for image generation, why use GAN instead of VAE?\\nFor text encoding, why use PGBN instead of recurrent VAE?\\n\\nThe method seems detached from the problem of ZSL.\\nThroughout the paper, the authors mostly talk about how to perform joint embedding of texts and images. They give ZSL a touch, but as a side thing.\\nI would suggest the authors position this work as a text/image embedding/generation work, then use ZSL as an application.\\n\\nThe writing needs to be significantly improved. In the first paragraph describing the problem of ZSL, the authors end up talking about the evaluation metric of ZSL.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"[Review] VHEGAN: Variational Hetero-Encoder Randomized GAN for Zero-Short Learning\", \"review\": [\"[Paper Summary]\", \"This work suggests a new model incorporating deep topic model (text decoder), VHE (image encoder), and GAN. The topic model and the VHE shares the topic parameters, and the GAN generate an image regarding the topic. Then, for ZSL, the image is encoded to corresponding topic parameters, and the parameter can tell which text description (unseen) is matched with the highest probability. GAN model is used to generate an image given the topic distribution. During the training of the GAN, the VHE and topic model is jointly trained and can enhance the ZSL performance marginally.\", \"[pros]\", \"This work successfully incorporated the topic model and image encoding/decoding. All the individual parts are already given, but I think incorporating them in terms of a unified probabilistic model is also meaningful for this field.\", \"This work shows superior performance on the image to text ZSL problem.\", \"This work mapped the text to image, image to text mapping in a generative manner.\", \"[cons]\", \"The problem is only valid when the unseen class distribution is very similar to the given classes. For example, the text description of unseen classes should be well represented to the topics from seen classes.\", \"It is doubtful that this corresponds to the term zero-shot learning; dealing with the case that the unseen class and the seen class are notably different from each other.\", \"Similarly, GAN learns images from the seen classes, and by nature, GAN would not generate the proper images of the unseen class if the image distribution of the unseen class is different to the already seen class. 
In the paper, the classes are very similar to each other (birds, flowers) and that would be the reason GAN worked in this model.\", \"(minor) The likelihood of the text (image) given topic should be provided and compared to the existing models.\", \"[Summary]\", \"The reviewer is personally interested in the proposal of the work, but concern that ZSL is difficult to be the main target of the paper because the model can only deal with the classes with (very) similar semantics, and this is the main reason for the rating. The testing with more diverse class should be given, or solid explanation of the mentioned problem would be required.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting paper, borderline results.\", \"review\": \"Paper Summary: This paper studies the zero-shot learning problem with deep generative models. More specifically, it proposed a hybrid framework that combines VAEs (more precisely, the variational hetero-encoder or VHE) and GANs all together. The entire model is composed of an image encoder (Weibull upward-downward variational encoder), a text decoder (Poisson Gamma belief network), and an image generator (generative adversarial network). Once learned, the generative models can be directly used for zero-shot classification and various image generation applications. In the experiments, two benchmark datasets CUB and Oxford-Flowers are used.\\n\\n==\\nNovelty/Significance:\\nZero-shot learning is a challenging task and he main motivation of the paper (using generative model) is interesting. The text representation in the paper is simply bag-of-words which limits the application to some extent. In a broader context, image captioning using generative model seems quite relevant.\\n\\nDiverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space, Wang et al. In NIPS 2017.\\n\\n==\", \"quality\": \"Overall, reviewer feels this is a very interesting work. However, the results from the paper is quite mixed. It is not yet convincing whether the proposed approach is the state-of-the-art in zero-shot learning or text-to-image generation. \\n\\nFirst, this paper demonstrates the power of generative models in text-to-image generation and other applications. However, reviewer feels that the zero-shot classification result is weak. In Table 1 and Table 2, it seems that GAZSL (Zhu et al. 2018) outperforms the proposed approach.\", \"q1\": \"In Table 2, is it possible to report the top-5 accuracy on CUB-easy and top-1 accuracy on Oxford-Flower dataset? 
Otherwise, it is not very convincing that proposed approach is better than the state-of-the-art approach GAZSL.\\n\\nSecond, the text-to-image generation results look reasonably good. But the resolution and quality of generated images are far from state-of-the-art. One suggestion is to train the VHE model with an improved image generator.\", \"stackgan\": \"Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, Zhang et al. In CVPR 2017.\", \"attngan\": \"Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks, Xu et al. In CVPR 2018.\\n\\nAlso, reviewer would expect to see an improved image generator can lead to a better ZSL performance.\", \"typo\": \"In the title: Zero-Short \\u2192 Zero-Shot.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Typo correction\", \"comment\": \"Our submission has a typo that will be corrected: \\\"zero-short learning\\\" shall be changed to \\\"zero-shot learning\\\"\"}"
]
} |
|
rkxQ-nA9FX | Theoretical Analysis of Auto Rate-Tuning by Batch Normalization | [
"Sanjeev Arora",
"Zhiyuan Li",
"Kaifeng Lyu"
] | Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization. While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking. Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates. It is shown that even if we fix the learning rate of scale-invariant parameters (e.g., weights of each layer with BN) to a constant (say, 0.3), gradient descent still approaches a stationary point (i.e., a solution where gradient is zero) in the rate of T^{−1/2} in T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates. A similar result with convergence rate T^{−1/4} is also shown for stochastic gradient descent. | [
"batch normalization",
"scale invariance",
"learning rate",
"stationary point"
] | https://openreview.net/pdf?id=rkxQ-nA9FX | https://openreview.net/forum?id=rkxQ-nA9FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkgQZfs0J4",
"ByeehnIq0Q",
"HkxDvh89A7",
"SJgUKj8cC7",
"rJgNr6BHRQ",
"SygQ2bBxR7",
"HkgOWxN0T7",
"B1eSf1V0aX",
"H1xdKxiDp7",
"rylog8i62m",
"S1gZOpsFnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544626683494,
1543298216079,
1543298142724,
1543297918195,
1542966588475,
1542635947055,
1542500351692,
1542500108735,
1542070399985,
1541416434721,
1541156201401
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1153/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1153/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1153/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1153/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1153/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1153/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1153/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1153/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1153/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1153/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1153/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper conducted theoretical analysis of the effect of batch normalisation to auto rate-tuning. It provides an explanation for the empirical success of BN. The assumptions for the analysis is also closer to the common practice of batch normalization compared to a related work of Wu et al. 2018.\\n\\nOne of the concerns raised by the reviewer is that the analysis does not immediately apply to practical uses of BN, but the authors already discussed how to fill the gap with a slight change of the activation function. Another concern is about the lack of empirical evaluation of the theory, and the authors provide additional experiments in the revision. R1 also points out a few weaknesses in the theoretical analysis, which I think would help improve the paper further if the authors could clarify and provide discussion in their revision.\\n\\nOverall, it is a good paper that will help improve our theoretical understanding about the power tool of batch normalization.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good theoretical contribution to understanding batch normalization.\"}",
"{\"title\": \"Thanks again for your thoughtful review! Experiments are added.\", \"comment\": \"Thanks again for your thoughtful review.\\n\\nTheory ---almost by definition---may not lead to immediate practical applications. Sometimes the goal is better understanding. That has proved difficult for BN, as described in the introduction. \\n\\nPlease also note that we are not proposing some new algorithm which could achieve the same test error as existing methods with less tuning, but are trying to understand why BN helps optimization in the training process. \\n\\nWe've now uploaded a new revision with additional experiments, which exhibits the advantage of auto rate-tuning led by BN in training.\"}",
"{\"title\": \"Thanks for your appreciation! Experiments are added.\", \"comment\": \"Thanks for your valuable review! We've added an experiment section in the new revision, showing how BN helps convergence in the training process.\"}",
"{\"title\": \"Experiment results are added in the new revision\", \"comment\": \"We thank the reviewers for their valuable comments. We uploaded a new revision with additional experiments showing the advantage of the auto rate-tuning behavior of BN in training.\", \"two_settings_were_studied\": \"1. Training VGG with BN on cifar10, using standard SGD (without momentum, learning rate decay, weight decay). \\n2. Training VGG with BN on cifar10, using Projected SGD: at each iteration, the algorithm first takes a gradient update and then projects each scale-invariant parameter to the sphere with radius equal to its 2-norm before this iteration, i.e., rescales each scale-invariant parameter so that it maintains its norm during training. \\n\\nIn both settings, the learning rate for scale-variant parameters is 0.1, while rates for scale-invariant parameters vary from 0.01 to 100, a very large range. The plots show that in setting 1 the training loss of SGD always gets very small, while in setting 2 the training loss of PSGD remains large for lr > 1. \\n\\nThe only difference between SGD and PSGD is that the implicit rate-tuning behavior on scale-invariant parameters is blocked because of the fixed norm of the scale-invariant parameters. So we can conclude that the auto rate-tuning phenomenon does happen here and that it helps convergence in training when the learning rate is large.\"}",
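The projected-SGD step described in setting 2 above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the `psgd_step` name and the dict-of-arrays parameter layout are my own assumptions:

```python
import numpy as np

def psgd_step(params, grads, lr):
    """One Projected SGD step (setting 2): take a plain gradient update,
    then rescale each scale-invariant parameter back to the 2-norm it
    had before this iteration, which blocks the auto rate-tuning."""
    new_params = {}
    for name, w in params.items():
        old_norm = np.linalg.norm(w)      # norm before the update
        w_new = w - lr * grads[name]      # ordinary gradient step
        # project onto the sphere of radius ||w||: the norm is fixed,
        # so the effective learning rate can no longer self-adjust
        new_params[name] = w_new * (old_norm / np.linalg.norm(w_new))
    return new_params
```

Setting 1 (standard SGD) would be the same loop without the final rescaling line.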
"{\"title\": \"Got it\", \"comment\": \"Thanks.\"}",
"{\"title\": \"Some Clarification\", \"comment\": \"(i)-(ii)\\nMy point was that BN has been never experimentally studied in the asymptotic regime where the results of the paper apply. The shown auto-tuning rate is an interesting property, but there is no evidence that it is relevant to the experimental successes of BN that are mentioned.\\n\\n(iii) Thanks for clarification. The paper of Wu et al. 2018 claims in particular: \\\" The recently proposed batch normalization ... is robust to the choice of Lipschitz constant of the gradient in loss function, allowing one to set a large learning rate without worry\\\". I see now that this work does not make this claim formal and according to the authors' explanation above making it formal takes all the derivations of the submission.\\n\\n(iv) The theoretical advantage of the shown auto-rate tuning is not completely clear. It is not excluded that introducing normalization while easing the learning rate tuning for scale-invariant parameters is making it harder for scale-variant ones. There is a learning rate to tune in the end, no matter how many parameters are scale-invariant.\"}",
"{\"title\": \"Lemma 2.4 is correct and the issue of G_t is fixed\", \"comment\": \"Thanks for your positive feedback.\\n\\n(1). Lemma 2.4, Point 1: The gradient in your example is indeed perpendicular to w which can be seen as follows.\\n\\nw\\u2019 * \\\\nabla L(w) = w\\u2019 * (2/w\\u2019*w)(Aw - L(w)*w) = (2/w\\u2019*w)(w\\u2019Aw - L(w)*(w\\u2019*w)) = (2/w\\u2019*w)(w\\u2019Aw - w\\u2019Aw) = 0.\\n\\nIn case of one variable vector, our proof is to take the derivative of c on both sides of F(w) = F(cw), which is the definition of scale-invariance. Then the left-hand side becomes 0 and the right-hand side becomes w\\u2019 * \\\\nabla F(cw) by chain rule. Taking c = 1, we can conclude that w\\u2019 * \\\\nabla F(w) = 0.\\n\\n(2). Theorem 2.5: Sorry G_t should be G_t^{(i)}. We will correct this typo in the next revision of this paper.\\n\\nFor t = 0, G_t^{(i)} are all initialized to some value. The recursion formula for G_t^{(i)} is shown in equation (9).\"}",
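The algebra in the reply above is easy to check numerically. A small sketch (my own illustration, using the reviewer's Rayleigh quotient as the scale-invariant loss) verifying both scale-invariance and the perpendicularity w' * \nabla L(w) = 0:

```python
import numpy as np

def rayleigh(w, A):
    """Scale-invariant loss L(w) = w'Aw / (w'w)."""
    return w @ A @ w / (w @ w)

def rayleigh_grad(w, A):
    """Closed-form gradient (2 / w'w) * (Aw - L(w) * w)."""
    return (2.0 / (w @ w)) * (A @ w - rayleigh(w, A) * w)

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0               # symmetric matrix
w = rng.standard_normal(4)

# Scale-invariance: L(w) == L(c * w) for any c != 0
assert np.isclose(rayleigh(w, A), rayleigh(3.7 * w, A))
# Lemma 2.4, point 1: the gradient is perpendicular to w
assert np.isclose(w @ rayleigh_grad(w, A), 0.0)
```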
"{\"title\": \"Thanks for your careful review.\", \"comment\": \"Thanks for your careful review! As mentioned in the intro, we are trying to give some principled insight into the benefits of BN, which has proved tricky. Also, it is noted in the paper that BN probably has many desirable properties, of which auto rate-tuning is just one.\\n\\n(i) Speed of SGD vs GD: \\nNote that \\u201ctime\\u201d here refers to the number of iterations, not epochs. We are not aware of results establishing that SGD is faster in this measure. (As noted on p2, we are working within the standard paradigm of convergence rates in optimization. The only new part is the automatic rate-tuning behavior shown for most parameters when BN is used.) \\n\\n(ii) \\u201cusually training is stopped much before convergence, in the hope of finding solutions close to minimum with high probability.\\u201d \\nWe\\u2019re assuming training proceeds until the gradient is small (stationary point). We are not aware of any prior analysis of speed of convergence that deviates from this assumption. Perhaps the reviewer is thinking of early stopping in the context of better generalization? \\n\\n(iii) \\u201cclarify difference from Wu et al. (2018)\\u201d\\nWu et al. 2018 introduces a *new* algorithm inspired by weight normalization (WN) and studies its convergence rate to a stationary point. This algorithm can be seen as an explicit way to tune the learning rate (thus it is conceptually analogous to Adagrad). They don't have any results about WN or BN itself. Their analysis could be adapted to GD on a one-neuron network with WN or BN without scale-variant parameters (gamma and beta). Even this adaptation is not immediate because the goal of that work is to find a stationary point on the unit sphere rather than in R^d. Finally, they prove no results for SGD, whereas our paper does. \\n\\n(iv) \\u201csingle learning rate doesn\\u2019t apply for all parameters\\u201d \\nCorrect. 
The algorithm can use a single learning rate for scale-invariant parameters but needs a tuned rate for the scale-variant ones. In feedforward nets, the number of scale-variant parameters scales as the number of nodes and the number of scale-invariant parameters scales as the number of edges (up to weight sharing). Thus the vast majority of parameters are scale-invariant.\\n\\n\\n(v) \\u201cRelation between original loss and loss using BN.\\u201d \\nOur results hold for the loss of batch-normalized network (\\u201cBN-loss\\u201d) which is different from the loss of the original network (\\u201cBN-less loss\\u201d). Probably the reshaping of loss function due to BN is very important but currently hard to analyse theoretically because we lack a good mathematical understanding of the loss landscape (even BN-less).\"}",
"{\"title\": \"Theoretical Analysis of Auto Rate-Tuning by Batch Normalization\", \"review\": [\"Strengths:\", \"The paper gives theoretical insight into why Batch Normalization is useful in making neural network training more robust and is therefore an important contribution to the literature.\", \"While the actual arguments are somewhat technical as is expected from such a paper, the motivation and general strategy is very easy to follow and insightful.\", \"Weaknesses:\", \"The bounds do not immediately apply in the batch normalization setting as used by neural network practitioners, however there are practical ways to link the two settings as pointed out in section 2.4\", \"As the authors point out, the idea of using a batch-normalization like strategy to set an adaptive learning rate has already been explored in the WNGrad paper. However it is valuable to have a similar analysis closer to the batch normalization setting used by most practitioners.\", \"Currently there is no experimental evaluation of the claims, which would be valuable given that the setting doesn't immediately apply in the normal batch normalization setting. I would like to see evidence that the main benefit from batch normalization indeed comes from picking a good adaptive learning rate.\", \"Overall I recommend publishing the paper as it is a well-written and insightful discussion of batch normalization. Be aware that I read the paper and wrote this review on short notice, so I didn't have time to go through all the arguments in detail.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A good paper\", \"review\": \"The paper is well written and easy to follow. The topic is apt.\\n\\nI don\\u2019t have any comments except the following ones.\\n\\nLemma 2.4, Point 1: The proof is confusing. Consider the one variable vector case. Assuming that there is only one variable w, then \\\\nabla L(w) is not perpendicular to w in general. The Rayleigh quotient example L(w) = w\\u2019*A*w/ (w\\u2019*w) for a symmetric matrix A, then \\\\nabla L(w) = (2/w\\u2019*w)(Aw - L(w)*w), which is not perpendicular to w. \\nEven if we constrain ||w ||_2 = 1, then also \\\\nabla L(w) is not perpendicular to w.\\nAm I missing something?\\n\\nWhat is G_t in Theorem 2.5. It should be defined in the theorem itself. There is another symbol G_g which is a constant.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A theoretical result about asymptotic convergence with normalization but weakly related to the practical success of BN\", \"review\": \"* Description\\n\\nThe work is motivated by the empirical performance of Batch Normalization and in particular the observed robustness to the choice of the learning rate. The authors theoretically analyze the asymptotic convergence rate for objectives involving normalization, not necessarily BN, and show that for scale-invariant groups of parameters (appearing as a result of normalization) the initial learning rate may be set arbitrarily while asymptotic convergence is still guaranteed at the same rate as the best known in the general case. Offline gradient descent and stochastic gradient descent cases are considered.\\n\\n* Strengths\\n\\nThe work advances theoretical understanding of successful heuristics in deep learning, namely batch normalization and other normalizations. The technical results obtained are non-trivial and detailed proofs are presented. Although I did not verify the proofs, the paper appears technically correct and clear. The result may be interpreted in the following form: if one chooses to use BN or another normalization, the paper gives a recommendation that only the learning rate of the scale-variant parameters needs to be set, which may have some practical advantages. Perhaps more important than the rate of convergence is the guarantee that the method will not diverge (and will not get stuck in a non-local minimum). 
\\n\\n* Criticism\\nThis paper presents non-trivial theoretical results that are worth to be published but as I argue below it has a weak relevance to practice and the applicability of the obtained results is unclear.\\n-- Concerns regarding the clarity of presentation and interpretation of the results.\\n \\nThe properties of BN used as motivation for the study, are observed non-asymptotically with constant or empirically decreased learning rate schedules for a limited number of iterations. In contrast, the studied learning rates are asymptotic and there is a big discrepancy. SGD is observed to be significantly faster than batch gradient when far from convergence (experimental evidence), and this is with or without normalization. In practice, the training is stopped much before convergence, in the hope of finding solutions close to minimum with high probability. There is in fact no experimental evidence that the practical advantages of BN are relevant to the results proven. It makes a nice story that the theoretical properties justify the observations, but they may be as well completely unrelated. \\n\\nAs seen from the formal construction, the theoretical results apply equally well to all normalization methods. It occludes the clarity that BN is emphasized amongst them. \\n\\nConsidering theoretically, what advantages truly follow from the paper for optimizing a given function? Let\\u2019s consider the following cases.\\n1. For optimizing a general smooth function with all parameters forming a single scale-invariant vector. In this case, the paper proves that no careful selection of the learning rate is necessary. This result is beyond machine learning and unfortunately I cannot evaluate its merit. Is it known / not known in optimization?\\n\\n2. The case of data-independent normalization (such as weight normalization).\\nWithout normalization, we have to tune learning rate to achieve the optimal convergence. 
With normalization we still have to tune the learning rate (as scale-variant parameters remain or are reintroduced with each invariance to preserve the degrees of freedom), then we have to wait for the phase two of Lemma 3.2 so that the learning rate of scale-invariant parameters adapts, and from then on the optimal convergence rate can be guaranteed.\\n\\n3. The case of Batch Normalization. Note that there is no direct correspondence between the loss of BN-normalized network (2) and the loss of the original network because of dependence of the normalization on the batches. In other words, there is no setting of parameters of the original network that would make its forward pass equivalent to that of BN network (2) for all batches. The theory tells the same as in case 2 above but with an additional price of optimizing a different function.\\n\\nThese points leave me puzzled regarding either practical or theoretical application of the result. It would be great if authors could elaborate. \\n\\n\\n-- Difference from Wu et al. 2018\\n\\nThis work is cited as a source of inspiration in several places in the paper. As the submission is a theoretical result with no immediate applicability, it would be very helpful if the authors could detail the technical improvements over this related work. Note, ICLR policy says that arxiv preprints earlier than one month before submission are considered a prior art. Could the authors elaborate more on possible practical/theoretical applications?\\n \\n\\n* Side Notes (not affecting the review recommendation)\\n\\nI believe that the claim that \\u201cBN reduces covariate shift\\u201d (actively discussed in the intro) was an imprecise statement in the original work. Instead, BN should be able to quickly adapt to the covariate shift when it occurs. 
It achieves this by using the parameterization in which the mean and variance statistics of neurons (the quantities whose change is called the covariate shift) depend on variables that are local to the layer (gamma, beta in (1)) rather than on the cumulative effect of all of the preceding layers.\\n\\n* Revision\\nI took into account the discussion and the newly added experiments and increased the score. The experiments verify the proven effect and make the paper more substantial. Some additional comments about experiments follow.\\nTraining loss plots would be more clear in the log scale.\\nComparison to \\\"SGD BN removed\\\" is not fair because the initialization is different (application of BN re-initializes weight scales and biases). The same initialization can be achieved by performing one training pass with BN with 0 learning rate and then removing it, see e.g. Gitman, I. and Ginsburg, B. (2017). Comparison of batch normalization and weight normalization algorithms for the large-scale image classification.\\nThe use of Glorot uniform initializer is somewhat subtle. Since BN is used, Glorot initialization has no effect for a forward pass. However, it affects the gradient norm. Is there a rationale in this setting or it is just a more tricky method to fix the weight norm to some constant, e.g. ||w||=1?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1lz-3Rct7 | Three Mechanisms of Weight Decay Regularization | [
"Guodong Zhang",
"Chaoqi Wang",
"Bowen Xu",
"Roger Grosse"
] | Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of $L_2$ regularization.
Literal weight decay has been shown to outperform $L_2$ regularization for optimizers for which they differ.
We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization.
Our results provide insight into how to improve the regularization of neural networks. | [
"Generalization",
"Regularization",
"Optimization"
] | https://openreview.net/pdf?id=B1lz-3Rct7 | https://openreview.net/forum?id=B1lz-3Rct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syxwvjn-eV",
"SyxswZfEkV",
"SJgG4O8op7",
"rygRc8IopQ",
"HklYSxUoTX",
"B1eBMzYupm",
"rJx5uduA2m",
"Skldcfu0hm",
"rygtlWdRnm",
"rJlhD5wRnQ",
"B1g5xlXqnm",
"B1eS7cTdhm",
"rJx3XFFv2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544829790933,
1543934306806,
1542314025910,
1542313621610,
1542312001118,
1542128141142,
1541470322329,
1541468815538,
1541468401394,
1541466724121,
1541185521816,
1541098013331,
1541015844017
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1152/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1152/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1152/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1152/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1152/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1152/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1152/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1152/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1152/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1152/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1152/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1152/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Paper decision\"}",
"{\"title\": \"I stick to my rating\", \"comment\": \"The authors have taken my comment into account in the new revision of the paper and adequately addressed issues pointed out by other reviewers. So, I keep my rating unchanged.\"}",
"{\"title\": \"Re: Comments\", \"comment\": \"Thank you for your new comments. We will update the paper according to your suggestions (Q2 and Q3).\"}",
"{\"title\": \"Re: Related work\", \"comment\": \"Thank you for pointing out this work.\\n\\nSection 2.2 of this paper is indeed related to our mechanism 1. However, the argument of effective learning rate was first identified by van Laarhoven 2017 and we did properly discuss the relationship with van Laarhoven 2017 (also see the response to AnonReview1). In the upcoming version, we will cite the paper you mentioned.\"}",
"{\"comment\": \"The first mechanism, increasing the effective learning rate, is also identified in this work https://arxiv.org/abs/1709.04546 (Sec. 2.2 and 3.2). The authors may want to discuss how they are related.\", \"title\": \"Related work\"}",
"{\"title\": \"Comments\", \"comment\": \"Q1: Agreed\", \"q2\": \"You are right about weight decay on gamma only affecting the complexity of the model due to the last layer which can be merged with the softmax layer weights (as also pointed out by van Laarhoven). May be mention this below Eq. 5 (while citing van Laarhoven) to remind the reader of this fact.\", \"q3\": \"On page 6 (left of Figure 4), I recommend changing the sentence \\n\\\"In all cases, we observe that whether weight decay was applied to the top (fully connected) layer did not appear to matter;\\\"\\nto something like \\n\\\"In all cases, we observe that whether weight decay was applied to the top (fully connected) layer did not have a significant impact;\\\"\", \"q4\": \"OK\", \"q5\": \"Thank you for clarifying. I can see the technical mistake made in the 1st submission involving expectation over the input-output Jacobian for ReLU networks. However the current Theorem 1 on deep linear network makes the claim weak and the authors have used earlier work on deep linear networks as a justification.\\n\\nQ6,7,8,9: OK\", \"comments\": \"There were a few technical mistakes in the original submission that were overlooked by the reviewers and the authors have themselves identified and corrected them. However, these corrections have made the results for the second order methods weaker (section 4.2) since they apply to deep linear networks, which is a bit disappointing. But I still think this paper deserves to be read because 1. even though based on intuitions from deep linear networks, experiments are shown for deep non-linear networks confirming the insights drawn from them; 2. other sections have complementary analysis of weight decay for additional cases.\\n\\n(I have increased my original score by 1)\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for the useful feedback. We have updated the paper (especially 4.2) taking into account several of your comments.\", \"q1\": \"Mechanism 1 is more of a discussion on existing work rather than novel contribution\\nWe agree that the argument of \\\"effective learning rate\\\" itself is not novel and has been observed by van Laarhoven 2017. \\nHowever, we don't think the mechanism 1 is just a discussion of existing work. Particularly, van Laarhoven 2017 didn\\u2019t show any experiments that weight decay improves generalization performance. In Figure 2 of van Laarhoven 2017, they only showed that small learning rate is preferred when weight decay is applied. The important point we made is that weight decay actually improves the generalization performance even with well-tuned learning rate parameter and the gain of applying weight decay cannot be achieved by tuning the learning rate directly (we shouldn't ignore the interaction between the learning rate and weight decay).\\n\\nFurthermore, van Laarhoven 2017 was just talking about L2 regularization which is not equivalent to weight decay in adaptive gradient methods. We don't think the author realized the subtle difference between L2 regularization and weight decay. In the combination of L2 regularization and adaptive gradient methods, the argument of effective learning rate might not hold exactly since L2 regularization can affect both the scale and direction of the weights. In our paper, we extend the argument of \\\"effective learning rate\\\" to first-order optimization algorithms (including SGD and Adam) by identifying the subtle difference between L2 regularization and weight decay.\", \"q2\": \"The effect of weight decay on the gamma parameter of batch-norm.\\nAs discussed in van Laarhoven 2017, only the gamma of the last BN layer affects the complexity of the network. 
The role of it is quite similar to the scale of the last fully connected layer since you can always merge the gamma parameter into the last fc layer. In practice, the gain of regularizing the gamma parameter of the last BN layer is quite small which is consistent with our observation that regularizing the last fc layer gives marginal improvement. That's why we fixed the gamma parameter throughout the paper.\", \"q3\": \"In Figure 2 and 4, there is a noticeable difference between training without weight decay, and training with weight decay only on the last layer.\\nIn Figure 2, the gap is pretty small (<1%). \\nIn Figure 4, regularizing the last layer does help a little bit (~1%) while the improvement of regularizing conv layers is much larger (~3%). \\nAccording to your suggestion, we revised our statements in 4.1 to make the arguments softer.\", \"q4\": \"In the line right above remark 1, what does \\u201cassumption\\u201d refer to?\\nIt does refer to spherical Gaussian input distribution. We have improved the writing for this part, it should be much clearer now.\", \"q5\": \"Regarding the equivalence of L2 norm of theta under Gauss-Newton metric and the Frobenius norm of input-output Jacobian, why does f_theta need to be a linear function without any non-linearity?\\nThat\\u2019s because we want the input-output Jacobian to be independent of the input x (which is not true for non-linear networks). Under this assumption, we can take J_x out of the expectation (see revised Theorem 1).\", \"note\": \"if the (all) input x has entries \\u00b11 (so that xx^T is an identity matrix), then the assumption of f_theta being linear is not necessary. In that case, it is easy to show that the Gauss-Newton norm is proportional to the expectation of squared Jacobian norm over input distribution.\", \"q6\": \"In remark 1, what does it mean by \\u201cFurthermore, if G is approximated by KFAC\\u201d?\\nThis original claim is a little misleading, we have rewritten this part. 
Basically, when G is approximated by K-FAC (it's intractable to use exact G in practice), the K-FAC Gauss-Newton norm is still proportional to the squared Jacobian norm, but the constant becomes (L+1), not (L+1)**2.\", \"q7\": \"In the 1st line of the last paragraph of page 6, what are the general conditions under which the connection between Gauss-Newton norm and Jacobian norm does not hold true?\\nIf the network is not linear, then the connection will not hold exactly. We need the assumption of the network being linear so that the input-output Jacobian J_x is independent of the input x.\", \"q8\": \"In Figure 5, how are the different points in the plots achieved? By varying hyper-parameters?\\nSorry, we didn't explain Figure 5 clearly in the submitted version. Different points are achieved by varying optimizers and architectures (we mentioned that on page 7 of the updated version). Specifically, we trained feed-forward networks with a variety optimizers on both MNIST and CIFAR-10. For MNIST, we used simple fully-connected networks with different depth and width. For CIFAR-10, we adopted the VGG family (From VGG11 to VGG19).\", \"q9\": \"Missing citations\\nThank you for pointing out missing citations. We added multiple citations in the latest version.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the positive feedback.\\n\\nWe have revised the conclusion section to discuss the observed results and potential new directions for future work.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for the insightful comments. According to your suggestions, we revised the statements of the paper (including 4.1) to make them clearer.\", \"q1\": \"what is the definition of \\\"effective learning rate\\\"\\nFor \\\"effective learning rate\\\", you can understand it as the \\\"learning rate\\\" for normalized networks (see equation 9).\", \"q2\": \"regularization just changes the learning rate (Mechanism 1)\", \"note\": \"it's true for weight decay in general (not L2-norm). We also tested weight decay in the case of Adam (see Figure 2) where weight decay and L2 regularization are not identical.\", \"q3\": \"why reducing the scale of the weights increase the effective learning rate\\nAs explained in equation 9, the effective learning is inversely proportional to the weight norm.\", \"q4\": \"The sentence starting (in point #1.) with \\\"As evidence,\\\", what is the evidence for?\\nSee Figure 2 and Figure 4. Most of the generalization effect of weight decay is due to applying it to layers with BN.\", \"q5\": \"The improvement provided by weight decay is uniform across the board.\\nWeight decay does improve the performance consistently, but the mechanisms behind are different (depending on the optimization algorithm and network architecture). Figure 1 and Table 1 are mostly to emphasize the difference between L2 regularization and weight decay so as to motivate three mechanisms.\", \"q6\": \"Argument of Mechanism 1 (or effective learning rate)\\nIn mechanism 1, we basically argue that the scaling of weights for BN layers doesn't influence the underlying function (see equation 8), so it doesn't meaningfully constrain the function to be simple (you can always scale down the weights but the function represented by the network is still the same, also see the first paragraph in 4.1). 
However, the scaling of the weights does influence the updates (see equation 9) by controlling the effective learning rate. The regularization effect of weight decay is achieved by scaling the weights, and therefore the effective learning rate.\", \"q7\": \"Proposition 1 and Theorem 1 are extensions from Martens & Gross, 2015\\nWe have removed Proposition 1 in the latest version. Theorem 1 (Lemma 2 in the latest version) is not just an extension from the K-FAC paper (martens & Grosse, 2015). Actually, it has little to do with the K-FAC paper. We don't think it's trivial for the following reasons:\\n\\n- Theorem 1 (Lemma 2 in the latest version) is new and it heavily relies on the Lemma 1 (gradient structure) which has nothing to do with the original K-FAC (Martens & Grosse, 2015) paper. \\n- Theorem 1 (Lemma 2 in the latest version) is an important part to connect Gauss-Newton norm to approximate Jacobian norm. The result of approximate Jacobian norm is non-trivial and we didn't see any similar theoretical result before. In practice, it's quite expensive to directly regularize Jacobian norm due to the extra computation overhead. In this work, we provide a simple yet cheap way to approximately regularize Jacobian norm and we believe it's useful and novel.\", \"q8\": \"K-FAC (convergence?)\\nK-FAC is currently the most popular approximate natural gradient method in training deep neural networks. It works very well (due to the use of curvature information) in practice and we didn't see any convergence issue. Recently, Bernacchia, 2018 [1] provided convergence guarantee for natural gradient in the case of deep linear networks (where the loss is non-convex). Beyond that, they also gave some theoretical justifications for the performance of K-FAC.\\n\\n[Reference]\\n[1] Exact natural gradient in deep linear networks and application to the nonlinear case\"}",
"{\"title\": \"Paper revision\", \"comment\": \"We have updated the paper and improved the writing a lot. In particular, we rewrote the section 4.1 and 4.2 as requested by AnonReview1 and AnonReview2.\"}",
"{\"title\": \"Writing needs improvement; many handwavy explanations\", \"review\": \"I have read the author's response, and I would like to stick to my rating. From the authors' response on the convergence issue, the result from [1] does not directly apply since the activation function that the authors use in this paper is relu (not linear). Having said that, authors didn't find any issues empirically.\", \"q7\": \"Yes, I agree that the result depends on the gradient structure of the relu activations. But my point was that, it is still a calculation that one has to carry out, and the insight we gain from the calculation seem computational: that one can regularize jacobian norm easily. True, but is that necessary? Or in other words, can we use techniques (not-so) recent implicit regularization literature to analyze KFAC? I still think that the work is good, these are just my questions.\\n====\\n\\nThe paper investigates how weight decay (according to the authors, this is done by scaling weights at each iteration) can be used as a regularizer while using standard first order methods and KFAC. As far as I can see, the experimental conclusion seem pretty consistent with other papers that the authors themselves cite (for eg: Neelakantan et al. (2015); Martens & Grosse, 2015. \\n\\nIn page 2, the authors mention the three different mechanisms by which weight decay has a regularizing effect. First, what is the definition of \\\"effective learning rate\\\"? If the authors mean that regularization just changes the learning rate in some case, that is true. In fact, it is only true while using l2-norm. I looked through the paper, and I couldn't find one. Similarly, I find point #1. to be confusing: why does reducing the scale of the weights increase the effective learning rate? (This confusion carries over to/remains in section 4.1.). The sentence starting (in point #1.) with \\\"As evidence,\\\", what is the evidence for? 
Is it for the previous statement that weight decay helps as a regularizer? Looking at Figure 1., Table 1., I can see that weight decay is actually helpful even with BN+D. In fact, the improvement provided by weight decay is uniform across the board. \\n\\nThe conclusion of mechanism 1 is that for layers with BN, weight decay is implicitly using higher learning rate and not by limiting the capacity as pointed out by van Laarhoven (2017). The two paragraphs below (12) are contradictory or I'm missing something: first paragraph says that \\\"This is contrary to our intuition that weight decay results in a simple function.\\\" but immediately below, \\\"We show empirically that weight decay only improves generalization by controlling the norm, and therefore the effective learning rate.\\\" Can the authors please explain what the \\\"effective learning rate\\\" argument is?\\n\\nProposition 1 and theorem 1 are extensions from Martens & Gross, 2015, I didn't fully check the calculations. I glanced through them, and they mostly use algebraic manipulations. The main empirical takeaway as the authors mention is that: weight decay in both KFAC-F and KFAC-G serves as a complexity regularizer which sounds trivial (assuming Martens & Grosse, 2015) since in both of these cases, BN is not used and the fact that weight decay is regularization using the local norm. \\n\\nIf I understand correctly, KFAC is an approximate second order method with the approximation chosen to be such that it is invariant under affine transformations. Are there any convergence guarantees at all for either of these approaches? Newton's method, even for strongly convex loss functions, requires self-concordance to ensure convergence, so I'm a bit skeptic when using approximate (stochastic) Jacobian norm. \\n\\nSome of the plots have loss values, some have accuracy etc., which is also confusing while reading. I strongly suggest that Figure 1 be shown differently, especially the x-axis! 
Essentially weight decay improves the accuracy about 2-4% but it is hard to interpret that from the figure.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Solid work on understanding of weight decay regularization\", \"review\": \"This paper identifies and investigates three mechanisms of weight decay regularization. The authors consider weight decay for DNN architectures with/without BN and different types of optimization algorithms (SGD, Adam, and two versions of KFAC). The paper unravels insights on weight decay regularization effects, which cannot be explained only by traditional L2 regularization approach. This understanding is of high importance for the further development of regulations techniques for deep learning.\", \"strengths\": [\"The authors draw connections between identified mechanisms and effects observed in prior work.\", \"The authors provide both clear theoretical analysis and adequate experimental evidence supporting identified regularization mechanisms.\", \"The paper is organized and written clearly.\", \"I cannot point out any flaws in the paper. The only recommendation I would give is to discuss in more detail possible implications of the observed results for new methods of regularization in deep learning and potential directions for future work. It would emphasize the significance of the obtained results.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice insights about second order methods\", \"review\": \"This paper discusses the effect of weight decay on the training of deep network models with and without batch normalization and when using first/second order optimization methods.\\n\\nFirst, it is discussed how weight decay affects the learning dynamics in networks with batch normalization when trained with SGD. The dominant generalization benefit due to weight decay comes from increasing the effective learning rate of parameters on which batch normalization is applied. The authors therefore hypothesize that a larger learning rate has a regularization effect.\\n\\nSecond, the role of weight decay is discussed when training with second order methods without batch normalization. Under the approximation of not differentiating the curvature matrix used in second order method, it is shown that using weight decay is equivalent to adding to the loss an L2 regularization in the metric space of the curvature matrix considered. It is then shown that if the curvature matrix is the Gauss-Newton matrix, this L2 regularization (and hence the weight decay) is equivalent to the Frobenius norm of the input-output Jacobian when the input has a spherical Gaussian distribution. Similar arguments are made about KFAC with Gauss-Newton norm. The generalization benefit due to weight decay in this case is claimed based on the recent paper by Novak et al 2018 which empirically shows a strong correlation between input-output Jacobian norm and generalization error.\\n\\n\\nFinally, the role of weight decay is discussed for second order methods when using batch normalization. In this case it is discussed for Gauss-Newton KFAC that the benefit mostly comes from the application of weight decay on the softmax layer and the effect of weight decay on other weights cancel out due to batch normalization. A comparison between Gauss-Newton KFAC and Fischer KFAC is also made. 
Thus the generalization benefit is presumably attributed to the second order properties of KFAC and a smaller norm of softmax layer weights.\", \"comments\": \"The paper is technically correct and proofs look good.\\n\\nI have mixed comments about this paper. I find the analysis in section 4.2 and 4.3 which discuss about the role of weight decay for second order methods (with and without batch-norm) to be novel and insightful (described above). \\n\\nBut on the other hand, I feel section 4.1 is more of a discussion on existing work rather than novel contribution. Most of what is said, both analytically and experimentally, is a repetition of van Laarhoven 2017, except for a few details. It would have been interesting to carefully study the effect of weight decay on the gamma parameter of batch-norm which controls the complexity of the network along with the softmax layer weights as it was left for future work in van Laarhoven 2017. But instead the authors brush it under the carpet by saying they did not find the gamma and beta parameters to have significant impact on performance, and fixed them during training. I also find the claim of section 4.1 to be a bit mis-leading because it is claimed that weight decay applied with SGD and batch normalization only has benefits due to batch-norm dynamics, and not due to complexity control even though in Fig 2 and 4, there is a noticeable difference between training without weight decay, and training with weight decay only on last layer. Furthermore, when hypothesizing the regularization effect of large learning rate in section 4.1, a large body of literature that has studied this effect has not been cited. Examples are [1], [2], [3].\", \"i_have_other_concerns_which_mainly_stem_from_lack_of_clarity_in_writing\": \"1. In the line right above remark 1, it is not clear what \\u201cassumption\\u201d refer to. I am guessing the distribution of the input being spherical Gaussian?\\n2. 
In remark 1, regarding the claim about the equivalence of L2 norm of theta under Gauss-Newton metric and the Frobenius norm of input-output Jacobian, why does f_theta need to be a linear function without any non-linearity? I think the linearity part is only needed for the KFAC result.\\n3. In remark 1, what does it mean by \\u201cFurthermore, if G is approximated by KFAC\\u201d? For linear f_theta, given lemma 1 and theorem 1, the claimed equivalence always holds true, no?\\n4. In the 1st line of last paragraph of page 6, what are the general conditions under which the connection between Gauss-Newton norm and Jacobian norm does not hold true?\\n5. In figure 5, how are the different points in the plots achieved? By varying hyper-parameters?\", \"a_minor_suggestion\": \"in theorem 1 (and lemma 1), instead of assuming network has no bias, it can be said that the L2 regularization term does not have bias terms. This is more reasonable because bias terms have no effect on complexity and so it is reasonable to not apply weight decay on bias.\\n\\nOverall I think the paper is good *if* section 4.1 is sorted out and writing (especially in section 4.2) is improved. For these reasons, I am currently giving a score of 6, but I will increase it if my concerns are addressed.\\n\\n[1] a bayesian perspective on generalization and stochastic gradient descent\\n[2] Train longer, generalize better: closing the generalization gap in large batch training of neural networks\\n[3] Three Factors Influencing Minima in SGD\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
rkgfWh0qKX | Do Language Models Have Common Sense? | [
"Trieu H. Trinh",
"Quoc V. Le"
] | It has been argued that current machine learning models do not have commonsense, and therefore must be hard-coded with prior knowledge (Marcus, 2018). Here we show surprising evidence that language models can already learn to capture certain common sense knowledge. Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement. On the Winograd Schema Challenge (Levesque et al., 2011), language models are 11% higher in accuracy than previous state-of-the-art supervised methods. Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve an F1 score of 0.912 and 0.824, outperforming previous best results (Jastrzebski et al., 2018). Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision. | [
"language models",
"common sense",
"probability",
"statement",
"commonsense",
"prior knowledge",
"marcus",
"surprising evidence"
] | https://openreview.net/pdf?id=rkgfWh0qKX | https://openreview.net/forum?id=rkgfWh0qKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1esaea7gN",
"H1e8tLW9hm",
"rkx4ukiFnm",
"SJlmKFV4nX",
"BklPaakus7",
"rJeWt75Qsm",
"HyetCdcY5X",
"HJedsCKt5m",
"SJeCbxuFqX",
"ryxyxGDt5Q",
"BJxXFkLtqm"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544962243494,
1541179005542,
1541152619597,
1540798843256,
1539993023009,
1539707769189,
1539053776563,
1539051167877,
1539043334165,
1539039718895,
1539035002621
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1151/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1151/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1151/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1151/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1151/AnonReviewer3"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1151/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1151/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper adapts language models (LMs), recurrent models trained on large corpus to produce the next word in English, to two commonsense reasoning tasks: the Winograd schema challenge and commonsense knowledge extraction. For the former, the language model score itself is used to obtain substantial gains over existing approaches for this challenging task, while a slightly more involved training procedure adapts the LMs to commonsense extraction. The reviewers appreciated the simplicity of the changes to existing LMs and the impressive results (especially on the WSC).\", \"the_reviewers_point_out_the_following_potential_weaknesses\": \"(1) clarity issues in the writing and the presentation, (2) a lack of novelty in the proposed approach, given a number of recent work has shown the ability of language models to perform commonsense reasoning, and (3) critical methodological issues in the evaluation that raise questions about the significance of the results. A lack of response from the authors meant that there was no further discussion needed, and the reviewers encourage the authors to take the feedback to improve further versions of the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Clarity and Evaluation Issues\"}",
"{\"title\": \"some interesting results, but could use more rigor and empirical exploration\", \"review\": \"This paper evaluates language models for tasks that involve \\\"commonsense knowledge\\\" such as the Winograd Schema Challenge (WSC), Pronoun Disambiguation Problems (PDP), and commonsense knowledge base completion (KBC).\", \"pros\": \"The approach is relatively simple in that it boils down to just applying language models. \\n\\nThe results outperform prior work, in some cases by pretty large margins. \\n\\nThe language models are quite large and it appears that this is the first time that large-scale language models have been applied seriously to the Winograd Schema Challenge (rather than, say, to the NLI version of it in GLUE, to which it is hard to compare these results). \\n\\nSome of the additional and ablation experiments are interesting.\", \"cons\": \"While this paper has some nice results, there are some aspects of it that concern me, specifically related to hyperparameter tuning and experimental rigor:\", \"there_are_three_methods_given_for_using_an_lm_to_make_a_prediction\": \"full, full-normalized, and partial. For PDP, full (or perhaps full-normalized?) works best, while for WSC, partial works best. The differences among methods, at least for WSC, are quite large: from 2% to 10% based on Figure 3. I don't see a numerical comparison for PDP, so I'm not sure how these methods compare on it. Since the datasets are so small, there is no train/dev/test split, so how were these decisions made? They seem to be oracle decisions. This is concerning to me, as there is not much explanation given for why one method is better than another method.\\n\\nMy guess is that the reason why partial works better than full for WSC is because the WSC sentences were constructed such that the words up to and including the ambiguous pronoun were written such that it would be difficult to identify the antecedent of the pronoun. 
The rest of the sentence would be needed to identify the antecedent. I'll assume for this discussion that the sentence can be divided into three parts x, y, and z, where x is the part before the pronoun, y is the phrase that replaces the pronoun, and z is the part after the pronoun. Then p(z|xy), which is partial scoring, corresponds to p(xyz)/p(xy), which can be viewed as \\\"discounting\\\" or \\\"normalizing for\\\" the probability of putting y in place of the pronoun given the context x. For WSC, I think one of the goals in writing the instances is to make the \\\"true\\\" p(xy) approximately equal for both values of y. The language model will not naturally have this be the case (i.e., that p(xy) is the same for both antecedents), so dividing by p(xy) causes the resulting partial score to account for the natural differences in p(xy) for different antecedents. This could be explored empirically. For example, the authors could compute p(xy) for both alternatives for all PDP and WSC instances and see if the difference (|p(xy_1) - p(xy_2)|, where y_1 and y_2 are the two alternatives) is systematically different between WSC and PDP. Or one could see if p(xy) is greater for the antecedent that is closer to the pronoun position or if it is triggered by some other effects. It could be the case that the PDP instances are not as carefully controlled as the WSC instances and therefore some of the PDP instances may exhibit the situation where the prediction can be made partially based on p(xy). The paper does not give an explanation for why full scoring works better for PDP and chalks it up to noise from the small size of PDP, but I wonder if there could be a good reason for the difference.\\n\\nThe results on KBC are positive, but not super convincing. The method involves fine-tuning pretrained LMs on the KBC training data, the same training data used by prior work. 
The new result is better than prior work (compared to the \\\"Factorized\\\", the finetuned LM is 2.1% better on the full test set, and 0.3% better on the novelty-based test set), but also uses a lot more unlabeled data than the prior work (if I understand the prior work correctly). It would be more impressive if the LM could use far fewer than the 100K examples for fine-tuning. Also, when discussing that task, the paper says: \\\"During evaluation, a threshold is used to classify low-perplexity and high-perlexity instances as fact and non-fact.\\\" How was this threshold chosen?\\n\\nI also have a concern about the framing of the overall significance of the results. While the results show roughly a 9% absolute improvement on WSC, the accuracies are still far from human performance on the WSC task. The accuracy for the best pretrained ensemble of LMs in this paper is 61.5%, and when training on WSC-oriented training data, it goes up to nearly 64%. But humans get at least 92% on this task. This doesn't mean that the results shouldn't be taken seriously, but it does suggest that we still have a long way to go and that language models may only be learning a fraction of what is needed to solve this task. This, along with my concerns about the experimental rigor expressed above, limits the potential impact of the paper.\\n\\n\\nMinor issues/questions:\\n\\nIn Sec. 3.1: Why refer to the full scoring strategy as \\\"naive\\\"? Is there some non-empirical reason to choose partial over full?\\n\\nThe use of SQuAD for language modeling data was surprising to me. Why SQuAD? It's only 536 articles from Wikipedia. Why not use all of Wikipedia? Or, if you're concerned about some of the overly-specific language in more domain-specific Wikipedia articles, then you could restrict the dataset to be the 100K most frequently-visited Wikipedia articles or something like that. \\n\\nI think it would be helpful to give an example from PDP-60.\\n\\nSec. 5.1: How is F_1(n) defined? 
I also don't see how a perfect score is 1.0, but maybe it's because I don't understand how F_1(n) is defined.\\n\\nSec. 6.1: Why would t range from 1 to n for full scoring? Positions before k are unchanged, right? So q_1 through q_{k-1} would be the same for both, right?\\n\\nIn the final example in Figure 2, I don't understand why \\\"yelled at\\\" is the keyword, rather than \\\"upset\\\". Who determined the special keywords?\\n\\nI was confused about the keyword detection/retrieval evaluation. How are multi-word keywords handled, like the final example in Figure 2? The caption of Table 5 mentions \\\"retrieving top-2 tokens\\\". But after getting the top 2 tokens, how is the evaluation done?\\n\\nSec. 6.3 says: \\\"This normalization indeed fixes full scoring in 9 out of 10 tested LMs on PDP-60.\\\" Are those results reported somewhere in the paper? Was that normalization used for the results in Table 2?\\n\\nSec. 6.3 says: \\\"On WSC-273, the observation is again confirmed as partial scoring, which ignores c [the candidate] altogether, strongly outperforms the other two scorings in all cases\\\" -- What is meant by \\\"which ignores c altogether\\\"? c is still being conditioned on and it must not be ignored or else partial scoring would be meaningless (because c is the only part that differs between the two options).\", \"typos_and_minor_issues\": \"Be consistent about \\\"common sense\\\" vs. \\\"commonsense\\\".\\n\\nBe consistent about \\\"Deepnet\\\" vs. \\\"DeepNet\\\" (Tables 2-3).\\n\\nSec. 1:\\n\\\"even best\\\" --> \\\"even the best\\\"\\n\\\"such as Winograd\\\" --> \\\"such as the Winograd\\\"\\n\\\"a few hundreds\\\" --> \\\"a few hundred\\\"\\n\\\"this type of questions\\\" --> \\\"this type of question\\\"\\n\\\"does not present\\\" --> \\\"is not present\\\"\\n\\\"non-facts tuples\\\" --> \\\"non-fact tuples\\\"\\n\\nSec. 
2:\\n\\\"solving Winograd\\\" --> \\\"solving the Winograd\\\"\\n\\\"Store Cloze\\\" --> \\\"Story Cloze\\\"\\n\\\"constructed by human\\\" --> \\\"constructed by humans\\\"\\n\\nSec. 4:\\nWhat is \\\"LM-1-Billion\\\"?\\nWhy SQuAD?\\n\\\"Another test set in included\\\" --> \\\"Another test set is included\\\"\\n\\nSec. 5.2:\\nCheck margin in loss_new\\n\\n\\\"high-perlexity\\\" --> \\\"high-perplexity\\\"\\n\\nSec. 6:\", \"figure_2_caption\": \"\\\"keyword appear\\\" --> \\\"keyword appears\\\"\\n\\nSec. 6.2:\\n\\\"for correct answer\\\" --> \\\"for the correct answer\\\"\", \"appendix_a\": \"\\\"acitvation\\\" --> \\\"activation\\\"\", \"appendix_b\": \"\", \"figure_4_caption\": \"\\\"is of\\\" --> \\\"is\\\"\\nThe right part of Figure 4 has some odd spacing and hyphenation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Two somewhat disconnected small contributions\", \"review\": \"This paper uses a language model for scoring of question answer candidates in the Winograd schema dataset, as well as introduces a heuristic for scoring common-sense knowledge triples.\", \"quality\": \"\", \"pros\": \"The suggested model outperforms others on two datasets.\", \"cons\": \"The suggested models are novel in themselves. As the authors also acknowledge, using language models for scoring candidates is a simple baseline in multiple-choice QA and merely hasn't been tested for the Winograd schema dataset.\", \"clarity\": \"The paper is confusing in places. It should really be introduced in the abstract what is meant by \\\"common sense\\\". Details of the language model are missing. It is only clear towards the end of the introduction that the paper explores two loosely-related tasks using language models.\", \"originality\": \"\", \"significance\": \"Other researchers within the common-sense reasoning community might cite this paper. The significance of this paper to a larger representation learning audience is rather small.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Re: Reproducibility\", \"comment\": \"Hi! Thank you for using our code and report your results here. It seems some numbers from the table are different than what we had and the latest release of Tensorflow indeed produces those number. We are checking if there is a mismatch in terms of software or the language model version. Either way, we will make updates so reported results match with the open-source release.\"}",
"{\"title\": \"Studying whether LM encode common-sense information. Novelty, clarity and methodology concerns\", \"review\": \"This paper experiments with pre-trained language models for common sense tasks such as Winograd Schema Challenge and ConceptNet KB completion. While the authors get high numbers on some of the tasks, the paper is not particularly novel, and suffers from methodology and clarity problems. These prevent me from recommending its acceptance.\\n\\nThis paper shows that pre-trained language models (LMs) can be used to get strong improvements on several datasets. While some of the results obtained by the authors are impressive, this result is not particularly surprising in 2018. In the last year or so, methods based on pre-trained LMs have been shown extremely useful for a very wide number of NLP tasks (e.g., Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018). Moreover, as noticed to by the authors, Schwartz et al. (2017) demonstrated that LM perplexity can be useful for predicting common-sense information for the ROC story cloze task. As a result, the technical novelty in this paper is somewhat limited.\", \"the_paper_also_suffers_from_methodological_problems\": \"-- The main results observed by the author, the large improvement on the (hard!) Winograd schema challenge, is questionable: The GLUE paper (Wang et al., 2018) reports that the majority baseline for this dataset is about 65%. It is unclear whether the authors here used the same version of the dataset (the link they put does not unambiguously decide one way or another). If so, then the best results published in the current paper is below the majority baseline, and thus uninteresting. If this is not the same dataset, the authors should report the majority baseline and preferably also run their model on the (hard) version used in GLUE. 
\\n-- The authors claim that their method on ConceptNet is unsupervised, yet they tune their LM on triplets from the training set, which makes it strongly rely on task supervision.\\n\\nFinally, the paper suffers clarity issues. \\n-- Some sections are disorganized. For instance, the experimental setup mentions experiments that are introduced later (the ConceptNet experiments). \\n-- The authors mention two types of language models (word and character level), and also 4 text datasets to train the LMs on, but do not provide results for all combinations. In fact, it is unclear in table 2 what is the single model and what are the ensemble (ensemble of the same model trained on the same dataset with different seeds? or the same model with different datasets?).\\n-- The authors do not address hyper-parameter tuning. \\n-- What is the gold standard for the \\\"special word retrieved\\\" data? how is it computed?\", \"other_comments\": \"-- Page 2: \\\"In contrast, we make use of LSTMs, which are shown to be qualitatively different (Tang et al., 2018) and obtain significant improvements without fine-tuning.\\\": 1. Tang et al. (2018) do not discuss fine-tuning. 2. Levy et al. (ACL 2018) actually show interesting connections between LSTMs and self-attention.\\n-- Schwartz et al. (2017) showed that when using a pre-trained LM, normalizing the conditional probability of p(ending | story) by p(ending) leads to much better results than p(ending | story). The authors might also benefit from a similar normalization. \\n-- Page 5: how is F1 defined?\", \"minor_comments\": \"-- Page 2: \\\" ... despite the small training data size (100K instances).\\\": 100K is typically not considered a small training set (for most tasks at least)\\n-- Page 5: \\\"... most of the constituent documents ...\\\": was this validated in any way? how?\\n-- The word \\\"extremely\\\" is used throughout the paper without justification in most cases.\", \"typos_and_such\": \"\", \"page_1\": \"\\\"... 
a relevant knowledge to the above Winograd Schema example, **does** not present ... \\\": should be \\\"is\\\"\", \"page_5\": \"\\\"In the previous sections, we ***show*** ...\\\": showed\", \"page_7\": \"\\\"For example, with the ***test*** ...\\\": \\\"test instance\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Hi,\\n\\nThanks for your work ! Using your code available on Github, I tried to reproduce the results on the Winograd Schema Challenge. Regarding the ensemble of 10LMs and the ensemble of 14LMs, I get a similar accuracy (61.5% and 63.7% accuracy). However, regarding the performance of the single LM, I don't get the same accuracy. I have the following results:\\n\\nModel | LM1 | LM2 | LM3 | LM4 | LM5 | LM6 | LM7 | LM8 | LM9 | LM10 | LM11 | LM12 | LM13 | LM14 \\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nAcc. |54.6% |50.2% |54.2% |55.0% |54.2% |55.0% | 55.3% | 56.8%|57.9% | 57.5% | 55.7% | 58.2% | 60.8% | 56.0%\\n\\nThe results clearly show that the performance of the single LM is not random and that they capture patterns that are useful for the task. However, I don't understand what is the accuracy reported in table 3, 56.4% for a single LM and the accuracy reported at the end of paragraph 5.1 'with one word level LM achiving 62.6% accuracy'. Could you comment on that ?\", \"title\": \"Reproducibility\"}",
"{\"comment\": \"Hi there,\\n\\nI believe the original poster has raised an important question about your paper, and I agree that you are not directly answering his question. Repeating results like 64.4%, 62.6, potential for 70% has nothing at all to do with the important question of *model selection*. You say you \\\"gradually train and add new LMs to the ensemble up to 10 LMs originally\\\". However, the question is, what made you choose *certain* LM's over others at each step. For example, you would choose to add an LM-2 which would vary from the base setting in terms of some hyperparameter, and then suddenly jump (seemingly arbitrarily) to another LM choice, say, char LM-4. What drove these arbitrary choices? Was it a greedy process on the Winograd Schema Challenge accuracy? I understand that you say that *each* LM had a validation perplexity to it; is this what you used to choose certain LM's over others? And if that is the case, I'm surprised that none of these details were included in the paper (as well as even mentioning that the validation perplexity was used). Ultimately, when you consider that you \\\"gradually\\\" ensembled on WSC, which is a *test set*, and observed accuracies en route, this is precisely antithetical with the purpose of a test set.\", \"title\": \"Re-wording OP's question.\"}",
"{\"title\": \"I see your point, validation perplexity is what we used.\", \"comment\": \"> This amounts to 40 possible LMs, of which you choose 14. This number (14) is a hyperparameter in itself.\\n\\nI see what you mean, which we have already addressed in the previous answer. We do not train a very large number of LMs and then tune the ensemble size or selection. We gradually train and add new LMs to the ensemble up to 10 LMs originally (Section 5.1) and observe diminishing returns, so we push further using another direction (Section 5.2) of customizing data.\\n\\nWhy stop at 14? We have very large ensembles that achieve only slightly better results (64.4\\\\%), which is not meaningful as 64% accuracy is still very far from human level accuracy. Besides, single model on Stories has already achieved 62.6\\\\% accuracy.\\n\\nThe point of our paper is proving that LMs can perform better than previous methods, and we demonstrated two ways to improve upon single-LM (ensembling and customizing training data). If one tune the LMs more we believe 70\\\\% is achievable, but 80\\\\% or above will need something entirely different, but that is entirely speculative.\\n\\n> you use 4 different LMs trained on Gutenberg ... but only two on CommonCrawl.\\n\\nAs noted above, we do not decide the total number of LMs before hand, but add new LMs as experiments go:\\n\\nThe original 2 LMs on Gutenberg are Word-LM1 and Char-LM1. Gutenberg is used as it is used to trained USSM. With 66.7\\\\% of USSM + knowledge bases + supervised deep net, single LM are now far behind (60\\\\%) and we started to explore ensembling with other training corpora.\\n\\nThe ensemble of 5 is just adding SQuAD, LM1T, LM1B. The ensemble of 10 is just doubling the choice from the previous ensemble of 5, leading to 4 on Gutenberg and 2 on all other datasets. Doubling from 5 to 10 is a simple and obvious choice to us, albeit somewhat arbitrary. 
A different choice could result in better or worse, but is likely to improve upon ensemble of 5, which will also support our method of ensembling.\\n\\n> Obviously, the choice seems inconsistent and it does not seem to be based on validation perplexity. Otherwise, why would you use the same model two times?\\n\\nIt is clearly not the same model twice, since they started from different initialization. Our training of LMs is full of models that failed to converge, implementation debugging, transferring pretrained parameters (footnote 8), so it might not be as clean as one wish to see from the first glance. We tried our best to summarize necessary details in Appendix A and the above comment. Thanks for going through them in details.\"}",
"{\"comment\": \"Hi, Thanks for your answer. My question was not about the choice of parameters for the single LM, which we agree seems to outperform USSM and random baseline.\\n\\nInstead, my question was about model selection when you ensemble. Consider that I have 40 random classifiers for the WSC; if I choose 14 of them based on the accuracy on the the WSC (test set), it's really likely that I get good results. \\n\\nYou have 5 corpora (LM1b, SQUAD, CommCrawl, Gutenberg and Stories) and 8 different LM settings (ranging between hyperparameters that differ from the base settings as well as choice of word-level vs character level). This amounts to 40 possible LMs, of which you choose 14. This number (14) is a hyperparameter in itself. How did you come up with it? In addition, when you do model selection to find an ensemble of 10 models (in the case of not using Stories) you use 4 different LMs trained on Gutenberg (2 Word-LM1\\u2019s with different random seeds as well as a one Char-LM4 and one Char-LM 1), but only two on CommonCrawl (Char-LM 4, Char-LM 3). Obviously, the choice seems inconsistent and it does not seem to be based on validation perplexity. Otherwise, why would you use the same model two times?\\n\\nCould you be a bit more clear on how you selected the models?\", \"title\": \"Ensembling\"}",
"{\"title\": \"Models are chosen based on validation perplexity\", \"comment\": \"Hi, thanks for the question. Below we include all details throughout our experiments to answer your question as well as any other potential inquiries about the training process.\", \"tldr\": \"First Heuristic = training corpus diversity (see section 6.3 and Figure 3-right for relevant analysis), Secondary heuristic = validation perplexity on corresponding held-out data.\\n\\nEnsemble choice is made to first include as many corpora as possible (Section 6.3 and Figure 3-Right show relevant analysis):\\n\\n* For single models on PDP-60 we chose Gutenberg as this is also the training corpus used in the previous SOTA [1]. Single model Char-LM result is not included to avoid complicating the tables, but its performance is also better than USSM (53.5%). \\n* For ensemble of 5, we simply add all 3 of the remaining datasets (hence 1 LM each). \\n* For ensemble of 10, we repeat the previous corpora choice twice. \\n* For ensemble of 14, we add 4 LMs from Stories.\\n\\nOnce the training corpus profile is decided, we train and chose LMs based on perplexity on a held-out set. One such held-out set is constructed for each training data, as opposed to a single joint held-out set for all training corpora, since later on we want to demonstrate the effect of training corpus choice on commonsense reasoning test performance.\\n\\nNote that we did not construct and train Word-LM-4 and Char-LM-4 until Section 5.2 (evaluation on Winograd Schema Challenge). There is no particular reason besides we want to push ensemble performance for better results by adding more LMs (even though single models are already better than previous results, see Table 3). \\n\\nNot all of our LMs converged to a good perplexity (below 40 points) on the corresponding validation sets, some other LMs diverged (perplexity > 100), we discarded those models. 
We initially choose learning rate 0.2 following [2] and randomly try some other learning rates for wordLM2, charLM3 and charLM4 since some of them diverged on LM1B (see footnote 8). Those learning rate are finally fixed and used on all subsequent datasets (CommonCrawl, SQuAD, Gutenberg, and Stories), which is why not all LM-corpus pairs work out in the end. \\n\\nOther than trying out some learning rate values above, we did not perform any tuning since it takes time (on average, an LM took at least 1 million steps for its held-out perplexity to stop improving, which amounts to approximately 01 month of training on a Tesla P-100 GPU), and we already obtain good results.\\n\\n[1] Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. Combing context and\\ncommonsense knowledge through neural networks for solving winograd schema problems. CoRR,\\nabs/1611.04146, 2016.\\n\\n[2] Rafal J\\u00f3zefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the\\nlimits of language modeling. CoRR, abs/1602.02410, 2016.\"}",
"{\"comment\": \"Thanks for the work.\\n\\nConcerning the table 7 and table 8 in the appendix, it seems to me that you have 8 LM variations for each corpus which represents 40 possible single LM models. When you ensemble the choice is not only an arbitrary subset of 14 of them, but involves combinations of these LM variations that are not at all consistent (for example, why do you use LM-2 on SQUAD and not LM-1?). Did you use any auxiliary task to do the model selection? If yes, I think it should be added to the paper.\", \"title\": \"Model Selection\"}"
]
} |
|
SkfMWhAqYQ | Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet | [
"Wieland Brendel",
"Matthias Bethge"
] | Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions. We here introduce a high-performance DNN architecture on ImageNet whose decisions are considerably easier to explain. Our model, a simple variant of the ResNet-50 architecture called BagNet, classifies an image based on the occurrences of small local image features without taking into account their spatial ordering. This strategy is closely related to the bag-of-feature (BoF) models popular before the onset of deep learning and reaches a surprisingly high accuracy on ImageNet (87.6% top-5 for 32 x 32 px features and Alexnet performance for 16 x16 px features). The constraint on local features makes it straight-forward to analyse how exactly each part of the image influences the classification. Furthermore, the BagNets behave similar to state-of-the art deep neural networks such as VGG-16, ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different decision strategies. | [
"interpretability",
"representation learning",
"bag of features",
"deep learning",
"object recognition"
] | https://openreview.net/pdf?id=SkfMWhAqYQ | https://openreview.net/forum?id=SkfMWhAqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rygfEq_qx4",
"B1g56mREeE",
"Byg9QIWlxN",
"rklqfQFACQ",
"rJeVUiaFRQ",
"rkeeML9EAX",
"Hyg_lE5NRX",
"Skg7JULVRm",
"SyxHQ7UVRQ",
"Skgkfb8NCX",
"rkewfwbVCQ",
"SkxzS7El6Q",
"BylpZJNeTQ",
"B1eZr6Qxam",
"HJew0I7lpX",
"rken1AAkpm",
"BJgUOUvkpX",
"rkxKqUBka7",
"BkxqZHV1pm",
"SkgfzOmkTX",
"H1ep2zJyaX",
"Bkeb0FIa2Q",
"BkxLk4fp3X",
"rkerSzKq2X",
"HJxDPF1t37",
"S1xbm8rE2Q",
"SyghkZtMoX"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1545402921735,
1545032641851,
1544717857939,
1543570194257,
1543261003734,
1542919687778,
1542919151906,
1542903259000,
1542902557314,
1542902023441,
1542883086615,
1541583674450,
1541582597380,
1541582137392,
1541580495434,
1541561828156,
1541531245992,
1541523089016,
1541518593551,
1541515274433,
1541497524556,
1541396936553,
1541379037718,
1541210685203,
1541106014552,
1540802073478,
1539637475667
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer2"
],
[
"~Seunghyeon_Kim1"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1150/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1150/Authors"
],
[
"~Eugene_Belilovsky1"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for your thoughts\", \"comment\": \"Thanks for your thoughts and honest opinion! The decision period might be over but I'd still be curious to get a better understanding of your point of view. To be concrete, in the TL;DR we formulated:\\n\\n\\\"Aggregating class evidence from many small image patches suffices to solve ImageNet, yields more interpretable models and can explain aspects of the decision-making of popular DNNs.\\\"\\n\\nYou seem to weigh the contributions of our paper a bit differently. If you are in the mood and can spare the time I would be very grateful if you could formulate an alternative TL;DR that is better aligned with your perception of the work.\"}",
"{\"title\": \"-\", \"comment\": \"Dear author,\\n\\nI guess I missed this answer. I'm not sure it is fair to claim this CNN is more interpretable, in the sens that this work opens more questions than it closes. \\\"a transparent and interpretable spatial aggregation mechanism\\\", is a bit of an overkill in my humble opinion. Do not worry, this does not affect my review or score, however I do prefer to be honest on this point.\\n\\nI think in the current context of DL, such works should make clear statements of their contributions(and there are a lot here!)\\n\\nRegards,\\n_\"}",
"{\"metareview\": \"This paper presents an approach that relies on DNNs and bags of features that are fed into them, towards object recognition. The strength of the papers lie in the strong performance of these simple and interpretable models compared to more complex architectures. The authors stress on the interpretability of the results that is indeed a strength of this paper.\\n\\nThere is plenty of discussion between the first reviewer and the authors regarding the novelty of the work as the former point out to several related papers; however, the authors provide relatively convincing rebuttal of the concerns.\\n\\nOverall, after the long discussion, there is enough consensus for this paper to be accepted to the conference.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta Review\"}",
"{\"title\": \"Thanks for this rebuttal\", \"comment\": \"Thank you. It answered all my concerns/questions.\"}",
"{\"title\": \"Author's summary of rebuttal discussion\", \"comment\": \"We would like to thank all reviewers for their valuable feedback and we very much appreciate their assessment of our work as \\u201cinteresting\\u201d (R1), \\u201cworth to be shared in the community\\u201d (R2) and \\u201ca valuable contribution to ICLR\\u201d (R3).\", \"we_responded_in_detail_to_the_comments_of_each_reviewer_below_and_summarise_here_the_main_changes_to_the_manuscript\": [\"We clarified that this work is unrelated to region proposal models (like Wei et al. or Tang et al.) in the related work section. (R1)\", \"We added an experiment probing the sensitivity of the BagNets to the precise numerical values of the heatmaps in the appendix. (R3)\", \"We added runtime measurements for BagNets to the results section. (R1)\", \"We added a paragraph in the introduction to define precisely our meaning of interpretability. (R3)\", \"We defined our notion of \\u201clinear BoF models\\u201d and commented on its relevance in the model description. (R3)\", \"We straightened the use of the word \\u201cfeature\\\" throughout the manuscript. (R3)\", \"We modified Figure A.1 to clarify the downsampling step. (R3)\"]}",
"{\"title\": \"-\", \"comment\": \"Thanks again for the open discussion and for thinking about raising your score. Please let us know if there are any further clarifications or questions that we should address.\"}",
"{\"title\": \"Thresholding has little effect on accuracy\", \"comment\": \"As promised we tested how thresholding the logits affects accuracy. We thresholded in two ways: first, by setting all values *below* the threshold to the threshold (and all values above the threshold stay as is). In the second case we binarised the heatmaps by setting all values below the threshold to zero and all values above the threshold to one (this completely removes the amplitude). Please find the results at https://ibb.co/eLJqfV .\\n\\nMost interestingly, for certain binarization thresholds the top-5 accuracy is within 1-2% of the vanilla BagNet performance. This indeed supports your intuition that the amplitude of the heatmaps is not the most important factor. We will include these results (with some more intermediate values) in the appendix.\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks a lot for appreciating our contribution!\\n\\n> Comparison with attention models is necessary to compare the important patches obtained from conventional networks.\\n\\nIn the paper (section 4.3) we quantitatively show that the patches important to BagNets are also important for standard CNNs. Is that the direction you were thinking about? If you have a different experiment in mind we would like to kindly ask you for more details.\"}",
"{\"title\": \"Thanks for the insightful review!\", \"comment\": \"Please excuse our late response and thanks for your insightful feedback and your positive assessment! We are in the middle of the thresholding experiment you suggested. We report on the results later today or tomorrow and would like to respond to the rest of your comments and suggestions:\\n\\n1) Reference to scattering network [1]\\nThat\\u2019s indeed an interesting and related reference with respect to spatially constrained network architectures that we\\u2019ll add to the manuscript. The scattering network extracts patches of size 14 x 14 pixels and employs an additional two-layer fully connected network (or a ResNet-10) on top of those features. That unfortunately makes the spatial aggregation again harder to pinpoint (compared to BagNets) but the work is nonetheless an important precursor. We'll add the reference to the manuscript.\\n\\n2) \\u00ab We construct a linear DNN-based BoF \\u00bb \\nThanks for pointing this out, we have to sharpen the related definitions. For us, the *classifier* is the part of the model _after_ the spatial aggregation (spatial average). The distinction of a linear vs a non-linear classifier is important because only the first is interchangeable with the spatial aggregation, which means that we can interpret the model as extracting evidence from each patch, which is then averaged across all patches. This guarantees that we can exactly attribute how important each patch is for the final model decision. This is not true if the classifier is non-linear. We will clarify this definition and its importance in the manuscript and hope that this sufficiently addresses your concern.\\n\\n3) simplicity is subjective\\nWe agree that one should add a more specific qualifier on \\u201csimpler\\u201d and \\u201cmore interpretable\\u201d as these terms can refer to different concepts. Thanks for pointing this out. 
What we mean is that the decision making process is more transparent: whereas CNNs take the whole image and somehow churn out a final label, the BagNets linearly accumulate evidence from very small patches to make a decision. This restricts the decision making to features of small size (e.g. BagNets cannot use shape), which makes it easier to understand how certain decisions have been reached (in terms of which image parts have been used). We will clarify this in the manuscript.\\n\\n4) \\u201cthis work basically leaves the problem of understanding general CNNs to the problem of understanding MLPs\\u201d\", \"in_terms_of_the_evidence_extraction_from_local_patches_you_are_right\": \"our paper has nothing to add here. However, from the perspective of the whole image the decision making is more transparent (in terms of which image features are being used for classification) than for general CNNs. In addition, the MLPs now only look at small image patches, which means that the input distribution has much lower entropy. This makes it potentially easier to analyse what evidence is extracted by the MLPs. We will add a clarifying sentence into the manuscript.\\n\\n5) Network depiction in Appendix A\\nThanks for pointing this out, indeed it looks as if the down pooling is performed in each block whereas in reality it\\u2019s only used in the first. We\\u2019ll add an arrow with a description saying \\u201cdownsampling is only performed in the first block\\u201d in each group.\\n\\n6) Usage of word \\u201cfeatures\\u201d\\nThanks for pointing this out, we\\u2019ll go through the manuscript and change the usage to \\u201cimage feature\\u201d (for small patches) or \\u201cfeature embedding\\u201d (to refer to the feature vector extracted by the DNN).\", \"q1\": \"I was wondering if you did try manifold learning on the patches ? Do you expect it to work ?\\nThat\\u2019s an excellent question that we have not yet explored. 
We have some ideas in that direction (unsupervised learning of low-level image features) but no results to support or refute this idea.\", \"q2\": \"Is there a batch normalization in the FC or a normalization? Did you try to threshold the heat maps before feeding them to the linear layer? I'm wondering indeed if the amplitude of those heatmaps is really key.\\n\\nThere is no normalisation between the average pooling and the linear classifier. We report back on the thresholding experiment today or tomorrow.\", \"q3\": \"do you think it would be easy to exploit the non-overlapping patches for a better parallelization of computations ?\\n\\nThat\\u2019s a good point, indeed that would be possible. One could simply cut an image into smaller parts (e.g. into 128 x 128 patches that overlap by q pixels on the borders where q is the RF size of the BagNet), then run each through the BagNets on separate GPUs and then sum the output of the linear classifier (before the softmax) from each part/GPU.\"}",
"{\"title\": \"Rebuttal?\", \"comment\": \"I'm respectfully wondering if the authors had any thoughts w.r.t. this review?\"}",
"{\"title\": \"Runtime is fast and deeper networks are only gradually shifting away from linear BagNets\", \"comment\": \"Thanks for your comments! Regarding runtimes, a BagNet-32/16/8 running on a GTX 1080 Ti can process 155 (+- 5) images / second of size 3 x 224 x 224 (in batches of 10). For comparison, a ResNet50 can process 570 images in the same time, so BagNets are around 3 to 4 times slower than standard ResNets. Please remember that BagNets are basically ResNets but with most 3x3 convolutions replaced by 1x1 convolutions, so this timing is roughly expected (we have less spatial dimensionality reduction which explains the increased runtimes).\\n\\nAs for deeper neural networks, we indeed observe a large deviation from BagNets the deeper and more precise the networks are. This can be observed from stronger non-linear interactions between spatial patches (Figure 6) and the reduced effectiveness of masking local regions (Figure 8). These deviations may come from (1) a more non-linear classifier on top of the local features or (2) larger local feature sizes. In any case, this is a gradual shift away from linear BagNets and we see it as a refinement of our results, not a contradiction.\"}",
"{\"title\": \"Thanks for your feedback\", \"comment\": \"Thanks for your feedback! The word \\\"interpretable\\\" has different meanings for different people and we agree that we should be careful to define exactly what we mean by this term. There is a large body of literature trying to \\\"understand\\\" CNN decisions by means of a post-hoc feature attribution (i.e. which image parts have been important for the decision). So we mean \\\"more interpretable\\\" in the sense that this architecture transparently integrates evidence from different spatial locations and thus lets us precisely track which spatial features have been contributing how much to the final decision. For each individual location, however, the CNN feature extraction is still a black box. In other words, we reduced the complexity (and thus increased interpretability) of CNN decision making by introducing a transparent and interpretable spatial aggregation mechanism on top of a (still blackbox) local feature extraction. We'll update the manuscript to reflect this perspective more clearly and would appreciate your feedback.\"}",
"{\"title\": \"-\", \"comment\": \"@R2: \\\"[3]\\\" can you comment on the accuracy of the paper you report?\\n\\n@R2, \\\"time complexity and speed\\\": Do you think it would be possible to design cuda routines that act in parallel on patches?\\nHowever, I agree the memory use is more tricky, but I'm ok with it; this is not an engineering paper.\\n\\n@R2: \\\"ROIPooling\\\": could you point us to a paper using it for classification? I'd be very interested to read more about it. Thanks.\"}",
"{\"title\": \"-\", \"comment\": \"@authors: I'm not sure you design a more interpretable CNN: your analysis is purely spatial. I think this should be weakened in the writing because it is misleading. I agree with the other points otherwise.\"}",
"{\"title\": \"Thanks for the open discussion and further comments\", \"comment\": \"First and foremost thanks for this open debate and for reconsidering your decision. We will try to clarify the relation to the works you mentioned in our manuscript. A few more comments from our side:\\n\\n(1) Reference [3] builds a new feature vector for an image that combines a feature vector of the whole image with feature vectors of 128 x 128 and 64 x 64 pixel patches (in order to increase invariance to image transformations). The resulting classification is thus neither more interpretable nor constrained to small patches.\\n\\n(2) You mention that we should use RoIPooling to compare with [Tang]. On what metric would you want this comparison to be performed? We do not claim performance advantages in terms of object classification and do not perform object discovery (one of the subgoals of [Tang et al] which is why they use PASCAL VOC). We'd very much appreciate if you could clarify what exact experiment you have in mind.\\n\\n(3) Time complexity of BagNets: a BagNet-32/16/8 running on a GTX 1080 Ti can process 155 (+- 5) images / second of size 3 x 224 x 224 (in batches of 10). For comparison, a ResNet50 can process 570 images in the same time, so BagNets are around 3 to 4 times slower than standard ResNets. Please remember that BagNets are basically ResNets but with most 3x3 convolutions replaced by 1x1 convolutions, so this timing is roughly expected (we have less spatial dimensionality reduction which explains the increased runtimes).\\n\\nAll in all, the main contributions of this work are (1) a more transparent and interpretable object recognition pipeline (in terms of precisely which object features are being used for classification), (2) the insight that ImageNet can be solved to high accuracy with very small and local image features (so e.g. 
no shape recognition is required to solve ImageNet) and (3) the insight that standard and widely used ImageNet CNNs seem to use a similar BoF classification strategy. We believe that these insights go way beyond previous work and are not at all addressed in the region proposal literature. Please note that we do not want to claim that our architecture is revolutionary but that we can draw important insights from it about object classification in natural images and what internal decision making process CNNs may use in these tasks (which, given the lack of understanding of current CNN architectures, is dearly needed).\"}",
"{\"title\": \"I agree with the authors that BagNets have the smallest patches and the other papers (Tang et al and the NetVLAD) papers have \\\"large patch\\\" due to the receptive fields.\", \"comment\": \"I agree with the authors that BagNets have the smallest patches and the other papers (Tang et al and the NetVLAD) papers have \\\"large patch\\\" due to the receptive fields.\\n\\nThe first work in this field [3] uses raw patches and does not have receptive fields.\\n\\n[3] Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale orderless pooling of deep convolutional activation features.\\n\\nHowever, [3] is not end-to-end trained. \\n\\nSo the way of combining CNN with BoF is different from the previous works. But it is not fundamentally different. \\n\\nIf all the rest reviewers are willing to accept the paper, I can give a weak accept.\\n\\nBut, still, I want the authors to give more details about the time complexity and speed. In addition, to make the paper more convincing, the authors should use RoIPooling to compare with [Tang].\\n \\n[Tang] Tang et al. Deep Patch Learning for Weakly Supervised Object Classification and Discovery, Pattern Recognition 2017\"}",
"{\"title\": \"Another perspective that might help\", \"comment\": \"Maybe the following perspective also helps: the works you cite use BoF over larger image regions, but the embeddings for each region are still based on conventional, non-interpretable DNNs (like VGG). Our work \\\"opens this blackbox\\\" (to use a very stressed term) and provides a way to compute similar region embeddings in a much more interpretable way as a linear BoF over small patches. In other words, if the works you cite would use BagNets instead of VGG, they would basically use a \\\"stacked BoF\\\" approach: first, small and local patches are combined to yield region embeddings (BagNet), and these region embeddings are used by a second BoF to infer image-level object labels and bounding boxes.\"}",
"{\"title\": \"The effective minimum patch size in the cited works is much larger than 32 x 32 pixels\", \"comment\": \"Thanks for taking the time to respond! To be concrete we'll refer to Tang et al. 2017 in our response.\\n\\nWe believe the statement that \\\"the small patch is 32x32 pixels\\\" is based on a confusion between region proposals (the patches/bounding boxes that you see) and receptive fields. The region proposals spatially crop parts of the highest conv layer activations (e.g. for VLAD encoding, see Figure 4 in Tang et al.). What is shown in visualisations is the image part that corresponds to the cropped part (i.e. if 1/4 of the conv layer is cropped then 1/4th of the image is shown as proposal region). But that is misleading: since each feature vector already sees large parts of the image (212 x 212 pixels in VGG16 to be precise), the effective image region is much larger then the visualised region proposal (minimum is 212 x 212 pixels).\\n\\n> Besides, the time complexity issue of BagNet is not addressed in the paper.\\n\\nBagNets have roughly the same runtime as standard ResNet-50's (it's slightly higher because we have less pooling). We will add precise measurements to the paper, thanks for the suggestion.\\n\\nAs for previous work, in the corresponding section we wrote \\\"Predominantly, DNNs were used to replace the previously hand-tuned feature extraction stage in BoF models, often using intermediate or higher layer features of pretrained DNNs\\\" which, as far as we can see, pretty much applies to the paper you cite (all of them are based on high layer features of AlexNet and VGG). The references that we cite are:\\n\\n[1] Jiewei Cao, Zi Huang, and Heng Tao Shen. Local deep descriptors in bag-of-words for image retrieval. In Proceedings of the on Thematic Workshops of ACM Multimedia 2017\\n[2] Jiangfan Feng, Yuanyuan Liu, and Lin Wu. Bag of visual words model with deep spatial features for geographical scene classification. Comp. Int. 
and Neurosc., 2017:5169675:1\\u20135169675:14, 2017.\\n[3] Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale orderless pooling of deep convolutional activation features.\\n[4] Eva Mohedano, Kevin McGuinness, Noel E. O\\u2019Connor, Amaia Salvador, Ferran Marques, and Xavier Giro-i Nieto. Bags of local convolutional features for scalable instance search. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR \\u201916,\\n[5] Joe Yue-Hei Ng, Fan Yang, and Larry S. Davis. Exploiting local features from deep networks for image retrieval. In CVPR Workshops,\\n[6] Fahad Shahbaz Khan, Joost van de Weijer, Rao Muhammad Anwer, Andrew D. Bagdanov, Michael Felsberg, and Jorma Laaksonen. Scale coding bag of deep features for human attribute and action recognition. CoRR, abs/1612.04884, 2016.\"}",
"{\"title\": \"-\", \"comment\": \"@R2: can you comment on the receptive field size of the final layer of the BagNet versus the works you mentioned?\"}",
"{\"title\": \"I still have the previous questions\", \"comment\": \"Thanks for the authors' response!\\n\\nWhat are the similar papers cited in the paper? \\n\\nIn the previous patch-based deep learning methods, there are multi-scale patches. For example, in PASCAL VOC, the whole image is about 500*600 px and the small patch is 32*32 px; they are not whole-object patches. In fact, it is not impossible to obtain whole-object patches, unless object detection has been perfectly done :)\\n\\nRegarding the effectiveness of highlighting the useful features/patches to explain CNNs, this also has been done before. Please refer to the papers I mentioned before; there are figures to useful patches. In computer vision, there are many papers working on learning mid-level features, meaningful patterns or deep patterns. You may also refer to them.\\n\\nIn my understanding, methodologically, there is nothing new in the paper. The explanations about the interpretability of deep nets are not deep enough (not inside of the deep net) and there are many works had ready done similar things.\\n\\nBesides, the time complexity issue of BagNet is not addressed in the paper.\"}",
"{\"title\": \"We do cite similar approaches but they use whole-object patches (instead of small parts), barely increase interpretability and do not shed light on decision making in CNNs\", \"comment\": \"Thank you for reviewing our paper. We would like to make a quick clarification right away, which we hope will change your assessment. All works you cite use non-linear BoF encodings on top of pretrained VGG (or AlexNet) features; the effective patch size of individual features is thus large and will generally encompass the whole object of interest. In contrast, our BagNets are constrained to very small image patches (much smaller than the typical object size in ImageNet), use no region proposals (all patches are treated equally) and employ a very simple and transparent average pooling of local features (no non-linear dependence between features and regions). That\\u2019s why BagNets (1) substantially increase interpretability of the decision making process (see e.g. heatmaps), (2) highlight what features and length-scales are necessary for object recognition and (3) shed light on the classification strategy followed by modern high performance CNNs. None of the cited papers addresses any of these contributions.\", \"ps\": \"We do cite similar approaches in our paper, see first paragraph of related literature. We will add your references there.\"}",
"{\"title\": \"Combing Patch-level CNN and BoF model has been done before, but the paper has the smallest patch\", \"review\": \"The idea of image classification based on patch-level deep feature in the BoF model has been studied extensively.\", \"just_list_few_of_them\": \"Wei et al. HCP: A Flexible CNN Framework for Multi-label Image Classification, IEEE TPAMI 2016\\nTang et al. Deep Patch Learning for Weakly Supervised Object Classification and Discovery, Pattern Recognition 2017\\nTang et al. Deep FisherNet for Object Classification, IEEE TNNLS\\nArandjelovi\\u0107 et al. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition, CVPR 2016\\n\\nThe above papers are not cited in this paper.\\n\\nThere are some unique points. This work does not use RoIPooling layer and has results on ImageNet. But, the previous works use RoIPooling layer to save computations and works on scene understanding images, such as PASCAL. \\n\\nBesides, the paper uses the smallest patch among all the patch-based deep networks. It is interesting.\\n\\nI highly encourage the authors to finetune the ImageNet pre-trained BagNet on PASCAL VOC and compare to the previous patch-based deep networks.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"This paper combines the concept of Bag-of-Feature (BoF) with modern DNN to propose more interpretable neural network framework. Since the proposed method can achieve similar performance to the modern DNN, it can be an alternative to DNN. However, the paper lacks a description of the test phase so it is not clear how many qxq patches are extracted from the full image. As I understand, BagNet extract many small patches from the image, so probably it takes a long time to test one image. In my opinion, it is good to report the test time for the image.\\n\\nThe most interesting part of this paper is section 4.3 which supports the argument that modern DNN learns similar local features to the BagNet. The four experiments in section 4.3 show that VGG16 acts quite similar to the BagNet. On the other hand, the same experiments clearly show that deeper networks such as ResNet-51, DenseNet act totally different from BagNet. In my opinion, these results seem to be contrary to the contribution of the paper that modern DNN can be explained as BoF framework.\", \"title\": \"Interesting idea and results with some comments\"}",
"{\"title\": \"This paper is worth being accepted. The bag-of-words information in the neural network is important for high prediction accuracy. Possibly has high impact in the community and need to be further investigated.\", \"review\": \"This paper suggests a novel and compact neural network architecture which uses the information within bag-of-words features. The proposed algorithm only uses the patch information independently and performs majority voting using independently classified patches. The proposed method provides the state-of-the-art prediction accuracy unexpectedly, and several additional experiments show the state-of-the-art neural networks mainly learn without association between information in different patches.\\n\\nThe proposed algorithm is simple and does not provide completely new idea, but this paper has a clear contribution connecting the previous main idea of feature extraction, bag-of-words, and the prevailing blackbox algorithm, CNN. The results in the paper are worth to be shared in the community and need further investigated.\\n\\nThe presented experiments look fair and reasonable to show the importance of the independent patch information (without association between them), and the presented experimental results show some state-of-the-art methods also perform with independent patch information. \\n\\nComparison with attention models is necessary to compare the important patches obtained from conventional networks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting empirical analysis\", \"review\": \"This is an experimental paper that investigates how spatial ordering of patches influences the classification performances of CNNs. To do so, the authors design CNNs close to ResNets that almost only consist in a simple cascade of 1x1 convolutions, obtaining relatively small receptive field. It is an interesting read, and I recommend it as a valuable contribution to ICLR, that might lead to nice future works.\\n\\nI have however several comments and questions, that I would like to be addressed.\\n\\n1) First I think a reference is missing. Indeed, to my knowledge, it is not the first work to use this kind of techniques. Cf [1]. This does not alterate however the novelty of the approach.\\n\\n2) \\u00ab We construct a linear DNN-based BoF \\u00bb : I do not like this formulation. Here, you assume that you build a ResNet-50 with 1x1 as a representation and have a last final linear layer as a classifier. One could also claim it is a ResNet-48 as a representation followed by 2 layers of 1x1 as a classifier.\\n\\n3) \\u00ab our proposed model architecture is simpler \\u00bb this is very subjective because for instance the FV models are learned in a layer-wise fashion, which makes their learning procedure more interpretable because each layer objective is specified. Furthermore, analyzing these models is now equivalent to analyze a cascade of fully connected layers, which is not simple at all.\\n\\n4) Again, the interpretability mentioned in Sec. 3 is in term of spatial localization, not mapping. I think it is important to make clear this consideration. 
Indeed, this work basically leaves the problem of understanding general CNNs to the problem of understanding MLPs.\\n\\n5) The graphic of the Appendix A is a bit misleading : it seems 13 downsampling are performed whereas it is not the case, because the first element of each group of block is actually only done once.(if I understood correctly)\\n\\n6) I think the word feature is sometimes mis-used: sometimes it seems it can refer to a patch, sometimes to the code for a patch. (\\u00ab Surprisingly, feature sizes assmall as 17 \\u00d7 17 pixels \\u00bb)\", \"i_got_also_few_questions\": \"\", \"q1\": \"I was wondering if you did try manifold learning on the patches ? Do you expect it to work ?\", \"q2\": \"Is there a batch normalization in the FC or a normalization? Did you try to threshold the heat maps before feeding them to the linear layer? I'm wondering indeed if the amplitude of those heatmaps is really key.\", \"q3\": \"do you think it would be easy to exploit the non-overlapping patches for a better parallelization of computations ?\\n\\nFinally, I find very positive the amount of experiments to test the similarity with standard CNNs. Of course, it\\u2019s far from being a formal proof, but I think it is a very nice first step.\\n\\n[1] Oyallon, Edouard, Eugene Belilovsky, and Sergey Zagoruyko. \\\"Scaling the scattering transform: Deep hybrid networks.\\\" International Conference on Computer Vision (ICCV). 2017.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Architecture search could increase performance and/or efficiency\", \"comment\": \"Dear Eugene,\\nthanks for your comment and the interesting reference! Indeed our results seem to confirm your suspicions regarding integration of spatial information. We'll add your paper into our related work section.\\n\\nWe did not vary our base architecture. For those reasons I'd expect that one can reach even higher performance with a suitable hyperparameter/architecture search or be much more efficient (more shallow/thin architecture) than what we currently use. These could definitely be interesting future directions to pursue.\"}",
"{\"comment\": \"This paper was a nice read. I find the results in this paper quite interesting and it is refreshing to see work revisiting some of the underlying assumptions in our modern computer vision pipelines. I wanted to point out a related result in our recent work https://arxiv.org/pdf/1703.08961.pdf (Sec 2.3,3.1) where we show that using a model localized 16x16 patches can obtain an AlexNet accuracy on imagenet. Specifically we had used a (non-overlapping) local transform with a 16x16 window followed by 3 1x1 convolutions and then an MLP. Indeed, although the MLP could potentially exploit more global spatial information we conjectured this would be quite hard/unlikely, and I believe your result that directly aggregates the predictions of the local encodings reaching nearly the same accuracy confirms this to a degree.\\n\\nI was also wondering if you have tested models other than resnet50-like models as your base, and if so whether those gave substantial differences in the result when varying the actual model (e.g. shallower/thinner/ or non-residual). One could speculate that models applied to smaller sized patches could require a less complex network than is typically used (a potential advantage of this approach). If I understood your model is already rather small compare to the base resnet-50?\", \"title\": \"Interesting work and insights, a potentially related reference\"}"
]
} |
|
BJWfW2C9Y7 | Predictive Local Smoothness for Stochastic Gradient Methods | [
"Jun Li",
"Hongfu Liu",
"Bineng Zhong",
"Yue Wu",
"Yun Fu"
] | Stochastic gradient methods are dominant in nonconvex optimization especially for deep models but have low asymptotical convergence due to the fixed smoothness. To address this problem, we propose a simple yet effective method for improving stochastic gradient methods named predictive local smoothness (PLS). First, we create a convergence condition to build a learning rate varied adaptively with local smoothness. Second, the local smoothness can be predicted by the latest gradients. Third, we use the adaptive learning rate to update the stochastic gradients for exploring linear convergence rates. By applying the PLS method, we implement new variants of three popular algorithms: PLS-stochastic gradient descent (PLS-SGD), PLS-accelerated SGD (PLS-AccSGD), and PLS-AMSGrad. Moreover, we provide much simpler proofs to ensure their linear convergence. Empirical results show that our variants have better performance gains than the popular algorithms, such as, faster convergence and alleviating explosion and vanish of gradients. | [
"stochastic gradient method",
"local smoothness",
"linear system",
"AMSGrad"
] | https://openreview.net/pdf?id=BJWfW2C9Y7 | https://openreview.net/forum?id=BJWfW2C9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xrg92keN",
"rkgX7TrIp7",
"BkxhyCoV67",
"S1lsYBrI2m",
"SkxtxbaDjQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544698348644,
1541983515393,
1541877220414,
1540932995461,
1539981553021
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1149/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1149/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1149/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1149/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1149/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Dear authors,\\n\\nAll reviewers pointed to severe issues with the analysis, making the paper unsuitable for publication to ICLR. Please take their comments into account should you decide to resubmit this work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Issues with the analysis\"}",
"{\"title\": \"Unrealistic assumptions, trivial theory\", \"review\": \"# Unrealistic assumptions and trivial theory\\n\\nThis papers proposes a method to adjust the learning rate of stochastic gradient methods. The problem is of great importance but the theoretical results and presentation contain many issues that make the paper unfit for publication.\\n\\nThe main issue that I see is that the assumption made are unrealistic and make the theory trivial. First, for gradient descent, the authors assume that the gradient is of the form L(x_t) (x_t - x*). Under this assumption, gradient descent converges on a single step with step size 1 / L(x_t). In the stochastic setting, they assume that *each* stochastic gradient is of the form L_i(x_t) (x_t - x*), Eq. (11). Again, SGD in this scenario converges in a single iteration with step size 1 / L_i(x_t).\\n\\nNo wonder in this scenario the authors are able to obtain linear convergence of SGD to arbitrary precision (which is known to be impossible even for quadratics).\\n\\n\\n# Other Issues\\n\\n* Motivation of Eq. (9) is not discussed in sufficient detail. It is unclear to me how to obtain (9) from (7) as the authors mention. Regarding notation, L(x_t) is a scalar, hence (9) could be written more simply as \\\\nabla f(x_t) = L(x_t) (x_t - x*). Why the need for the Kronecker product?\\n\\n* The authors should clearly state what are the assumptions in the theorem statement. For theorem 1 these are not clearly stated, and phrases like \\\"Theorem 1 provides a simple condition for the linear convergence of SGD\\\" give the wrong impression that the Theorem is widely applicable.\\n\\n\\n# Minor\\n * Belo Eq. (10): \\\"where \\\\epsilon_1 is a parameter to prevent ||x_t - x_{t-1}|| going to zero: . 
I guess what the authors meant is to prevent *the denominator* going to zero, you do want ||x_t - x_{t-1}|| to go to zero as you approach a stationary point\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The Analysis seems incorrect\", \"review\": \"The paper proposes to use an estimate of the 'local' smoothness constructed by taking the difference of the gradients along the previous step. This is a simple idea and has been considered before in literature. The authors seem to take a very simplistic approach to the problem which seems to not work at all in high dimensions. I am reasonable certain that the analysis is incorrect as it is impossible to get linear convergence via SGD or even with GD in general settings. Looking at the proof which is written in a very unreadable way reveals that they make multiple assumptions which holds basically in the case of a quadratic and then further only in one dimension. In which case such a rate with GD is trivial.\\n\\nSo the theory is blatantly wrong. Regarding the experiments they also look shaky at best and sometimes they diverge. I believe the paper is much below standard for ICLR.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Limitedness of contribution and incorrectness of analysis\", \"review\": \"This paper considers the finite-sum optimization problem that is typically seen in machine learning, and proposes methods that adaptively adjust the learning rate by estimating the local Lipschitz constant of the gradient.\\n\\nThe contributions of the paper seem very limited. The proposed method which estimates the local Lipschitz constant of the gradient, named local predictive local smoothness (PLS) method in the paper (equation (10)), has been proposed in [1] long ago (see equation (11) in [1]) and is very well-known to the community. It is quite surprising that the authors claim to be the first to propose this while completely ignoring previous works.\\n\\nI also believe that there are major issues with the analysis for the methods. For example, I do not understand how equation (9) could possibly hold for general functions, and how it could be possible to transform their method into the linear system in (11). Therefore I do not think this paper is technically correct. \\n\\nIn summary, I believe this paper is limited in its contribution and also has major issues in terms of technical correctness, and is well below the standard for ICLR.\", \"reference\": \"[1] Magoulas, G. D., Vrahatis, M. N., & Androulakis, G. S. (1997). Effective backpropagation training with variable stepsize. Neural networks, 10(1), 69-82.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Experimental results are too weak\", \"review\": \"In the paper, the authors try to propose an adaptive learning rate method called predictive local smoothness. They also do some experiments to show the performance.\", \"the_following_are_my_concerns\": \"1. The definition of the L(x_t) is confusing. In (8), the authors define L(x_t), and in (10), the authors give another definition. Does the L(x_t) in (10) always guarantee that (8) is satisfied? \\n\\n2. In theorem 1, \\\\mu^2 = \\\\frac{1}{n} \\\\sum_{i=1}^n L_i^2(x_t) + \\\\frac{2}{n^2} \\\\sum_{i<j}^n L_i(x_t) L_j(x_t) > v. It looks like that \\\\mu > (1-\\\\rho^2) v, no matter the selection of \\\\rho. Why?\\n\\n3. How do you compute L_i(x_t) if x is a multi-layer neural network?\\n\\n4. The experimental results are too weak. In 2018, you should at least test your algorithm using a deep neural network, e.g. resnet. The results on a two-layer neural network mean nothing. \\n\\n5. sometimes, you algorithm even diverge. for example, figure 3 second column third row.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
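The PLS reviews in the record above center on the step-size rule of the paper's Eq. (10): estimate the local Lipschitz constant from the latest gradient difference and set the learning rate to its inverse (the rule R2 traces back to Magoulas et al., 1997, Eq. (11)). A minimal sketch of that rule, with illustrative names and defaults and an `eps` guard on the denominator as discussed in the reviews — this is not the authors' implementation:

```python
import numpy as np

def pls_sgd(grad_fn, x0, steps=10, lr0=0.1, eps=1e-8):
    """Gradient descent with step size set to the inverse of a local
    smoothness estimate L(x_t) ~ ||g_t - g_{t-1}|| / ||x_t - x_{t-1}||
    (cf. Magoulas et al., 1997).  eps keeps the denominator positive."""
    x = np.asarray(x0, dtype=float)
    x_prev, g_prev = None, None
    for _ in range(steps):
        g = grad_fn(x)
        if g_prev is None:
            lr = lr0                       # no history yet: default step
        else:
            L = np.linalg.norm(g - g_prev) / (np.linalg.norm(x - x_prev) + eps)
            lr = 1.0 / (L + eps)           # higher local curvature -> smaller step
        x_prev, g_prev = x.copy(), g
        x = x - lr * g
    return x
```

On a quadratic, the estimate recovers the exact curvature after one step and the iterate jumps straight to the minimizer — which also illustrates R4's point that under the paper's assumed gradient form, convergence with step size 1/L(x_t) is immediate and the linear-rate claim is trivial.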
|
SyxMWh09KX | Attentive Task-Agnostic Meta-Learning for Few-Shot Text Classification | [
"Xiang Jiang",
"Mohammad Havaei",
"Gabriel Chartrand",
"Hassan Chouaib",
"Thomas Vincent",
"Andrew Jesson",
"Nicolas Chapados",
"Stan Matwin"
] | Current deep learning based text classification methods are limited by their ability to achieve fast learning and generalization when the data is scarce. We address this problem by integrating a meta-learning procedure that uses the knowledge learned across many tasks as an inductive bias towards better natural language understanding. Inspired by the Model-Agnostic Meta-Learning framework (MAML), we introduce the Attentive Task-Agnostic Meta-Learning (ATAML) algorithm for text classification. The proposed ATAML is designed to encourage task-agnostic representation learning by way of task-agnostic parameterization and facilitate task-specific adaptation via attention mechanisms. We provide evidence to show that the attention mechanism in ATAML has a synergistic effect on learning performance. Our experimental results reveal that, for few-shot text classification tasks, gradient-based meta-learning approaches ourperform popular transfer learning methods. In comparisons with models trained from random initialization, pretrained models and meta trained MAML, our proposed ATAML method generalizes better on single-label and multi-label classification tasks in miniRCV1 and miniReuters-21578 datasets. | [
"meta-learning",
"learning to learn",
"few-shot learning"
] | https://openreview.net/pdf?id=SyxMWh09KX | https://openreview.net/forum?id=SyxMWh09KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgYJ8R1eV",
"SygMnjw63X",
"B1xKIsQcnQ",
"rkljaHMGhm"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544705504813,
1541401514237,
1541188432672,
1540658627025
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1148/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1148/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1148/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1148/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper describes an incorporation of attention into model agnostic meta learning. The reviewers found that the paper was rather confusing in its presentation of both the method and the tasks. While the results seemed interesting, it was difficult to frame them due to lack of clarity as to what the task is, and the relation between attention and MAML. It sounds like this paper needs a bit more work, and thus is not suitable for publication at this time.\\n\\nIt is disappointing that the reviews were so short, but as the authors did not challenge them, unfortunately the AC must decide on the basis of the first set of comments by reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting results but very unclear narrative\"}",
"{\"title\": \"Review\", \"review\": \"Summary of paper: For the few shot text classification task, train a model with MAML where only a subset of parameters (attention parameters in this case) are updated in the inner loop of MAML. The empirical results suggest that this improves over the MAML baseline.\\n\\nI found this paper confusingly written. The authors hop between a focus on meta-learning to a focus on attention, and it remains unclear to me how these are connected. The description of models is poor -- for example, the ablation mentioned in 4.5.3 is still confusing to me (if the attention parameters are not updated in the inner loop of MAML, then what is?). Furthermore, even basic choices of notation, like A with a bar underneath in a crowded table, seem poorly thought out.\\n\\nI find the focus on attention a bit bizarre. It's unclear to me how any experiments in the paper suggest that attention is a critical aspect of meta-learning in this model. The TAML baseline (without attention) underperforms the ATAML model (with attention), but all that means is that attention improves representational power, which is not surprising. Why is attention considered an important aspect of meta learning?\\n\\nTo me, the most interesting aspect of this work is the idea of not updating every parameter in the MAML inner loop. So far, I've seen all MAML works update all parameters. The experiments suggest that updating a small subset of parameters can improve results significantly in the 1-shot regime, but the gap between normal MAML and the subset MAML is much smaller in the 5-shot regime. This result suggests updating a subset of parameters can serve as a method to combat overfitting, as the 1-shot regime is much more data constrained than the 5-shot regime.\\n\\nIt's unfortunate that the authors do not dig further down this line of reasoning. When does the gap between MAML on all parameters and only on a subset of parameters become near-zero? 
Does the choice of the subset of parameters matter? For example, instead of updating the attention weights, what happens if the bottommost weights are updated? How would using pretrained parameters (e.g., language modeling pretraining) in meta-learning affect these results? In general, what can be learned about overfitting in MAML?\\n\\nTo conclude, the paper is not written well and has a distracting focus on attention. While it raises an interesting question about MAML and overfitting, it does not have the experiments needed to explore this topic well.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach for few-shot text classification\", \"review\": \"This paper presents a meta learning approach for few-shot text classification, where task-specific parameters are used to compute a context-dependent weighted sum of hidden representations for a word sequence and intermediate representations of words are obtained by applying shared model parameters.\\n\\nThe proposed meta learning architecture, namely ATAML, consistently outperforms baselines in terms of 1-shot classification tasks and these results demonstrate that the use of task-specific attention in ATAML has some positive impact on few-shot learning problems. The performance of ATAML on 5-shot classification, by contrast, is similar to its baseline, i.e., MAML. I couldn\\u2019t find in the manuscript the reason (or explanation) why the performance gain of ATAML over MAML gets smaller if we provide more examples per class. It would be also interesting to check the performance of both algorithms on 10-shot classification.\\n\\nThis paper has limited its focus on meta learning for few-shot text classification according to the title and experimental setup, but the authors do not properly define the task itself.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Attentive Task-Agnostic Meta-Learning for very-few-shot learning\", \"review\": \"The authors introduce the Attentive Task-Agnostic Meta-Learning (ATAML) algorithm for text classification.\\nThe main idea is to learn task-independent representations, while other parameters, including the attention mechanism, are being fine-tuned for each specific task after pretraining. \\nThe authors find that, for few-shot text classification tasks, their proposed approach outperforms several important baselines, e.g., random initialization and MAML, in certain settings. In particular, ATAML performs better than MAML for very few training examples, but in that setting, the gains are significant.\", \"comments\": [\"I am unsure if I understand the contributions paragraph, i.e., I cannot count 3 contributions. I further believe the datasets are not a valid contribution, since they are just subsets of the original datasets.\", \"Using a constant prediction threshold of 0.5 seems unnecessary. Why can't you just tune it?\", \"1-shot learning is maybe theoretically interesting, but how relevant is it in practice?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
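Several reviews in the record above single out the idea of updating only a subset of parameters (the attention weights) in the MAML inner loop, leaving the rest task-agnostic. A toy sketch of that inner-loop restriction on a two-parameter linear model; `inner_adapt`, the parameter names, and the data are all illustrative, not the paper's architecture:

```python
import numpy as np

def inner_adapt(params, grads, inner_lr, adapt_keys):
    """One MAML-style inner-loop step that updates only the parameters
    named in adapt_keys (e.g. attention weights), leaving the rest as
    shared, task-agnostic representations."""
    return {k: v - inner_lr * grads[k] if k in adapt_keys else v
            for k, v in params.items()}

def grads_for(params, x, y):
    # Analytic gradients of mean squared error for y_hat = shared * x + task.
    err = params["shared"] * x + params["task"] - y
    return {"shared": np.mean(2.0 * err * x), "task": np.mean(2.0 * err)}

params = {"shared": np.array(1.0), "task": np.array(0.0)}
x, y = np.array([1.0, 2.0]), np.array([3.0, 5.0])
adapted = inner_adapt(params, grads_for(params, x, y), 0.1, adapt_keys={"task"})
# "shared" is untouched; only the task-specific parameter moves.
```

In full MAML the outer loop would then differentiate the query-set loss through this step with respect to all parameters; the sketch only isolates the restricted inner update that R3 suggests acts as a guard against overfitting in the 1-shot regime.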
|
SyMWn05F7 | Learning Exploration Policies for Navigation | [
"Tao Chen",
"Saurabh Gupta",
"Abhinav Gupta"
] | Numerous past works have tackled the problem of task-driven navigation. But, how to effectively explore a new environment to enable a variety of down-stream tasks has received much less attention. In this work, we study how agents can autonomously explore realistic and complex 3D environments without the context of task-rewards. We propose a learning-based approach and investigate different policy architectures, reward functions, and training paradigms. We find that use of policies with spatial memory that are bootstrapped with imitation learning and finally finetuned with coverage rewards derived purely from on-board sensors can be effective at exploring novel environments. We show that our learned exploration policies can explore better than classical approaches based on geometry alone and generic learning-based exploration techniques. Finally, we also show how such task-agnostic exploration can be used for down-stream tasks. Videos are available at https://sites.google.com/view/exploration-for-nav/. | [
"Exploration",
"navigation",
"reinforcement learning"
] | https://openreview.net/pdf?id=SyMWn05F7 | https://openreview.net/forum?id=SyMWn05F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rklejgP4xN",
"B1e5Q2Pn1N",
"HJgsJ_y2kV",
"Hygjz8oskN",
"HyxMeLoi1N",
"rye2cHjokE",
"rkgtpyX5y4",
"ByeP-Bzc1E",
"HJl1dciYkV",
"HyxtU5AVJE",
"rJeZGq04y4",
"H1gw_OAN14",
"rkgpAvVGkV",
"Bken9w4zJ4",
"BkgYymsoTm",
"SygDhGjoT7",
"rJxWrMsipX",
"rkgdffoi6m",
"Skx8ACqjp7",
"S1epUX832m",
"S1eK3wbjnX",
"rkgGSK0Mhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545003159553,
1544481825715,
1544447971132,
1544431123063,
1544431082456,
1544430995753,
1544331201284,
1544328447284,
1544301158932,
1543985744544,
1543985673233,
1543985263188,
1543813076892,
1543813011918,
1542333153449,
1542333103323,
1542332984693,
1542332943792,
1542332110363,
1541329748754,
1541244849432,
1540708665913
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1147/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1147/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors have proposed an approach for directly learning a spatial exploration policy which is effective in unseen environments. Rather than use external task rewards, the proposed approach uses an internally computed coverage reward derived from on-board sensors. The authors use imitation learning to bootstrap the training and then fine-tune using the intrinsic coverage reward. Multiple experiments and ablations are given to support and understand the approach. The paper is well-written and interesting. The experiments are appropriate, although further evaluations in real-world settings really ought to be done to fully explore the significance of the approach. The reviewers were divided, with one reviewer finding fault with the paper in terms of the claims made, the positioning against prior art, and the chosen baselines. The other two reviewers supported publication even after considering the opposition of R1, noting that they believe that the baselines are sufficient, and the contribution is novel. After reviewing the long exchange and discussion, the AC sides with accepting the paper. Although R1 raises some valid concerns, the authors defend themselves convincingly and the arguments do not, in any case, detract substantially from what is a solid submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}",
"{\"title\": \"I leave this discussion to AC, my final score for this paper is \\u201c2: strong reject\\u201d.\", \"comment\": \"After a full discussion on a proper SLAM baseline and explaining its difference with an exploration policy (such as frontier) and clarifying that authors had not been correct about arguing that R1 has missed the frontier method, now authors argue that the reviewer has flipped their arguments. That is not correct. Please read the full discussion thoroughly. The authors arguments about exploration for SLAM or for navigation are not justified correctly and are not backed up with a proper baseline.\\n \\nAs for collision avoidance, I have provided detailed explanations as to why the experiment of Figure C.4(a) in the Appendix C4 can not be taken as a avoidance baseline. Please look at my previous comments about this. The related discussion is explained fully in 3 paragraphs in my previous comments.\\n \\nObviously, the use of term \\u201creal world\\u201d instead of \\u201crealistic 3D environment\\u201d is not a correct practice in a scientific writing. Also this paper does not provide any evidence as to how well it can work on real scenes, or on a real robot or to what extend and how it can be transferred to real world. Using improper rewordings and making such arguments without an empirical or theoretical backup in a scientific paper is not correct.\\n \\nWhile I can provide more explanations to clarify more and more about the arguments authors have made, I leave the rest of this discussion to AC. My final decision for this paper, as I also mentioned in my previous comments, is \\u201c2: strong reject\\u201d.\"}",
"{\"title\": \"Response to new arguments\", \"comment\": \"Before we respond to R1, we point out that R3 supports the paper over and above R1's comments.\\n\\n**SPTM**\\nWe quote N. Savinov from his follow-up paper \\\"Episodic Curiosity Through Reachibility\\\" that is available on arXiv: \\\"SPTM (Savinov et al., 2018) does compare to the episodic memory buffer but solves a different task \\u2014 given an already provided exploration video, navigate to a goal \\u2014 which is complementary to the task in our work.\\\" Furthermore, SPTM paper does not describe their automated exploration policy in enough detail, and itself acknowledges: \\\"Effective exploration is a challenging task in itself, and a comprehensive study of this problem is outside the scope of the present paper. However, as a first step, we experiment with providing our method with walkthrough sequences generated fully autonomously \\u2013 by our baseline agents trained with reinforcement learning. This is only possible in simple mazes, where these agents were able to reach all goals. We used the best-performing baseline for each maze and repeated exploration multiple times, until all goals were located.\\\"\\n\\n**ZSVI** \\nFirst, we emphasize that we have consulted with one of the authors of ZSVI and they agree: \\\"ZSVI does not attempt to solve long-term navigation problem\\\" (more details on this below).\\n\\na) 20-30 steps vs 59 steps for ZSVI. R1 is not only wrong but is also making a petty point. We picked 20-30 from the text of ZSVI: see Sect. 3.2 > 1. Goal Finding: \\\"To test the extrapolative generalization, we keep the Turtlebot approximately 20-30 steps away from the target location in a way that current and goal observations have no overlap as shown in Figure 4.\\\" 20-30 or 59, our argument still holds.\\nb) Intermediate waypoints necessary or not. Our previous response was in consultation with authors of ZSVI. 
We reiterate: \\u201cThus, ZSVI does not attempt to solve long-term navigation problem by itself and requires an expert to break long-term navigation into several short-term navigation problems.\\u201d Once again, finding goal tasks in ZSVI are limited to when the goal is 20-30 (or 59 whichever number you want to use here) steps away. Read relevant portions of text from their paper.\\n\\n** Frontier-based Method **\", \"r1_has_changed_their_arguments_on_this_over_their_different_responses\": \"a) In the first review, R1 missed frontier-based method all together. \\nb) When we pointed that we already have much stronger baselines (frontier-based method) than R1s suggestion of using a \\\"greedy\\\" policy, R1 claimed frontier-based method is not state-of-the-art.\\nc) On reiterating that our implementation is indeed very strong, R1 has flipped their argument again, stating that we did not describe it accurately in the original version. This is again incorrect, Section 4.1 > Baselines > 1. Frontier-based Exploration was and is an accurate description of our implementation. R1 had missed the frontier-based method altogether in their original review, and we suggest R1 to read the relevant part of the paper again.\\n\\n** Collision Avoidance **\\nOur action space permits the agent to be stationary, or move around in a small circle. Such a behavior maximizes collision avoidance reward. We explicitly experimented with it in our setup and reported what we found in Appendix C4. Given our setup, what we found makes perfect sense. We are not sure what more R1 wants here, is there a different experiment you want us to run?\\n\\n**Real World Scenes** \\nHouse3D environments are realistic layouts of houses (made by people on the Internet). Infact, their paper is itself called: \\u201cBuilding Generalizable Agents With a Realistic And Rich 3D Environment\\u201d. Computer vision algorithms trained on this dataset have been shown to transfer to the real world. 
See the original SUNCG paper.\"}",
"{\"title\": \"Prior work is not discussed correctly. Arguments are not convincing. Paper lacks proper comparison with the state-of-the-art baselines. I lower my score to \\u201c2: Strong reject\\u201d (cont.)\", \"comment\": \"<<Comments about Pathak et al. 2018 Zero-shot visual imitation>>\\n\\u00a0\\nRegarding the discussion about the method presented in Pathak et al. 2018 Zero-shot visual imitation:\\n\\u00a0\\n-According to the manuscript of Pathak et al. 2018 and the reported experiments in the published paper (https://arxiv.org/pdf/1804.08606.pdf , https://openreview.net/pdf?id=BkisuzWRW\\n) this statement is not correct:\\n\\u00a0\\n\\u201cWhen GSP is applied to the navigation task, first, it requires the **goal** positions to be within **20-30 steps**\\u201d\\n\\u00a0To investigate this, please look at Figure 4 in the main manuscript. Figure 4 of Pathak et al. 2018 shows that the agent takes 42 steps to see the goal in its view and then takes 17 other steps until it reaches the goal in their settings. This is a total of 59 steps which is more that 20-30 steps.\\n\\u00a0\\n-Also, as explained in the caption of Figure 4 in Pathak et al. 2018, the exploratory behavior has naturally emerged during learning. I had mentioned in my previous review (and I repeat it here again) one would want to see how much gain in the task of navigation would be obtained if an \\u201cexploration only policy\\u201d is learned in a separate step. As can be seen from prior work, some exploration behavior naturally emerges during training and it is not clear if explicitly learning exploration is needed.\\n\\u00a0\\n-There are two testing scenarios in Pathak et al. 2018, one that can use a sequence of landmark images and one that can take one single image as a goal (Please look at Figure 4 and Table 1 in Pathak et al. 2018 for evaluations on a single image goal scenario). Both scenarios are valid can be equally used as a use case of Pathak et al. 2018. 
Therefore, this statement in the post-rebuttal is incorrect: \\u201cGSP requires **an expert** to provide a sequence of **landmark images**\\u201d\\n\\u00a0\\n\\u00a0\\n<<Comments about SPTM>>\", \"this_statement_by_authors_in_response_of_sptm_is_not_correct\": \"\", \"authors_incorrectly_say\": \"\\u201cSPTM requires the expert demonstration trajectories even at test time.\\u201d\\n\\nSPTM paper mentioned their experimental setup at test time at page 7 of their manuscript (https://arxiv.org/pdf/1803.00653.pdf):\\n\\n\\u201cWhen given a new maze, the agent is provided with an exploration sequence of the environment, with a duration of approximately 5 minutes of in-simulation time (equivalent to 10,500 simulation steps). In our experiments, we used sequences generated by a human subject aimlessly exploring the mazes. The same exploration sequences were provided to all algorithms the proposed method and the baselines.\\u201d\\n\\nThe exploration sequence in SPTM is NOT *expert demonstration trajectories* at test time. it is *aimlessly exploration* of the maze by an agent and used as a set of observations in producing waypoint observations. Then the locomotion network is used for navigating towards the waypoints.\\n\\nAlso, please note that SPTM does not have systemic dependency to human demonstration and experiments with non-human exploration is also provided in the SPTM paper in the supplemental material (Table S2). 
Thus it cannot be said that SPTM is *infeasible* without demonstrations.\", \"please_look_at_this_paragraph_from_sptm_paper_at_page_10\": \"\\u201cAdditional experiments are reported in the supplement: performance in the validation environments, robustness to hyperparameter settings, an additional ablation study evaluating the performance of the R and L networks compared to simple alternatives, experiments in environments with homogeneous textures, and experiments with automated (non-human) exploration.\\u201d\\n\\u00a0\\n\\u00a0\\nBased on the points mentioned above, authors do not discuss about prior work correctly. The arguments they have made are not convincing. Authors seem to not be knowledgeable about prior work. Authors have not provided the requested baselines. After several rounds of discussion, the paper still lacks proper experimental evaluations with the baselines and state-of-the-art methods. Based on this, I lower my score to \\u201c2\\u201d and vote for \\u201cstrong rejection\\u201d of this paper.\\n\\nAuthors do not discuss about prior work correctly. The arguments are not convincing, paper lacks proper comparison with the state-of-the-art baselines. I lower my score to \\u201c2\\u201d and vote for \\u201cstrong rejection\\u201d of this paper.\"}",
"{\"title\": \"Prior work is not discussed correctly. Arguments are not convincing. Paper lacks proper comparison with the state-of-the-art baselines. I lower my score to \\u201c2: Strong reject\\u201d (cont.)\", \"comment\": \"<<Comparing with a learned collision avoidance policy>>\", \"the_arguments_made_about_the_reason_for_not_comparing_with_a_prior_collision_avoidance_baseline_is_not_convincing\": \"-This statement made by authors is incorrect: \\u201cExperiments that we included (going forward and randomly turning at collision) uses *ground truth collision checking*, and thus already has an advantage over a policy that uses a learned model for collision checking\\u201d\\nWhile one can use ground truth collision checking, that does not suffice for a good \\u201ccollision-avoidance policy\\u201d. Ground truth collision checking in the form that authors have explained, only provides a noise-free observation representation, and it does not provide any intelligent policy for avoiding collisions. The policy that authors have used on top of the noise-free observation representation (obtained from ground truth sensing) is \\u201cgoing forward and randomly turning at collision\\u201d which is a *heuristic policy*. A learned policy may not always move forward as a means to not be trapped in dead-ends. Also, random turns at the time of collision is not optimal; at the time of predicting a near obstacle (and thus a possible future collision) a learned policy can choose actions based on previous observations so that it can lead the agent to places with less chance of collisions in the near future. Given that such behavior can provide exploration as a side product of collision avoidance, I asked for a comparison with a state-of-the-art learning-based collision avoidance policy. \\u00a0However, the authors did not provide such comparison.\\u00a0\\n\\u00a0\\n\\u00a0\\n-The arguments made by authors about the \\u201cSadeghi and Levine 2017\\u201d are not correct. 
Please look at the video provided here: https://www.youtube.com/watch?v=nXBWmzFrj5s. At minutes 3:07-3:11, the agent moves into a room and then moves out of it without *keep turning in a circle*. Also, minutes 0:43-2:33 of the same video show another example of how the agent explores a building with several rooms and how it moves out of the rooms through the doors, again without *keep turning in a circle*.\\n\\n-The experiment that the authors point to, a version of their method that only gets a collision avoidance reward in Figure C.4(a) in Appendix C4, cannot be used in lieu of a collision avoidance baseline with the current set of experiments. The reason is that a version of the proposed method that only gets a collision avoidance reward could only serve as a baseline for a learned collision avoidance policy if its performance on the task of \\u201ccollision-avoidance\\u201d had been compared with a state-of-the-art collision avoidance policy and similar results had been obtained. In other words, it is not clear whether the version of the proposed method that only gets a collision avoidance reward can compete with any of the state-of-the-art collision avoidance policies presented in prior work.\\nWith the current set of experiments conducted in this paper, the experiment of Figure C.4(a) in Appendix C4 can be taken as an ablation study on the components of the proposed method and cannot be referred to as a state-of-the-art collision avoidance policy.\"}",
"{\"title\": \"Prior work is not discussed correctly. Arguments are not convincing. Paper lacks proper comparison with the state-of-the-art baselines. I lower my score to \\u201c2: Strong reject\\u201d.\", \"comment\": \"I still do not see the paper offering significant novelty or interesting results. The proposed method does not have major technical novelty, and the experiments do not prove that the proposed method is a promising direction for the problems of interest such as \\u201cnavigation\\u201d, \\u201cmap-reconstruction\\u201d, or \\u201cgeneral vision-based policy learning\\u201d. I have provided responses to the discussion made by the authors below. Given all these discussions, I change my initial rating for the paper; I lower my score to \\u201c2\\u201d and vote for \\u201cstrong rejection\\u201d of this paper.\\n\\nI would also like to mention that the authors have not properly discussed the points raised previously, and the repeated statements about \\u201cincorrect understanding\\u201d or \\u201cmisunderstanding\\u201d of the reviewer are not valid.\\n\\nBelow I point out several of the unconvincing arguments made by the authors:\\n\\n<<Frontier-based method>>\\nAfter several rounds of discussion, and after stating several times that the reviewer has a misunderstanding, the authors have provided a list of prior works more recent than the 1997 frontier-based method paper as the actual versions of the frontier-based method they have used, and have revealed more details about their in-house implementation of the \\u201cfrontier-based baseline\\u201d. Why were these details, explanations, and prior works missing from the main manuscript in the first place?
How can this discussion be used as evidence that the more recent version of the \\u201cfrontier-based method\\u201d was used as the baseline?\\nNot citing a prior work while being aware of its existence, and at the same time using it, is not acceptable.\\nClearly, it is not right to say that the reviewer has misunderstood something which was absent in the paper.\\n\\nThe claims about the proposed method written in the paper and rebuttal are not precise.\\nFor example, here is a statement in the first line of Section 2 at Page 2 of the manuscript:\\n\\u201cOur work on learning exploration policies for navigation in real world scenes is related to active SLAM in classical robotics\\u201d\\nThe claim about \\u201clearning exploration policies for navigation in real world scenes\\u201d is obviously not correct, because the paper does not provide any evidence of how well the proposed method works in the \\u201creal world\\u201d or on \\u201creal scenes\\u201d. All experiments are conducted in a simulation environment: no real image of a scene, no real environment, and no real robot has been used in the entire paper. Prior state-of-the-art exploration works (that I also mentioned in my previous post-rebuttal comments), such as \\u201cXu, K., Zheng, L., Yan, Z., Yan, G., Zhang, E., Niessner, M., ... & Huang, H. Autonomous reconstruction of unknown indoor scenes guided by time-varying tensor fields. ACM Transactions on Graphics (TOG), 2017\\u201d, work on real scenes with a real robot and solve a real problem related to active SLAM.\"}",
"{\"title\": \"resolved\", \"comment\": \"The authors have addressed my concerns. I brought the score back up to 7. I think the paper should be accepted.\"}",
"{\"title\": \"Requested Changes Made\", \"comment\": \"Thanks for your suggestions and appreciation for our paper. In our initial response, we were trying to keep the original paper intact and add any changes in the appendix so as to make it easy for reviewers to see what we have changed. As we mentioned in the first response publicly, we promise to make the changes you requested in the final version. We sincerely hope the AC and reviewer do not penalize us for a well-intentioned but not aligned update since we did not realize the update to main paper is a necessity.\\n\\nWe have an updated paper on the website linked in the abstract with these changes. We are listing the changes we made in this update below. These will be included in the final version. We will additionally also include references to the related works that came up in discussion with R1 to the final paper as well. \\n\\n(1) We will add the following text to the \\u201cWith Estimation Noise\\u201d paragraph on page 7.\\n***\\nEven though such a noise model leads to compounding errors over time (as in the case of a real robot), we acknowledge that this simple noise model may not perfectly match noise in the real world.\\n***\\n\\n\\n(2) We will make italic the following existing text in the \\u201cWithout Estimation Noise:\\u201d paragraph on page 7. \\n***\\nNote that this setting is not very realistic as there is always observation error in an agent\\u2019s estimate of its location.\\n***\\n\\n\\n(3) We will add the following text to the Appendix C.7:\\n***\\nDetails of noise generation for experiments with estimation noise in Section 4.2: \\n1. Without loss of generality, we initialize the agent at the origin, that is $x(0) = \\\\mathbf{0}$.\\n2. The agent takes an action a(t). We add truncated Gaussian noise to the action primitive(e.g., move forward 0.25m) to get the estimated pose x(t+1), i.e., x(t+1)=x(t)+ (a(t) with noise) where x(t) is the estimated pose in time step t.\\n3. 
Iterate the second step until the maximum number of steps is reached.\\nThus, in this noise model, the agent estimates its new pose based on the estimated pose from the last time step and the executed action. Thus, we don\\u2019t use oracle odometry in the noise experiments. This noise model leads to compounding errors over time (as in the case of a real robot), though we acknowledge that this simple noise model may not perfectly match noise in the real world.\\n***\\n\\n(4) We have fixed the typo (\\u201cexiting a room\\u201d). \\n\\n\\nWe very much appreciate your understanding and kindly request you to keep the original rating. We will be happy to rephrase these changes and add further clarifications if you think they will be necessary. Please let us know. \\n\\nThanks.\"}",
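The compounding pose-estimation noise model described in the comment above (x(t+1) = x(t) + (a(t) with truncated Gaussian noise), starting from the origin) can be sketched as follows. This is an illustrative reading of that description, not the authors' code: the noise scale `sigma`, truncation `bound`, and the 2D pose representation are all assumptions.

```python
import numpy as np

def truncated_gaussian(sigma, bound, rng):
    """Sample zero-mean Gaussian noise truncated to [-bound, bound]."""
    while True:
        n = rng.normal(0.0, sigma)
        if abs(n) <= bound:
            return n

def rollout_estimated_poses(actions, sigma=0.01, bound=0.03, seed=0):
    """Integrate noisy action primitives: x(t+1) = x(t) + (a(t) + noise).

    Because each estimate builds on the previous one, the error
    compounds over the episode, as on a real robot.
    """
    rng = np.random.default_rng(seed)
    pose = np.zeros(2)            # agent initialized at the origin, x(0) = 0
    poses = [pose.copy()]
    for a in actions:             # a is a 2D displacement primitive, e.g. 0.25m forward
        noise = np.array([truncated_gaussian(sigma, bound, rng),
                          truncated_gaussian(sigma, bound, rng)])
        pose = pose + np.asarray(a) + noise
        poses.append(pose.copy())
    return np.stack(poses)

# Example: 1000 forward steps of 0.25m along x.
traj = rollout_estimated_poses([(0.25, 0.0)] * 1000)
```

Since each step's noise is bounded by `bound`, the final estimate of a 1000-step rollout can drift at most 30m (here) from the true pose, while typical drift grows roughly with the square root of the step count.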
"{\"title\": \"above the bar, concerns notwithstanding\", \"comment\": \"I reviewed the spirited discussion between R1 and the authors. I continue to think that the paper provides a fine and informative addition to the literature. It's above the bar for ICLR and I vote for acceptance.\\n\\nThe discussion did bring up many interesting and relevant references to prior (pre-deep) work. These were brought up both by R1 and by the authors. I strongly encourage the authors to incorporate these into the paper. I think this will be useful to the community. These references should be in the paper, not just on the discussion board.\\n\\nI am lowering my rating a bit from 7 to 6 because the authors did not address my request from the original review in the revision, even though they could. I see no reason not to: that's what the ICLR revision period is for.\"}",
"{\"title\": \"Clarifications to R1's incorrect understanding (2)\", \"comment\": \"**Comparison with other learning methods**:\\nWe would like to remind R1 that we added an experiment where the RL agent only gets a collision avoidance reward, in Figure C.4(a) in Appendix C4. It shows that the agent does not learn anything meaningful (which is not surprising, as even a stay-in-place policy will get a perfect reward). Sadeghi and Levine 2017 and other works on collision avoidance either explicitly use additional rewards for moving forward, or appropriately engineer the action space by forcing the agent to move forward at each time step. We did not pick the action space ourselves but used whatever came with House3D, making direct comparisons to such approaches infeasible. Also, the policy in Sadeghi and Levine 2017 shows some exploration behaviors in narrow hallways, as the only way to keep the agent moving without colliding with the walls, in this specific case, is to move forward. However, it would fail to show exploration behavior in large open spaces such as living rooms (as in our experiments), because the agent can simply keep turning in a circle to stay away from any wall. Again, we argue that the major purpose of Sadeghi and Levine 2017 is to learn a collision-free policy that keeps the agent moving, without a specific intent to keep the agent exploring the environment. The experiments that we included (going forward and randomly turning at collision) use *ground truth collision checking*, and thus already have an advantage over a policy that uses a learned model for collision checking.\\n\\nAs for the comparison to Pathak et al.'s GSP method in \\u2018Zero-shot visual imitation\\u2019: we would like to emphasize to the AC and R1 that we personally communicated with one of the authors of Pathak et al. before formulating this reply. GSP tackles a completely different problem, that of acquiring skills using self-supervision.
While GSP can do local navigation, GSP is NOT designed for long-horizon navigation tasks. When GSP is applied to the navigation task, first, it requires the **goal** positions to be within **20-30 steps** of the agent\\u2019s current position, and the agent will see the target observation within the first 5-10 steps. But in our experiments, our agent is exploring whole houses in House3D, which easily takes thousands of steps, and the agent rarely ever sees the target observation within the first 5-10 (or for that matter even 100s of) steps. Second, GSP requires **an expert** to provide a sequence of **landmark images** to guide the agent to a far target location, while our agent explores the house environment efficiently on its own without the need for experts. Thus, ZSVI does not attempt to solve the long-term navigation problem by itself and requires an expert to break long-term navigation into several short-term navigation problems. If we attempted to use ZSVI to solve long-term navigation (without \\u201cexpert waypoints\\u201d, as in our experiments), it would fail.\\n\\nZhu et al. 2017 proposed a target-driven navigation policy that can find the object given an image. However, the training and testing environments are the same in Zhu et al.\\u2019s case. Their goal is to learn a policy to find the object in the same room, which requires millions of interactions in the training/testing environment. Our work focuses on learning an exploration policy that generalizes to new environments in a zero-shot manner. During our experiments, we confirmed the same: the policy learned in Zhu et al. fails to generalize to new environments without re-training for millions of iterations, as mentioned in their paper.\\n\\nReferences:\\nDornhege, Christian, and Alexander Kleiner. \\\"A frontier-void-based approach for autonomous exploration in 3d.\\\" Advanced Robotics 27.6 (2013): 459-468.\\n\\nWang, Yiheng, Alei Liang, and Haibing Guan.
\\\"Frontier-based multi-robot map exploration using particle swarm optimization.\\\" Swarm Intelligence (SIS), 2011 IEEE Symposium on. IEEE, 2011.\\n\\nFraundorfer, Friedrich, et al. \\\"Vision-based autonomous mapping and exploration using a quadrotor MAV.\\\" Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012.\\n\\nMannucci, Anna, Simone Nardi, and Lucia Pallottino. \\\"Autonomous 3D exploration of large areas: a cooperative frontier-based approach.\\\" International Conference on Modelling and Simulation for Autonomous Systems. Springer, Cham, 2017.\\n\\nCampos, Francisco M., et al. \\\"A complete frontier-based exploration method for Pose-SLAM.\\\" Autonomous Robot Systems and Competitions (ICARSC), 2017 IEEE International Conference on. IEEE, 2017.\\n\\nMahdoui, Nesrine, Vincent Fr\\u00e9mont, and Enrico Natalizio. \\\"Cooperative Frontier-Based Exploration Strategy for Multi-Robot System.\\\" 2018 13th Annual Conference on System of Systems Engineering (SoSE). IEEE, 2018.\"}",
"{\"title\": \"Clarifications to R1's incorrect understanding (1)\", \"comment\": \"**Frontier-based exploration**:\\nWe want to clarify that the frontier-based exploration we have implemented is an improved version of the original algorithm. The original algorithm commands the agent to go along the frontier grids in 2D. However, in our version of frontier exploration, we sample a target point which, in most cases, is far away from the agent\\u2019s current location. The agent then uses a shortest-path planning algorithm to go to that target position. Since we are using a vision-based RGBD sensor, we don\\u2019t have to go through the frontier grids one by one, as the robot can see many frontier grids at one time, which greatly improves efficiency. This is a fairly efficient algorithm if pose estimates are accurate. In fact, in our implementation, we need to sample fewer than 10 target points to cover the majority of the area in most houses. Our implemented frontier-based exploration method is, in some sense, more similar to the frontier-based exploration in 3D proposed in Dornhege et al. 2013 (we will open source the code). We cited Yamauchi, 1997 because, to our knowledge, it is the earliest paper that proposed the frontier-based exploration method. Also, we would like to clarify that frontier-based exploration is not an outdated technology; it is still being used in the robotics community. Just to give a few examples, Wang et al. 2011, Fraundorfer et al. 2012, Mannucci et al. 2017, Campos et al. 2017, and Mahdoui et al. 2018 all use the frontier-based exploration method.\\n\\n**Related work**:\\nThanks for the suggestion on the related work. We are happy to add these relevant works in the final version of the paper as R1 requires. However, our work is different from these works. The main focus of Xu et al. 2017 is on generating smooth movement paths for high-quality camera scans. Bai et al.
2016 proposed an information-theoretic exploration method using Gaussian process regression. This is computationally inefficient when the kernel matrix becomes large. Thus, Bai et al. 2016 only show experiments on simplistic map environments. GPs are computationally expensive when maps are complicated, which is the case in our experiments. Kollar et al. 2008, assume access to the ground-truth map and learn an optimized trajectory that maximizes the accuracy of the SLAM-derived map. In contrast, our learning policy directly tells the action that the agent should take next and estimates the map on the fly.\"}",
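For concreteness, the frontier detection debated in this exchange (free cells on the boundary of unexplored space, from which a faraway navigation target is sampled) can be sketched as below. This is a generic Yamauchi-style illustration, not the authors' implementation: the grid labels, 4-neighborhood, and uniform sampling step are all assumptions.

```python
import numpy as np

# Occupancy-grid cell labels (illustrative convention).
FREE, UNKNOWN, OBSTACLE = 0, 1, 2

def frontier_cells(grid):
    """Return (row, col) of free cells bordering at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

def sample_frontier_target(grid, rng=None):
    """Sample one frontier cell as the next navigation target (None if explored)."""
    rng = rng or np.random.default_rng(0)
    cells = frontier_cells(grid)
    return cells[rng.integers(len(cells))] if cells else None
```

A shortest-path planner would then drive the agent to the sampled target, and the loop repeats until no frontier cells remain.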
"{\"title\": \"Response Summary to R1's mis-undertandings\", \"comment\": \"We thank R1 for reading through our paper more carefully. Unfortunately, we still believe R1\\u2019s understanding of the paper is incorrect. This is clearly highlighted by the following statement made by R1:\\n\\n\\u201cWhile the proposed method also uses human demonstration, authors argue that SPTM requires a human to demonstrate the environment, which is impractical in real-world scenarios. This is a contradicting statement.\\u201d\\n\\nSPTM requires the expert demonstration trajectories even at test time. Our work only uses human demonstration for imitation learning during training time ONLY. Again we emphasize, unlike SPTM, we *do not* require human demonstrations at test time. \\n\\nWe individually address *ALL* the other points raised by R1 below.\"}",
"{\"title\": \"Missing relevant prior works, still lack of proper evaluations. Novelty is not convincing (cont.)\", \"comment\": \"Improving performance on the downstream navigation task is listed as one of the main contributions of the paper (second paragraph of page 2). However, a proper empirical comparison with a state-of-the-art navigation method is not conducted. While it is mentioned in the rebuttal that they could combine their approach with a state-of-the-art navigation method (such as SPTM), they refused to conduct such a comparison. As also mentioned in my original review, an empirical comparison between \\u201cproposed method + a state-of-the-art navigation method (e.g. SPTM)\\u201d versus \\u201ca state-of-the-art navigation method (e.g. SPTM)\\u201d is required to understand whether the proposed exploration policy is actually needed to improve navigation, and how much the exploration step can help improve navigation.\\n\\nWhile the proposed method also uses human demonstration, the authors argue that SPTM requires a human to demonstrate the environment, which is impractical in real-world scenarios. This is a contradictory statement.\\n\\nOther prior navigation works, such as Pathak et al.\\u2019s \\u201cZero-shot visual imitation\\u201d or Zhu et al. 2017, could also be used as baselines for navigation, as both propose a goal-driven navigation method where the image of the goal is taken as input. For comparing with Pathak et al.\\u2019s \\u201cZero-shot visual imitation\\u201d, no modification would be required, as it works on a navigation task setup similar to that conducted in Section 4.3 (taking the image of the goal as input). The code of Pathak et al.\\u2019s \\u201cZero-shot visual imitation\\u201d is available on GitHub.\\n\\nBased on the above points, this paper lacks proper comparison with state-of-the-art prior works, and many relevant prior works are ignored. Technical novelty is incremental; known learning techniques and architectures are used.
In addition, the paper is also not offering a novel application or a novel problem and experiments are not conveying interesting empirical results (prior works are ignored and not cited). Also the claims of the paper for its contribution are not backed up with analytical or experimental evaluations. Based on these, my vote is for rejection of the paper.\"}",
"{\"title\": \"Missing relevant prior works, still lack of proper evaluations. Novelty is not convincing\", \"comment\": \"Per the authors' request, I read the paper one more time. Before responding to the arguments made by the authors, I would like to invite them to read the reviews thoroughly with more attention, and to consider relevant state-of-the-art research work, before arguing that the reviewer has *misunderstood* or *missed* the paper. The rebuttal has repeatedly pointed to R1 for not understanding the paper or missing things, while no convincing answers are provided for the major points that I mentioned in my (R1) review. The revised version is better than the submission; thanks to the authors for the revision. However, I still do not see this paper providing interesting technical novelty or compelling results for the major audience of the ICLR conference, and I keep my initial vote for rejection of this paper.\\n\\nKeeping in mind that one of the major goals of peer review is to provide constructive feedback, I respond to the arguments brought up by the authors in their rebuttal, hoping that they do not argue about these facts with incomplete rewordings or by pretending misinterpretation.\\n\\nHere are the responses to some of the points in the rebuttal:\\nI had not missed the comparison with the \\u201cfrontier-based\\u201d method in the manuscript.\\nI want to make it clear that the \\u201cfrontier-based exploration\\u201d method of Yamauchi, 1997 is *not* a classic SLAM method. \\u201cFrontier-based exploration\\u201d is a very old exploration heuristic proposed in 1997. Therefore, I do not consider it a \\u201ccompelling comparison point\\u201d or even a strong baseline.\\n\\nI want to highlight that the following sentence from the second paragraph of the introduction in the main manuscript is incorrect and ignores many years of active research on good exploration policies for map construction.
I have pointed to a few such prior works below. Yamauchi, 1997 cannot be considered a state-of-the-art method and is not a proper point of comparison for exploration policies.\\n\\nIncorrect sentence in the manuscript: \\u201cHow does one build a map? How should we explore the environment to build this map? Current approaches either use a human operator to control the robot for building the map (e.g. Thrun et al. (1999)), or use heuristics such as frontier-based exploration (Yamauchi, 1997)\\u201d\\n\\nI had requested a comparison with a SLAM-based method that constructs the map and then does navigation on that map (please read my original review thoroughly). Since it seems that the authors are not aware of state-of-the-art works in SLAM and map reconstruction, as well as state-of-the-art autonomous exploration policies, I point them to a few recent works (out of numerous works conducted in this area of research in the past few years):\\n\\n[a] Xu, K., Zheng, L., Yan, Z., Yan, G., Zhang, E., Niessner, M., ... & Huang, H. Autonomous reconstruction of unknown indoor scenes guided by time-varying tensor fields. ACM Transactions on Graphics (TOG), 2017.\\n\\n[b] Shi Bai, Jinkun Wang, Fanfei Chen, and Brendan Englot. Information-theoretic exploration with Bayesian optimization. In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016.\\n\\n[c] Thomas Kollar and Nicholas Roy. Trajectory optimization using reinforcement learning for map exploration. Int. J. Robotics Research, 2008.\\n\\nIt is strongly recommended that these relevant citations be added to any final version of this manuscript.\\n\\nEspecially, in [a] an autonomous exploration policy is proposed and shown to be robust to noise; it works on a real robot and in a real environment rather than just in simulation.\\n\\nThe collision avoidance policy baseline is not conducted properly.
It is not acceptable to coin a heuristic agent that \\u201cmoves straight\\u201d and then does a \\u201crandom turn\\u201d and call it a \\u201csophisticated version\\u201d of a collision avoidance policy. Such a statement is not correct. A state-of-the-art learning-based collision avoidance method should be used as the point of comparison. For example, look at Sadeghi and Levine, 2017 for a learning-based collision avoidance policy that does not fall into the degenerate solution of staying in place and also has a rich action space.\"}",
"{\"title\": \"Additional Experiments, Pointers to Existing Experiments and Clarifications (2)\", \"comment\": \"3. Comparison with other learning-based navigation works: First, we do not study a specific navigation task, but instead our contribution is a task-independent exploration policy. We do however show that exploration helps in downstream navigation tasks (Section 4.3). We use well-established Classical Path Planning, the simplest navigation algorithm for doing these experiments. This was a conscious choice so as to not-conflate quality of learned navigation policy with the quality of our learned exploration policy. Our contribution is orthogonal to navigation task itself and therefore our approach can be used in conjunction with any navigation approach. For example, the exploration data by running our policy can be used \\u2018as is\\u2019 with Savinov et al\\u2019s state-of-the-art SPTM approach [A]. SPTM otherwise requires a human to demonstrate the environment, which is impractical in real-world scenarios.\\n\\nR1 suggests we should compare to Pathak et al\\u2019s \\u201cZero-shot visual imitation\\u201d (ZSVI) as it uses \\u201cexploration strategies\\u201d and \\u201cimitation learning\\u201d for navigation. While they indeed use both terms (exploration and imitation), the context and usage is completely different.\\n (a) *Exploration for Imitation (ZSVI) vs. Imitation for Exploration (Ours)*\\n In ZSVI, exploration is used in training to collect trajectories and imitation is used in testing to follow a path. On the other hand, ours is completely the opposite. We use imitation in training to learn how to explore at test time. Again we emphasize: ZSVI does not run any explicit exploration policy during testing.\\n (b) This leads to completely different behavior of two algorithms. The time/distance range in ZSVI is much smaller as compared to ours. 
Either the goal is in the same room or they need a lot of waypoint images to solve the navigation task.\\n\\nIn order to show a comparison to \\u201cRL with a good exploration \\u2026 without explicit exploration\\u201c, we have implemented navigation on top of Curiosity Driven Exploration using Self-Supervision. As shown in Appendix C.6 (will be added to Sec 4.3), the comparison is in our favor. \\n\\n4. More Experimental Details: We have added additional details in Appendix C. We have included:\\na) Stats and floor-plans of houses used for training and testing (Appendix C1).\\nb) Coverage plots for when we run the agent for 2000 steps (Appendix C4, Fig C3). Conclusions are the same as for the original 1000 steps plots as presented in the paper.\\nc) Agent details. Step size is 0.25m forward motion, 9 degree rotations (already provided in the paper). Real world performance depends on how fast a robot is. A turtlebot-2 can move at a peak speed of 0.65 m/s, if that\\u2019s what you were looking for.\\n\\n5. More Technical Details: \\na) We have added details about map construction in Appendix C2. Yes, we can use known-loop closure techniques in SLAM, though there may still be error and we wanted to show that learning is robust to it (Fig 2 (center), video on website).\\nb) Imitation learning details are in Appendix C5.\\nc) 3D Information: Yes, you are right depth images only give 2.5D information, however, we integrate information from different views, to obtain a more complete sense of the environment than given by a single depth image. 3D information can also be extracted from RGB images, see [B] and numerous others for example.\\n\\nWe will incorporate your suggestions on presentation in the final version.\\n\\n[A] Semi-parametric Topological Memory for Navigation Nikolay Savinov, Alexey Dosovitskiy, Vladlen Koltun. ICLR 2018.\\n[B] Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene Shubham Tulsiani, Saurabh Gupta, David Fouhey, Alexei A. 
Efros, Jitendra Malik. CVPR 2018.\"}",
"{\"title\": \"Additional Experiments, Pointers to Existing Experiments and Clarifications (1)\", \"comment\": \"We thank R1 for their comments. R1\\u2019s primary concerns are about novelty and missing empirical comparison. These perhaps stem from some misunderstandings about our paper as some requested comparisons are either irrelevant or stronger comparisons are already presented in the paper. Therefore, we urge the reviewer to take a second look at the paper in light of the rebuttal.\\n\\n1. Novelty: In this paper, we learn policies for exploring novel 3D environments (Section 3 through Section 4.2), and show that exploration data, gathered by executing our learned exploration policies, improves performance at downstream navigation tasks (Section 4.3). To the best of our knowledge, this is the first work that studies learned exploration policies for navigation, systematically compares them to classical and learning-based baselines, and shows the effectiveness of exploration data for downstream tasks. In doing so, we adopt existing learning techniques (imitation learning + reinforcement learning), and map building techniques. Our novelties are orthogonal to these aspects:\\n (a) Problem formulation: Framing exploration as a learning problem, and showing the utility of exploration data for downstream tasks.\\n (b) Map based policy architectures and reward functions. Classical SLAM based approaches indeed produce maps but: (a) it still needs a policy for exploration during the map-building phase; (b) does not solve navigation rather uses geometric analysis for path planning. Our approach focuses on (a) and unlike heuristic approaches used in SLAM, we use a learning-based approach.\\n (c) We also show maps can also be used for learning effective policies, and for computing reward signals.\\n (d) Use of IL + RL to optimize our policy, as opposed to pure RL that is typically used.\\n\\n2. 
Comparison with other exploration approaches: \\na) Simple Greedy Baseline: We experimented with the suggested one-step greedy policy. Here we virtually simulate all possible actions that the agent can take, and compute the gain in coverage. We then execute the action that results in the maximum gain in coverage. At 1000 steps such a policy only covers 40m^2, as opposed to our policies that cover up to 125 m^2. This is not surprising as the policy gets stuck inside local regions of full coverage. No action leads to any increase in coverage and the agents move back and forth. The full performance plot is provided in Fig C3(a) in the updated PDF. \\n\\nNote, in the paper, we have provided a more compelling comparison point to classical exploration approaches: frontier-based method. Reviewer seems to have missed this comparison as R1 still asks for comparisons to classical approaches.\\n\\nb) Collision Avoiding Policy: A policy that purely avoids collisions has a degenerate solution of the agent staying in-place, resulting in negligible coverage (Fig C4(a)). We also tried a more sophisticated version, where the agent moves straight unless a collision happens (Fig C3(a)), at which point it randomly rotates (by angle between 0 and 2pi), and continues to move straight. To help the policy further, we used ground truth collision-checking. This policy covers 75m^2, still much lower than our performance (125m^2).\"}",
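The one-step greedy baseline described in point (a) above (virtually simulate every action, score the gain in coverage, take the argmax) can be sketched as follows. The `simulate` and `coverage` interfaces are hypothetical stand-ins for the virtual rollout the authors describe, not their actual code.

```python
def greedy_coverage_action(state, actions, simulate, coverage):
    """Pick the action with the maximal one-step gain in covered area.

    simulate(state, a) returns the (virtual) successor state;
    coverage(state) returns the area covered so far. If no action
    increases coverage, the argmax is an arbitrary zero-gain action,
    which is exactly the local-optimum failure mode noted above.
    """
    base = coverage(state)
    gains = {a: coverage(simulate(state, a)) - base for a in actions}
    return max(gains, key=gains.get)
```

On a toy 1D world where the state is (position, set of covered cells), this picks the move that reaches an uncovered cell, and stalls once every neighbor is already covered, mirroring the back-and-forth behavior reported in Fig C3(a).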
"{\"title\": \"We agree, will Incorporate Feedback into Manuscript\", \"comment\": \"Thank you for your comments and suggestions. We acknowledge that most of our evaluation is in the perfect odometry setting which is unrealistic. We experimented with a reasonable noise model that compounds over time within the episode, but we admit it may not be very realistic. We will prominently note both these points in the final version of the paper upon acceptance.\"}",
"{\"title\": \"Additional Experiments\", \"comment\": \"Thanks for your comments and suggestions. We address your specific concerns below:\\n\\n1. Explicit mapping is hand-engineering. We acknowledge (and will explicitly state in the paper) that using occupancy map as the policy input is based on domain/task knowledge. Using the occupancy map gives the agent a better representation of long-horizon memory and show great improvement compared to the policy without the map as input. We do agree ego-motion estimation in real-world might be noisy. To handle that we performed experiments with noise and show that our model seems robust (See video on the website, Fig 4b in the paper). \\n\\nWith regard to end-to-end approaches, approaches like Zhang et al. (2017) uses a differentiable map structure to mimic the SLAM techniques. These works are orthogonal to our effort on exploration. Indeed, our exploration policy can benefit from their learned maps instead of only using reconstructed occupancy map. We also believe our current approach provides a strong baseline for future end-to-end versions.\\n\\n2. Explicit environment rewards for exploration: We agree that the use of reward yielding objects throughout the environment will lead to a very similar outcome as our approach. The key distinction is that our approach instruments the agent (with a depth sensor) as opposed to instrumenting the environment. This makes our proposed formulation more amenable to being trained and deployed in the real world: all we need is an RGB-D sensor. This is a big advantage over spreading reward yielding objects that disappear as the agents arrive at those locations, which is almost impractical in the real world. With this key distinction being said, we did do several experiments where our policy is trained with external rewards. The performance is shown in Fig C4(c) in Appendix C4. 
The results show that our coverage map reward is much more effective than external rewards generated by reward-yielding objects. Our method covers 125m^2 on average, while even 4 reward-yielding objects per square meter yields only 91m^2.\\n\\n3. Role of collision avoidance penalty: We added the performance of the agent trained with our policy but with only the coverage reward (no collision penalty) in Fig C4(b) in Appendix C4. We observe that adding the collision penalty indeed helps improve performance slightly (125m^2 with the penalty as opposed to 120m^2 without it). Thus, our policy explores well even without an explicit collision avoidance penalty.\\n\\nWe will add more references to the related work and improve the writing as you suggested in the final version of the paper.\"}",
"{\"title\": \"Response Overview\", \"comment\": \"We thank the reviewers for their comments and suggestions. We are glad that the reviewers found:\\n (a) our paper to tackle an important and clearly motivated problem (R1, R3)\\n (b) our approach to be a great idea (R2), a good addition to the literature (R3) and not-complicated (R1).\\n (c) our paper to be \\u201cwell-executed\\u201d, with \\u201cvarious ablations\\u201d, and \\u201ccomparisons to \\u2026 commendably a classical SLAM baseline\\u201d (R2)\\n (d) our paper to be well-written (R1, R3), and well-explained (R2).\\nWe have answered *ALL* questions that the reviewers posed by providing additional experimental comparisons, pointing to relevant existing experiments and providing clarifications. Hopefully, this clarifies some of the misunderstandings that R1 has about our paper. Additional experiments have been added to Appendix C of the updated PDF. We will incorporate these experiments and other suggestions in camera-ready upon acceptance.\"}",
"{\"title\": \"No significant novelty, lack of experimental evaluations, missing technical details\", \"review\": \"This paper proposes a method for learning how to explore environments. The paper mentions that the \\u201cexploration task\\u201d that is defined in this paper can be used for improving the well-known navigation tasks. For solving this task, a reward function and a network architecture that uses RGBD images + reconstructed map + imitation learning + PPO is designed.\\n\\n<<Pros>>\\n\\n-The paper is well-written (except for a few typos).\\n-The overall approach is simple and does not have many complications. \\n-The underlying idea and motivation are clearly narrated in the intro and abstract, and the paper has an easy-to-understand flow. \\n\\n<<Cons>>\\n\\n**The technical novelty is not significant**\\n\\n-This paper does not provide significant technical novelty. It is a combination of known prior methods: imitation learning + PPO (prior RL work). The presented exploration task is not properly justified as to how it could be useful for the navigation task. The reconstruction of maps for solving the navigation problem is a well-explored problem in prior SLAM and 3D reconstruction methods. Overall, the novelty of the approach and the proposed problem is incremental. \\n\\n**The paper has major shortcomings in the experimental section. The presented experiments do not support the main claim of the paper, which is improving the performance in the well-known navigation task. Major baselines are missing. Also, the provided results are not convincing in doing the right comparison with the baselines.**\\n\\n-Experimental details are missing. The major experimental evaluations (Fig. 2 and Fig. 3) are based on the m^2 coverage after k steps, and the plots are cut at 1000 steps. What are the statistical properties of the 3D houses used for training and testing? E.g., what is their area in m^2? How big is each step in meters? Why are the graphs cut at 1000 steps? 
How would different methods converge after more than 1000 steps, e.g. 2000 steps? I would like to see how the different methods converge after a larger number of steps. How long would each step take in terms of time? How could these numbers convey the significance of the proposed method in real-world problem settings? \\n\\n-The experiments do not convey whether learning has significantly resulted in improved exploration. Consider a simple baseline that follows a similar approach as explained in the paper for constructing the occupancy map using the depth sensor. A non-learning agent could use this map at each step to make a greedy choice about its next action, which greedily maximizes the coverage gain based on its current belief about the map. While the performance of the random policy is shown in Fig. 2, the performance of this greedy baseline is a better representative of the lower bound of the performance on the proposed task and problem setup.\\n\\n-What is the performance of a learning-based method that only performs collision avoidance? Collision avoidance methods tend to implicitly learn good map coverage. This simple baseline can show a tangible lower bound for a learning-based approach that does not rely on a map.\\n\\n-The major promise of the paper is that the proposed exploration task can improve navigation. However, the navigation experiment does not compare the proposed method with any prior work in navigation. There is a huge list of prior methods for navigation, some of which are cited in the \\u201clearning for Navigation\\u201d section of the related works, and the comparison provided in Fig. 4 is incomplete compared to the state of the art in navigation. For example, while the curiosity-driven approach is compared for exploration, the more related curiosity-based navigation method, which uses both \\u201cexploration strategy\\u201d and \\u201cimitation learning\\u201d: \\u201cPathak, Deepak, et al. 
\\\"Zero-shot visual imitation.\\\"\\u00a0International Conference on Learning Representations. 2018.\\n\\u201c is missing from the navigation comparison. The aforementioned paper is also missing from the references. \\n\\n-Algorithm-wise, it would make the argument of the paper clearer if results were obtained by running different exploration strategies for navigation, to see whether running RL with a good exploration strategy could solve the exploration challenge of the navigation problem without needing an explicit exploration stage (similar to the proposed method) that first explores and constructs the map and then does navigation by planning.\\n\\n-The navigation problem as explained in section is solved with a planning approach that uses a reconstructed map. This is a fairly conventional approach that SLAM-based methods use. Therefore, a comparison with a SLAM method that constructs the map and then does navigation would be necessary. \\n\\n\\n**Technical details are missing or not explained clearly**\\n\\n- Section 3.1 does not clearly explain the map construction. It seems that the constructed map is just a 2D reconstruction of the space (and not 3D) using the depth sensor, which does not need transformation of the 3D point cloud. What is the exact 3D transformation that you have done using the intrinsic camera parameters? This section mentions that there can be errors in such map reconstruction because of robot noise but that alignment is not needed because the proposed learning method provides robustness against misalignment. How is this justified? Why not use the known loop-closure techniques in SLAM? \\n\\n-The technical details about the incorporated imitation learning method are missing. What imitation learning method is used? How is the policy trained during the imitation learning phase? \\n\\n-The last paragraph of the intro mentions that the proposed method uses 3D information efficiently for doing exploration. The point of this sentence is unclear. 
What 3D information is used efficiently in the paper? Isn\\u2019t it only 2.5D information (obtained by the depth sensor) that is used in the proposed method?\\n\\n**Presentation can be improved**\\n\\n-The left and right plots of Figure 3 contain lots of repetition, which creates confusion when comparing the performance of runs with different settings. These two plots should be presented in a single plot. \\n\\n- The interpretation of \\u201cgreen vs white vs black\\u201d in the reconstructed maps is left to the reader in Fig. 1. \\n\\n- Last line on page 5: there is no need for reiteration. It is already clear.\\n\\n**Missing references**\\n\\n-Since the paper is about learning to explore, a discussion of \\u201cexploration techniques in RL\\u201d is recommended to be added in at least the related work section. \\n\\n-A big list of papers on 3D map reconstruction is missing. Since the proposed method relies on a map reconstruction, those papers are relevant to this work and can potentially be used for comparison (as explained above). It is highly recommended that relevant prior 3D map reconstruction papers be added to the related work section.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good use of mapping for exploration\", \"review\": \"This is a well-explained and well-executed paper on using classical SLAM-like 2D maps to help a standard Deep RL navigation agent (convnet + LSTM) efficiently explore an environment without the need for extrinsic rewards. The agent relies on 3 convnets: one processing RGB images, one the image of a coarse map in an egocentric reference frame, and one the image of a fine-grained map in an egocentric reference frame (using pre-trained ResNet-18 convnets). Features produced by the convnets are fed into a recurrent policy trained using PPO. Two rewards are used: the increase in the map's coverage and an obstacle avoidance penalty. The agent is further bootstrapped through imitation learning in a goal-driven task executed by a human controlling the agent. The authors analyze the behavior of the navigation algorithm via various ablations, a baseline consisting of Pathak's (2017) Intrinsic Curiosity Module-based navigation and, commendably, a classical SLAM baseline with path planning to empty, unexplored spaces.\\n\\nUsing an explicit map is a great idea, but the authors need to acknowledge how hand-engineered all this is when comparing it to actual end-to-end methods. First, the map reconstruction is done by back-projecting a depth image (using known projective geometry parameters) onto a 3D point cloud, then by slicing it to get a 2D map, accumulated over time using nearly perfect odometry. SLAM was an extremely hard problem to start with, and it took decades and particle filters to get to the quality of the images this paper presents as obvious. Normally there is drift and there are catastrophic map errors, whereas the videos show a nearly perfect map reconstruction. Is the motion model of the agent unrealistic? Would this ever work out of the box on a robot in the real world? The authors brush off the need for bundle adjustment, saying that the convnet can handle noisy local maps. 
Second, how do you get and maintain such nice ego-centric maps? Compared to other end-to-end work on learning how to map (see Wayne et al. or Zhang et al. or Parisotto et al., referred to later in the paper), it looks like the authors took a giant shortcut. All this SLAM apparatus should be learned!\\n\\nOne crucial baseline that is missing is that of explicit extrinsic rewards encouraging exploration. These rewards merely scatter reward-yielding objects throughout the environment; over the course of an episode, an object reward that is picked does not re-appear until the next exploration episode, meaning that the agent needs to cover the whole space to forage for rewards. Examples of such rewards have been published in Mnih et al. (2016) \\\"Asynchronous methods for deep reinforcement learning\\\" and are implemented in DeepMind Lab (Beattie et al., 2016). Such an extrinsic reward would be directly related to the increase of coverage.\\n\\nA second point of discussion that is missing is that of the collision avoidance penalty: roboticists working on SLAM know well that they need to keep their robot away from plain-texture walls, otherwise the image processing cannot pick useful features for visual odometry, image matching or ICP. What happens if that penalty is dropped in this navigation agent?\\n\\nFinally, the authors mention the Neural Map paper but do not discuss Zhang et al. (2017) \\\"Neural SLAM\\\" or Wayne et al. (2018) \\\"Unsupervised Predictive Memory in a Goal-Directed Agent\\\", where a differentiable memory is used to store map information over the course of an episode and can store information relative to the agent's position and objects' / obstacles' positions as well.\", \"minor_remark\": \"the word \\\"finally\\\" is repeated twice at the end of the introduction.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"good paper\", \"review\": \"This paper proposes learning exploration policies for navigation. The problem is motivated well. The learning is conducted using reinforcement learning, bootstrapped by imitation learning. Notably, RL is done using sensor-derived intrinsic rewards, rather than extrinsic rewards provided by the environment. The results are good.\\n\\nI like this paper a lot. It addresses an important problem. It is written well. The approach is not surprising but is reasonable and is a good addition to the literature.\\n\\nOne reservation is that the method relies on an oracle for state estimation. In some experiments, synthetic noise is added, but this is not a realistic noise model and the underlying data still comes from an oracle that would not be available in real-world deployment. I recommend that the authors do one of the following: (a) use a real (monocular, stereo, or visual-inertial) odometry system for state estimation, or (b) acknowledge clearly that the presented method relies on unrealistic oracle odometry.\\n\\nEven with this reservation, I support accepting the paper.\", \"minor\": \"In Section 3.4, \\\"existing a room\\\" -> \\\"exiting a room\\\"\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJl-b3RcF7 | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | [
"Jonathan Frankle",
"Michael Carbin"
] | Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy. | [
"Neural networks",
"sparsity",
"pruning",
"compression",
"performance",
"architecture search"
] | https://openreview.net/pdf?id=rJl-b3RcF7 | https://openreview.net/forum?id=rJl-b3RcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Ny72rYKMP-",
"rkghY1IGtr",
"rkgXUU6b0N",
"H1gy9YbylN",
"S1xmvZRayE",
"Hkelbn3I14",
"r1l-QxArJ4",
"ryggsG-VkV",
"HygUFDOTAX",
"SkloQowaAm",
"H1gQ-4Z5CQ",
"SylwJEW9Cm",
"ryg2lfbcRQ",
"BJeHlbWcCX",
"BygUAeW5R7",
"r1xZ0Ag5Cm",
"SygdQnlqRm",
"BylPD1gKT7",
"HJeDy85anQ",
"Bkg5UpU52m",
"ryemP68v2m"
],
"note_type": [
"comment",
"comment",
"comment",
"meta_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1592630039616,
1571082115633,
1559512651368,
1544653191237,
1544573274632,
1544109047715,
1544048665340,
1543930520456,
1543501693721,
1543498531161,
1543275515146,
1543275487450,
1543274995589,
1543274733160,
1543274702199,
1543274184922,
1543273504416,
1542156126796,
1541412318970,
1541201233700,
1541004635265
],
"note_signatures": [
[
"~Rahmawati_Pratiwi1"
],
[
"~Hady_Elsahar2"
],
[
"~Kevin_Martin_Jose1"
],
[
"ICLR.cc/2019/Conference/Paper1146/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1146/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1146/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1146/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1146/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1146/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1146/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"Very interesting theory indeed. I wonder if you could make a study case comparing this method to lottery from other country such as Indonesia? As I think other countries have different system. That would make it much more interesting in my opinion.\", \"title\": \"Study Case\"}",
"{\"comment\": \"Figure 5 (iterative pruning) shows training scores after iterative pruning.\\n\\nThe conv-2 (blue solid line) training accuracy increases by ~10% when iteratively pruned, until, when only 7% of the weights are retained, the model stops fitting the data. This effect happens less for the larger models conv-4 and conv-6.\\n\\nModels also fit slightly faster, which is a bit counterintuitive to me; is there a justification for that?\", \"title\": \"Why does the winning ticket training accuracy increase during iterative pruning?\"}",
"{\"comment\": \"Nothing of consequence, I just found a typo in this sentence:\\n\\n\\\"This work was support in part by the Office of Naval Research (ONR N00014-17-1-2699)\\\"\\n\\nshould be \\n\\n\\\"This work was supported in part by the Office of Naval Research (ONR N00014-17-1-2699)\\\"\\n\\nGreat paper BTW.\", \"title\": \"Minor typo\"}",
"{\"metareview\": \"The authors posit and investigate a hypothesis -- the \\u201clottery ticket hypothesis\\u201d -- which aims to explain why overparameterized neural networks are easier to train than their sparse counterparts. Under this hypothesis, randomly initialized dense networks are easier to train because they contain a larger number of \\u201cwinning tickets\\u201d.\\nThis paper received very favorable reviews, though there were some notable points of concern. The reviewers and the AC appreciated the detailed and careful experimentation and analysis. However, there were a couple of points of concern raised by the reviewers: 1) the lack of experiments conducted on large-scale tasks and models, and 2) the lack of a clear application of the idea beyond what has been proposed previously. \\n\\nOverall, this is a very interesting paper with convincing experimental validation and as such the AC is happy to accept the work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Intriguing hypothesis with convincing experimental validation and analyses\"}",
"{\"title\": \"Additional Experiments and Graphs\", \"comment\": \"We have an update with several further experiments that examine the relationship between SNIP and our paper.\\n\\nWe have simplified our pruning mechanism to prune weights globally (instead of per-layer) with otherwise the same pruning technique. For our three main networks (MNIST, Resnet-18, and VGG-19), we find that globally-pruned winning tickets reach higher accuracy at higher levels of sparsity and learn faster than SNIP-pruned networks.\\n\\nFor example, VGG19 reaches 92% test accuracy when pruned by at most 97.2% with SNIP vs. at most 99.5% for globally-pruned winning tickets. Resnet-18 achieves 90% accuracy when pruned by at most 27% with SNIP vs. at most 89% for globally-pruned winning tickets.\\n\\nWe also performed several further experiments exploring the effect of initialization and structure on SNIP-pruned networks. We find that SNIP-pruned networks can be randomly reinitialized as well as randomly rearranged (i.e., randomly choose the locations of unpruned connections within layers) with limited impact on their accuracy. However, these networks are neither as accurate nor learn as quickly as winning tickets.\\n\\nThe fact that SNIP-pruned networks can be rearranged suggests that SNIP largely identifies the proportions in which layers can be pruned such that the network is still able to learn, leaving significant opportunity to exploit the additional, initialization-sensitive understanding demonstrated by our results.\\n\\nWe provide several graphs here (https://drive.google.com/drive/folders/1lpxJFpkF0Afq1rRqkEDnLcPN0kMV8BBC?usp=sharing) to support these claims. We will add these experiments to the final version of our paper.\"}",
"{\"comment\": \"We appreciate that the authors spared time to address our comment, and we believe that the confusion on the effect of (re-)initialization is clarified.\\nWe look forward to trying SNIP in your experimental setting once your code is released.\", \"title\": \"Thank you for your response.\"}",
"{\"title\": \"Thank you for sharing your work!\", \"comment\": \"(Edited to improve clarity and update replication results.)\\n\\nThank you for sharing your work; we are very excited to see your results, since they seem to support the lottery ticket hypothesis as posed and add substantial further evidence to our hypothesis via a different pruning technique. We will be sure to refer to the SNIP results in the final version of our paper.\\n\\nThe main statement of the lottery ticket hypothesis does not exclude the possibility that winning tickets are still trainable when reinitialized. Specifically, while the hypothesis conjectures that, given a dense network and its initialization, there exists a subnetwork that is still trainable with the original initializations, it does not require any particular behavior of this subnetwork under other initializations. Thank you for this comment; we will revise our language to make this clear.\\n\\nIn our experiments, we do find initialization to have a significant impact on the success of the pruned subnetworks we find (hence the quote you provide from our paper). You mention in your rebuttal that \\u201cSNIP finds the architecturally important parameters in the network,\\u201d perhaps reducing the relative importance of initialization for the winning tickets that you find.\\n\\nOnce your source code is made available, we would be very interested in analyzing your preliminary comparison between SNIP-pruned networks with the original initialization and SNIP-pruned networks when reinitialized; we have replicated the SNIP algorithm as presented in your paper in our own framework and produce the following results: \\n\\n* Lenet (MNIST): We confirm that the accuracy of SNIP-pruned networks does not change when they are reinitialized. 
In addition, we find that, although SNIP outperforms random pruning, SNIP-pruned networks do not match the test accuracy of our winning tickets or our randomly reinitialized winning tickets.\\n\\n* Resnet-18 (CIFAR10): We did not have time (in the 24 hours between your comment and the end of the comment period) to confirm your random reinitialization experiments on this network. We find that, although SNIP outperforms random pruning, it does not match the test accuracy of the winning tickets and only slightly outperforms the randomly-reinitialized winning tickets.\\n\\n* VGG19 (CIFAR10): (Updated) We confirm that the accuracy of the SNIP-pruned networks does not change when they are reinitialized. When training with warmup, SNIP produces networks that nearly match the accuracy of our winning tickets at the corresponding level of sparsity. However, our winning tickets learn faster than the SNIP-pruned networks. \\n\\nWe look forward to discussing SNIP in the final version of our paper as a potential \\u201cmethod of choice for the further exploration of [our] hypotheses.\\u201d However, our preliminary, replicated results suggest that there is a gap in accuracy and speed of learning between SNIP-pruned networks and our winning tickets.\\n\\n(One minor nit: you mention that we only test our method for moderate sparsity levels, but our graphs show that we continue to find winning tickets at extreme sparsity levels (> 90%) similar to those in your paper.)\"}",
"{\"comment\": \"Thank you for the interesting work.\\n\\nConcurrently, we proposed a new pruning method, SNIP, ( https://openreview.net/forum?id=B1VZqjAcYX ), that finds extremely sparse networks by single-shot at random initialization, and the pruned sparse networks are then trained in the standard way.\\n\\nWe found one of your hypotheses \\\"When randomly reinitialized, a winning ticket learns more slowly and achieves lower test accuracy\\\" intriguing. Therefore, we tested to see if this behavior holds on subnetworks obtained by SNIP.\\n\\nSpecifically, we tested various models (LeNets, AlexNets, VGGs and WRNs) on MNIST and CIFAR-10 datasets for the same extreme sparsity levels (> 90%) used in our paper. As a result, we found that there are no differences in performance between re-initializing and NOT-initializing the subnetworks (after pruning by SNIP and before the start of training): 1) the final accuracies are almost the same (the difference is less than 0.1%) and 2) the training behavior (the training loss and validation accuracy curves) is very similar.\\n\\nIt seems that our finding, albeit preliminary, is contradictory to the aforementioned hypothesis. This discrepancy may be due to the fact that the conclusions in your paper are based on magnitude based pruning and the method is tested for moderate sparsity levels, etc.\\n\\nAs stated in your latest version (Section 7), \\\"we intend to explore more efficient methods for finding winning tickets that will make it possible to study the lottery ticket hypothesis in more resource-intensive settings\\\" or \\\"... non-magnitude pruning methods (which could produce smaller winning tickets or find them earlier)\\\", we believe that SNIP could be a method of choice for the further exploration of your hypotheses.\\n\\nWe hope to hear your thoughts.\", \"title\": \"Winning tickets obtained by a different pruning method (SNIP) and the effect of re-initialization\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response, for addressing my previous concerns with the paper, and for taking the additional time for revising your original submission. Please see my updated review above.\"}",
"{\"title\": \"Response to Rebuttal\", \"comment\": \"Thanks for the very detailed response, the additional experiments and analysis and the updated manuscript. I am particularly pleased to see the additional experiments (not that the original manuscript was lacking experimental results) and the analysis in Appendix D. I think that the current paper is \\\"filled to the brink\\\" with interesting experiments and results (which are conducted in a very solid fashion) - there are many interesting follow-up questions (quite a few of which have been named by the reviewers) and it is tempting to add even more results, but I agree with the authors that these questions deserve a separate publication.\\n\\nI also appreciate a more formal statement of the lottery-ticket hypothesis.\\n\\nThe questions and issues raised in my review have all been addressed in a satisfactory fashion - the paper got even stronger. Looking forward to reading followup work on how well winning tickets generalize, whether they appear in non-classification tasks and whether other pruning methods identify the same winning tickets or not.\"}",
"{\"title\": \"Author Response (Part 1)\", \"comment\": \"Thank you so much for your thoughtful review. Below, you will find our responses to your questions and comments. We have modified the paper to reflect your feedback, and we are very interested in any further feedback you have about the new version of the paper.\\n\\nWe have summarized the changes in the new version of the paper in a top-level comment called \\\"Summary of Changes in the New Version.\\\"\\n\\nWhere multiple reviewers made similar comments, we have grouped the answers into a \\\"Common Questions\\\" comment; you can find this comment as a response to our top-level comment called \\\"Summary of Changes in the New Version.\\\"\\n\\n---\\n\\n> I acknowledge and support the author\\u2019s decision to have thorough and clean experiments on these small models and tasks, rather than having half-baked results on ImageNet, etc. The downside of this is that the experiments are thus not sufficient to claim (with reasonable certainty) that the lottery ticket hypothesis holds \\u201cin general\\u201d. The paper would be stronger, if the existence of winning tickets on larger-scale experiments or tasks other than classification were shown - even if these experiments did not have a large number of control experiments/ablation studies.\\n\\nPlease see Common Questions.\\n\\n---\\n\\n> 2. While the paper shows the existence of winning tickets robustly and convincingly on the networks/tasks investigated, the next important question would be how to systematically and reliably \\u201cbreak\\u201d the existence of lottery tickets. 
Can they be attributed to a few fundamental factors?\\n\\nPlease see Common Questions.\\n\\n---\\n\\n> Are they a consequence of batch-wise, gradient-based optimization, or an inherent feature of neural networks, or is it the loss functions commonly used, \\u2026?\\n\\nIn Appendices D and E, we show that the existence of winning tickets in lenet and conv2/4/6 is independent of the instantiation of a gradient-based optimization method (at least across Adam, SGD, and Momentum). However, we agree that there are still broader questions about the origin of winning tickets. We hope that the work in this paper makes it possible for us and others to follow with answers to these questions.\\n\\n---\\n\\n> On page 2, second paragraph, the paper states: \\u201dWhen randomly reinitialized, our winning tickets no longer match the performance of the original network, explaining the difficulty of training pruned networks from scratch\\u201d. I don\\u2019t fully agree - the paper certainly sheds some light on the issue, but an actual explanation would result in a testable hypothesis. My comment here is intended to be constructive criticism, I think that the paper has enough \\u201cjuice\\u201d and novelty for being accepted - I am merely pointing out that the overall story is not yet conclusive (and I am aware that it might need several more publications to find these answers).\\n\\nThis is an excellent observation and we have changed our language accordingly.\\n\\n---\\n\\n> 3. Do the winning tickets generalize across hyper-parameters or even tasks. I.e. if a winning ticket is found with one set of hyper-parameters, but then Optimizer/learning-rate/etc. are changed, does the winning-ticket still lead to improved convergence and accuracy? Same question for data-sets: do winning-tickets found on CIFAR-100 also work for CIFAR-10 and vice versa? 
If winning-tickets turn out to generalize well, in the extreme this could allow \\u201cshipping\\u201d each network architecture with a few good winning-tickets, thus making it unnecessary to apply expensive iterative pruning every time. I would not expect generalization across data-sets, but it would be highly interesting to see if winning tickets generalize in any way (after all I am still surprised by how well adversarial examples generalize and transfer).\\n\\nThis is a great question that we are interested in as well. We have conducted some exploratory experiments in each of these directions (changing hyperparameters and changing datasets) in preparation for future research, but the results are too preliminary to merit discussion. We have noted the dataset transfer direction in our list of implications at the end of Section 1, and we think that answering these questions precisely will require a separate publication.\"}",
"{\"title\": \"Author Response (Part 2)\", \"comment\": \"> 4. Some things that would be interesting to try: 4a) Is there anything special about the pruned/non-pruned weights at the time of initialization? Did they start out with very small values already or are they all \\u201cbehind\\u201d some (dead) downstream neuron? Is there anything that might essentially block gradient signal from updating the pruned neurons? This could perhaps be checked by recording weights\\u2019 \\u201ctrajectories\\u201d during training to see if there is a correlation between the \\u201cdistance weights traveled\\u201d and whether or not they end up in the winning ticket.\\n\\nIn the new Appendix D, we study the pruned and non-pruned weights at the time of initialization. We find that winning ticket initializations tend to come from the extremes of the truncated normal distribution from which the unpruned networks are initialized. We are interested in studying the other questions you mention in future work. We also look at the distance weights travel in the unpruned network, finding that weights that are part of the eventual winning tickets tend to move more than weights that are not part of the winning ticket.\\n\\n---\\n\\n> 4b) Do ARD-style/Bayesian approaches or second-order methods to pruning identify (roughly) the same neurons for pruning?\\n\\nThese are great questions that we are interested in understanding as well. In order to keep our experiments as simple and tractable as possible, we opted to focus on a single, simple, widely-accepted pruning method. However, we have updated our limitations section (Section 7) to reflect that we only use a single identification technique and that other techniques may produce winning tickets with different properties (e.g., fewer weights, improved training times, better generalization, or better performance on hardware).\\n\\n---\\n\\n> 5. 
Typo (should be through): \\u201cwe find winning tickets though a principled search process\\u201d\\n\\nNice catch - it should now be corrected!\\n\\n---\\n\\n> For the standard ConvNets I assume you did not use batchnorm. Does batchnorm interfere in any way with the existence of winning tickets? (at least on ResNet they seem to exist with batchnorm as well)\\n\\nThe new networks (resnet18 and vgg16/19) all use batchnorm. You're correct that lenet and conv2/4/6 do not use batchnorm. As you note, since we still find winning tickets on these larger networks, it does not appear that batchnorm interferes with the existence of winning tickets.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you so much for your thoughtful review. Below, you will find our responses to your questions and comments. We have modified the paper to reflect your feedback, and we are very interested in any further feedback you have about the new version of the paper.\\n\\nWe have summarized the changes in the new version of the paper in a top-level comment called \\\"Summary of Changes in the New Version.\\\"\\n\\n---\\n\\n> 1. Though this is an empirical paper about an observed phenomenon, it should contain a bit more background and discussion on the theoretical implications of its subject. For example, see [2] which is also an empirical work about a theoretical hypothesis, but still includes the right theoretical context that helps the reader judge the meaning of their results. The same should be done here. For instance, there is a growing interest in the link between compression and generalization that is relevant to this work [3,4], and the effect of winning ticket leading to better generalization could be explained via other works which link structure to inductive bias [5,6].\\n\\nWe have rewritten our discussion section (Section 6) to connect with contemporary understanding of inductive bias, generalization (and its relation to compressibility), and optimization of overparameterized networks. We hope that this section provides appropriate context for interpreting these results, however we are open to additional suggestions.\\n\\n---\\n\\n> 2. The lottery ticket hypothesis is described in the paper as being both about optimization (faster \\u201cconvergence\\u201d) and about generalization (better \\u201cgeneralization accuracy\\u201d). However, there is a slight issue with how these terms are treated in the paper. 
First, \\u201cconvergence\\u201d is defined as the point at which the test accuracy reaches to a minimum and before it begins to rise again, but it does not mean (and most likely not) that it is the point at which the optimization algorithm converged to its minimum \\u2014 it is better to write that early stopping regularization was used in this case.\\n\\nThank you for this very helpful suggestion. We have updated our language throughout the paper to ensure that we are using this terminology properly.\\n\\n---\\n\\n> Second, the convergence point is chosen according to the test set which is bad methodology, because the test set cannot be used for choosing the final model (only the training and validation sets).\\n\\nWe have updated all of our experiments in the main body of the paper to report the iteration of early-stopping based on validation loss and to report the accuracy at that iteration based on test loss. The conclusions from our results remain the same.\\n\\n---\\n\\n> Third, the training accuracies are not reported in the paper, and without them, it is difficult to judge if a given model fails to generalize is simply fails to converge to 100% accuracy on the training set.\\n\\nWe have updated the paper to include graphs of the training accuracies at early-stopping time for lenet and conv2/4/6. In general, training accuracy at early-stopping time rises with test accuracy. However, at the end of the training process, training accuracy generally reaches 100% for all but the most heavily pruned networks (see the new Appendix B); this is true for both winning tickets and randomly reinitialized networks (although winning tickets generally still reach 100% training accuracy when pruned slightly further (e.g., 3.6% vs. 1.9% for MNIST)). 
Even so, the accuracy patterns witnessed at early-stopping time remain in place at the end of training: winning tickets see test accuracy improvements and reach higher test accuracy than when randomly reinitialized, indicating that winning tickets indeed generalize better.\\n\\n---\\n\\n> As a minor note, \\u201cgeneralization accuracy\\u201d as a term is not that common and might be a bit confusing, so it is better to write \\u201ctest accuracy\\u201d.\\n\\nWe have updated our language to reflect this suggestion.\"}",
"{\"title\": \"Author Response (Part 1)\", \"comment\": \"(Edit: we reworded this comment for clarity, but the content is otherwise the same)\\n\\nThank you so much for your thoughtful review. Below, you will find our responses to your questions and comments. We have modified the paper to reflect your feedback, and we are very interested in any further feedback you have about the new version of the paper. \\n\\nWe have summarized the changes in the new version of the paper in a top-level comment called \\\"Summary of Changes in the New Version.\\\"\\n\\nWhere multiple reviewers made similar comments, we have grouped the answers into a \\\"Common Questions\\\" comment; you can find this comment as a response to our top-level comment called \\\"Summary of Changes in the New Version.\\\"\\n\\n---\\n\\n> Actually another submission (https://openreview.net/forum?id=rJlnB3C5Ym) made the opposite conclusions.\\n\\nUp to a certain level of pruning, a randomly reinitialized network can match the accuracy (and often learning speed) of the original network. We find this to be true throughout our paper, particularly in the conv2/4/6 experiments. However, past this point, winning tickets continue to match the performance of the original network when randomly reinitialized networks cannot. Furthermore, at the levels of pruning for which randomly reinitialized networks do match the performance of the original network, winning tickets reach even higher accuracy and learn faster. As a concrete example, in Section 5 of the updated version of our paper, we include lottery ticket experiments on the same VGG19 network for CIFAR10 as appears in \\\"Rethinking the Value of Network Pruning.\\\" We find that, when randomly reinitialized, subnetworks found via iterative pruning remain within 0.5 percentage points of the accuracy of the original network until pruned by about 70%; after this point, accuracy drops off as in random reinitialization experiments throughout our paper. 
This result supports the findings of \\\"Rethinking the Value of Network Pruning:\\\" up to a certain level of pruning, VGG19 continues to reach accuracy close to that of the original network even when randomly reinitialized. However, past the initial two or three pruning iterations, these randomly reinitialized networks do not qualify as winning tickets by our definition. In contrast, iterative pruning produces winning tickets when the network is pruned by up to 94.5%.\\n\\n---\\n\\n> It would be clearer if the author can use some math notations.\\n\\nWe agree; thank you for the feedback. In the updated version, we have made our definitions precise through mathematical notation.\\n\\n---\\n\\n> As identified by the authors themself, lacking of supporting experiments on large-scale dataset and real-world models. Only MNIST/CIFAR-10 and toy networks like LeNet, Conv2/Conv4/Conv6 are used. The author has done experiments on resnet, I would be better to move it to the main paper.\\n\\nPlease see \\\"Common Questions.\\\"\\n\\n---\\n\\n> There is no explanation about why the \\u201clottery ticket\\u201d can perform well when trained with the \\u201coriginal initialization\\u201d but not with random initialization. Is it because the original initialization is not far from the pruned solution? Then this is a kind of overting to the obtained solution.\\n\\nPlease see \\\"Common Questions.\\\"\\n\\n---\\n\\n> The other problem is that the implications are not clearly useful without showing any applications. The paper could be stronger if the authors can provide more results to support the applications of this conjecture.\\n\\nWe largely consider the value of this paper to be its identification of an avenue to understand properties of neural networks, independent of the current applicability of this understanding to end objectives (e.g., faster training). We intend for this paper to pose an opportunity for future applications. 
However, we agree that we do not evaluate them.\\n\\nIf winning tickets do seem to exist in a wide variety of networks, we believe that the most concrete application is in line with contemporary work on distillation/compression/pruning: if a technique can find winning tickets early on in training, then those winning tickets can be used for the remainder of learning, thereby reducing resource demands and speeding up learning (depending on the profitability of exploiting the sparsity of a winning ticket, as you note next).\"}",
"{\"title\": \"Author Response (Part 2)\", \"comment\": \"> The authors only explore the sparse networks. Model compression by sparsification has good compression rate, especially for networks with large FC layers. However, the acceleration relies on specific hardware/libraries. It would be more complete if the author can provide experiments on structurally pruned networks, especially for CNNs.\\n\\nThis is a great observation. We agree that structured pruning techniques produce pruned networks that are more amenable to existing software/hardware acceleration techniques. In the limitations section of the updated version (Section 7), we have explicitly noted structured pruning as an opportunity to connect our empirical observations of winning tickets to concrete practice.\\n\\n---\\n\\n> The x-axis of pruning ratios in Figure 1/4/5 could be uniformly sampled and make the figure easier to read.\\n\\nDone - thank you for the suggestion!\\n\\n---\\n\\n> Does the winning tickets always exist?\\n\\nOur experiments indicate that winning tickets do seem to exist for the variety of network architectures considered in this paper (and as explicitly scoped by our stated limitations in Section 7 - we acknowledge that we only consider a limited subset of neural network tasks in this paper). However, in the most literal sense, no: winning tickets do not always exist for all datasets and networks. Take, as an example, a minimal dense network for two-way XOR which has two hidden units. If the parameters of the network are initialized to values that give the correct outputs from the very start, then removing any one parameter makes it impossible to reach the same accuracy as the unpruned network.\\n\\n---\\n\\n> What is the size of winning tickets for a very thin network? 
Would it also be less than 10%?\\n\\nIn the updated version of the paper (Section 5), we have studied several networks that are much thinner than those described in the original version of the paper: VGG16, VGG19, and resnet18. For VGG16 and VGG19, we continue to find winning tickets that are at or less than 10% of the original size of the network. For resnet18 (which has 16x fewer parameters than conv2 and 75x fewer than VGG19), we find winning tickets that are about 15% of the size of the original network. Our results suggest that, for several exemplary thin networks, we still find winning tickets near or below 10-20%, depending on the level of overparameterization of the original network.\"}",
"{\"title\": \"Responses to Common Reviewer Questions\", \"comment\": \"There were a couple of questions that were asked by more than one reviewer. We have centralized our responses to those common questions here.\\n\\n---\\n\\n> Reviewer 1: As identified by the authors themself, lacking of supporting experiments on large-scale dataset and real-world models. Only MNIST/CIFAR-10 and toy networks like LeNet, Conv2/Conv4/Conv6 are used. The author has done experiments on resnet, I would be better to move it to the main paper.\\n\\n> Reviewer 3: The paper would be stronger, if the existence of winning tickets on larger-scale experiments or tasks other than classification were shown - even if these experiments did not have a large number of control experiments/ablation studies.\\n\\nIn the new version of the paper, we have added experiments on resnet18 and vgg16/19 with CIFAR10 (Section 5 for VGG19 and resnet-18 and Appendix H for VGG16), where we continue to find winning tickets. Notably, our iterative-pruning method for finding winning tickets becomes sensitive to learning rate, so we have to modify the learning rate schedule from the default values to find winning tickets (e.g., by adding warmup). Unfortunately, running pruning experiments on Imagenet or the like was beyond our means during the rebuttal period. The new experiments, which better evoke real-world architectures, improve our confidence in the generality of the lottery ticket hypothesis. However, we acknowledge this concern.\\n\\n---\\n\\n> Reviewer 1: There is no explanation about why the \\u201clottery ticket\\u201d can perform well when trained with the \\u201coriginal initialization\\u201d but not with random initialization. Is it because the original initialization is not far from the pruned solution? 
Then this is a kind of overting to the obtained solution.\\n\\n> Reviewer 3: While the paper shows the existence of winning tickets robustly and convincingly on the networks/tasks investigated, the next important question would be how to systematically and reliably \\u201cbreak\\u201d the existence of lottery tickets. Can they be attributed to a few fundamental factors?\\n\\nWe have not yet been able to definitively answer why a winning ticket can perform well with the original initialization but not random initialization. However, in the updated version, we have added an appendix that provides more detail about the internals of winning tickets from lenet for MNIST (Appendix D). Specifically, we investigate two questions: 1) (as suggested by Reviewer 1) are the initial values of winning tickets close to their trained values? and 2) what is the distribution of weights in winning tickets at initialization?\\n\\n* Question 1: we actually find the opposite of what Reviewer 1 suggests: in the unpruned network, weights that are part of the eventual winning tickets tend to move more than weights that are not part of the winning ticket.\\n\\n* Question 2: we find that the winning ticket initializations tend to come from a different distribution than the network as a whole: a bimodal distribution with two peaks toward the extremes of the truncated normal distribution from which the network was originally initialized. We try reinitializing winning tickets from this distribution, but doing so performs no better than random reinitialization. We also try performing magnitude pruning before training based on the hypothesis that low-magnitude weights are unlikely to be part of the eventual winning ticket; this approach also performs no better than random reinitialization. 
We conclude that these insights based on magnitude at initialization are not sufficient to identify a lottery ticket.\\n\\nThese results do not definitively answer the questions posed, but they represent the first set of clues on the path to doing so. We intend to continue down this path in our future work.\"}",
"{\"title\": \"Summary of Changes in New Version\", \"comment\": \"(Edit: we reworded this comment for clarity, but the content is otherwise the same)\\n\\nWe would like to thank the reviewers for their thorough feedback. In response to the many valuable suggestions and questions they provided, we have made substantial revisions to the paper. In this comment, we summarize those changes section-by-section.\\n\\n-----\", \"changes_throughout_the_paper\": [\"As suggested by Reviewer 2, we no longer refer to network \\\"convergence.\\\" Instead, we describe the same phenomenon as \\\"the iteration at which early-stopping would occur.\\\" Rather than discussing faster convergence times, we instead refer to faster learning as indicated by an earlier iteration of early-stopping.\", \"As suggested by Reviewer 1, we have added mathematical notation throughout the paper where appropriate. We adopt the syntax P_m = k% to describe a winning ticket for which the pruning mask m contains 1's in k% of its indices.\", \"As suggested by Reviewer 2: for all of our training iterations/test accuracy experiments, we measure early-stopping with the validation set and report accuracy at early-stopping using the test set. Our results throughout the paper are the same as in the original submission.\", \"-----\"], \"section_1\": \"* As suggested by Reviewer 1, we have added a formal characterization of the lottery ticket hypothesis in mathematical notation. The meaning of this statement is the same as the informal statement made in the original submission.\\n\\n-----\", \"section_2\": [\"As suggested by Reviewer 2, we have added graphs that show training accuracy at early-stopping time and test accuracy at the end of training (i.e., when training accuracy reaches 100%). Generating this data required re-running our experiments. Therefore, we have updated all reported numbers in this section to reflect the recollected values. 
Our results remain the same.\", \"We integrated the P_m notation to streamline the prose. Otherwise, the semantics of this text is exactly the same.\", \"-----\"], \"section_3\": \"We applied the same changes as in Section 2 (described above). Our results remain the same.\\n\\n-----\", \"section_4\": \"This section compares results with dropout to results from Section 3. The only change we make is an update to the numbers reported from Section 3 (which are updated as described above). Otherwise, our results are the same.\\n\\n-----\", \"section_5\": \"As suggested by Reviewers 1 and 3, we have moved the content for resnet-18 on CIFAR 10 that was in Appendix D in the original submission to this section. Additionally, we provide new experiments for VGG16/19 on CIFAR10.\\n\\nTo briefly summarize our results, we continue to find winning tickets. However, we show that our results are sensitive to learning rate (as was previously reported for resnet-18 in Appendix D in the original submission). Specifically, at the higher learning rates typically used to train these networks, there is a small accuracy gap between the identified winning ticket and the original network. We show that learning rate warmup eliminates this gap.\\n\\n-----\", \"section_6\": \"As suggested by Reviewer 2, we have expanded this section to integrate theoretical context related to generalization, optimization, and inductive bias. Otherwise, our conclusions remain the same.\\n\\n-----\", \"section_7\": \"We have added content to our Limitations to reflect the additions that we have promised in our responses to individual reviews.\\n\\n-----\", \"section_8\": \"Unchanged.\\n\\n-----\", \"appendices\": \"We have added content to our Appendix to reflect the additions that we have promised in our responses to individual reviews.\"}",
"{\"comment\": \"I share many of this reviewer's concerns and hope they can be addressed by the authors.\\n\\nHowever, I found the point about \\\"original initialization\\\" to be rather pedantic. The majority of the audience will understand \\\"original initialization\\\" to be the values of the weights before any optimization.\\n\\nWhile it is possible that some light verbiage would be helpful to clarify, I do not think that \\\"math notations\\\" will help one bit (and in fact may serve to further confuse).\\n\\nI am not affiliated with the authors in any way.\", \"title\": \"Re. \\\"original initialization\\\"\"}",
"{\"title\": \"interesting conjecture, needs experiments on larger datasets and better presentation and explanation of the results\", \"review\": \"It was believed that sparse architectures generated by pruning are difficult to train from scratch. The authors show that there exist sparse subnetworks that can be trained from scratch with good generalization performance. To explain the difficulty of training pruned networks from scratch, or why training needs the overparameterized networks that make pruning necessary, the authors propose a lottery ticket hypothesis: unpruned, randomly initialized NNs contain subnetworks that can be trained from scratch with similar generalization accuracy. They also present an algorithm to identify the winning tickets.\\n\\nThe conjecture is interesting, and it is still an open question whether a pruned network can reach the same accuracy when trained from scratch. It may help to explain why bigger networks are easier to train due to \\u201chaving more possible subnetworks from which training can recover a winning ticket\\u201d. It also shows the importance of both the pruned architecture and the initialization values. Actually, another submission (https://openreview.net/forum?id=rJlnB3C5Ym) made the opposite conclusions.\", \"the_limitations_of_this_paper_are_several_folds\": [\"The paper seems a bit preliminary and unfinished. A lot of the notation seems confusing, such as \\u201cwhen pruned to 21%\\u201d. The author defines a winning lottery ticket as a sparse subnetwork that can reach the same performance as the original network when trained from scratch with the \\u201coriginal initialization\\u201d. It is quite confusing, as there is no definition anywhere of the \\u201coriginal initialization\\u201d. It would be clearer if the author could use some math notation.\", \"As identified by the authors themselves, the paper lacks supporting experiments on large-scale datasets and real-world models. 
Only MNIST/CIFAR-10 and toy networks like LeNet, Conv2/Conv4/Conv6 are used. The authors have done experiments on resnet; it would be better to move these to the main paper.\", \"There is no explanation of why the \\u201clottery ticket\\u201d can perform well when trained with the \\u201coriginal initialization\\u201d but not with random initialization. Is it because the original initialization is not far from the pruned solution? Then this is a kind of overfitting to the obtained solution.\", \"The other problem is that the implications are not clearly useful without showing any applications. The paper could be stronger if the authors could provide more results to support the applications of this conjecture.\", \"The authors only explore sparse networks. Model compression by sparsification has a good compression rate, especially for networks with large FC layers. However, the acceleration relies on specific hardware/libraries. It would be more complete if the authors could provide experiments on structurally pruned networks, especially for CNNs.\", \"The x-axis of pruning ratios in Figures 1/4/5 could be uniformly sampled to make the figures easier to read.\"], \"questions\": \"- Do winning tickets always exist?\\n- What is the size of winning tickets for a very thin network? Would it also be less than 10%?\\n\\n\\n------update----------\\n\\nI appreciate the authors\\u2019 efforts in providing a detailed response and more experiments. After reading the rebuttal and the revised version, though the paper has been improved, my concerns are not fully addressed, so I cannot safely accept it.\\n\\nIt can be summarized that there exists a sparse network that can be trained well only when provided with a certain weight initialization. The winning tickets can only be found via iterative pruning of the trained network. This is a chicken-and-egg problem, and I fail to see how it can improve network design. 
It still feels incomplete to me to just provide a hypothesis with limited sets of experiments. The implications are actually the most valuable/attractive part, such as \\u201cImprove our theoretical understanding of neural networks\\u201d; however, they are very vague with no clear instructions even after accepting this hypothesis. I would expect analysis of the reasons behind failure and success. I understand that this could be left for another paper, but the observations/experiments alone are not strong enough to confirm the hypothesis.\\n\\nSpecifically, the experiments are conducted on relatively wide and shallow CNNs. Note that VGG-16/19 and ResNet-18 are designed for ImageNet rather than CIFAR-10 and are much wider than normal CIFAR-10 networks, such as ResNet-56. Even though \\u201cresnet18 has 16x fewer parameters than conv2 and 75x fewer than VGG19\\u201d, this is mainly due to the replacement of FC layers with average pooling, so these cannot be claimed to be \\u201cmuch thinner\\u201d networks. Since increasing the width usually eases optimization, the pruned sparse network still enjoys this property unless significantly pruned. Thus, I still doubt whether the conclusion can hold for much thinner networks, i.e., \\u201cwinning tickets near or below 10-20%, depending on the level of overparameterization of the original network.\\u201d\\n\\nThe observation that \\u201cwinning ticket weights tend to change by a larger amount than weights in the rest of the network\\u201d in Figure 19 seems natural, and the conjectured reason, \\u201cmagnitude-pruning biases the winning tickets we find toward those containing weights that change in the direction of higher magnitude\\u201d, sounds reasonable. 
It would be great if the authors could dig into this and make more comparisons with the distribution of random weight initializations.\\n\\nThe figures could also be improved and simplified, as the lines are hard to read and compare.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Highly thought provoking!\", \"review\": \"==== Summary ====\\n\\nIt is widely known that large neural networks can typically be compressed into smaller networks that perform as well as the original network, while directly training small networks can be complicated. This paper proposes a conjecture to explain this phenomenon that the authors call \\u201cThe Lottery Ticket Hypothesis\\u201d: large networks that can be trained successfully contain at initialization time small sub-networks \\u2014 which are defined by both connectivity and the initial weights and which the authors call \\u201cwinning tickets\\u201d \\u2014 that if trained separately for a similar number of iterations could reach the same performance as the large network. The paper follows by proposing a method to find these winning tickets by pruning methods, which are typically used for compressing networks, and then proceeds to test this hypothesis on several architectures and tasks. The paper also conjectures that the reason large networks are more straightforward to train is that, when randomly initialized, large networks have more combinations of subnetworks, which makes having a winning ticket more likely.\\n\\n==== Detailed Review ====\\n\\nI have found the hypothesis that the paper puts forth to be very appealing, as it articulates the essence of many ideas that have been floating around for quite a while. For example, the notion that having a large network makes it more probable for some of the initialized weights to be in the \\u201cright\\u201d direction for the beginning of the training, as mentioned in [1], which was cited in this submission. Given our lack of understanding of the optimization and generalization properties of neural networks, as well as how these two interact, any insight into this process, like this paper suggests, could have a significant impact on both theory and practice. 
To that effect, I generally found the experiments in support of the hypothesis to be pretty convincing, or at the very least that there is some truth to it. Most importantly, the hypothesis and experiments presented in this paper gave me a new perspective on both the generalization and optimization problems, which as a theoretician gave me new ideas on how to approach analyzing them rigorously \\u2014 and that is why I strongly vote for the acceptance of this paper.\\n\\nThough I have very much enjoyed reading this submission, which for the most part is very well written, it does have some issues:\\n\\n1. Though this is an empirical paper about an observed phenomenon, it should contain a bit more background and discussion on the theoretical implications of its subject. For example, see [2], which is also an empirical work about a theoretical hypothesis, but still includes the right theoretical context that helps the reader judge the meaning of their results. The same should be done here. For instance, there is a growing interest in the link between compression and generalization that is relevant to this work [3,4], and the effect of winning tickets leading to better generalization could be explained via other works which link structure to inductive bias [5,6].\\n2. The lottery ticket hypothesis is described in the paper as being both about optimization (faster \\u201cconvergence\\u201d) and about generalization (better \\u201cgeneralization accuracy\\u201d). However, there is a slight issue with how these terms are treated in the paper. First, \\u201cconvergence\\u201d is defined as the point at which the test accuracy reaches a minimum and before it begins to rise again, but it does not mean (and most likely is not) that it is the point at which the optimization algorithm converged to its minimum \\u2014 it is better to write that early stopping regularization was used in this case. 
Second, the convergence point is chosen according to the test set which is bad methodology, because the test set cannot be used for choosing the final model (only the training and validation sets). Third, the training accuracies are not reported in the paper, and without them, it is difficult to judge if a given model fails to generalize is simply fails to converge to 100% accuracy on the training set. As a minor note, \\u201cgeneralization accuracy\\u201d as a term is not that common and might be a bit confusing, so it is better to write \\u201ctest accuracy\\u201d.\\n\\nTo conclude, even though I urge the authors to address the above issues, which could significantly improve its quality and clarity, I think that this article thought-provoking and highly deserving of being accepted to ICLR.\\n\\n[1] Bengio et al. Convex neural networks. NIPS 2006.\\n[2] Zhang et al. Understanding deep learning requires rethinking generalization. ICLR 2017.\\n[3] Arora et al. Stronger generalization bounds for deep nets via a compression approach. ICML 2018.\\n[4] Zhou et al. Compressibility and Generalization in Large-Scale Deep Learning. Arxiv preprint 2018.\\n[5] Cohen et al. Inductive Bias of Deep Convolutional Networks through Pooling Geometry. ICLR 2017.\\n[6] Levine et al. Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design. ICLR 2018. \\n\\n==== Updated Review Following Rebuttal ====\\n\\nThe authors have addressed all of the concerns that I have mentioned above, and so I have updated my score accordingly. The additional background on related works, as well as the additional experiments in response to the other reviews will help readers appreciate the observations that are raised by the authors. 
The new revision is a very strong submission, and I highly recommend accepting it to ICLR.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Intriguing results that challenge the common understanding of how neural network training works\", \"review\": \"(Score raised from 8 to 9 after rebuttal)\\nThe paper examines the hypothesis that randomly initialized (feed-forward) neural networks contain sub-networks that train well in the sense that they converge equally fast or faster and reach the same or better classification accuracy. Interestingly, such sub-networks can be identified by simple, magnitude-based pruning. It is crucial that these sub-networks are initialized with their original initialization values, otherwise they typically fail to be trained, implying that it is not purely the structure of the sub-networks that matters. The paper thoroughly investigates the existence of such \\u201cwinning-tickets\\u201d on MNIST and CIFAR-10 on both, fully connected but also convolutional neural networks. Winning-tickets are found across networks, various optimizers, at different pruning-levels and across various other hyper-parameters. The experiments also show that iterative pruning (with re-starts) is more effective at finding winning-tickets.\\n\\nThe paper adds a novel and interesting angle to the question of why neural networks apparently need to be heavily over-parameterized for training. This question is intriguing and of high importance to further the understanding of how neural networks train. Additionally, the findings might have practical relevance as they might help avoid unnecessary over-parameterization which, in turn, might save use of computational resources and energy. The main idea is simple (which is good) and can be tested with relatively simple experiments (also good). The experiments conducted in the paper are clean (averaging over multiple runs, controlling for a lot of factors) and should allow for easy reproduction but also for clean comparison against future experiments. 
The experimental section is well executed, the writing is clear and good and related work is taken into account to a sufficient degree. The paper touches upon a very intriguing \\u201cfeature\\u201d of neural networks and, in my opinion, should be relevant to theorists and practitioners across many sub-fields of deep learning research. I therefore vote and argue for accepting the paper for presentation at the conference. The following comments are suggestions to the authors on how to further improve the paper. I do not expect all issues to be addressed in the camera-ready version.\\n\\n1) The main \\u201cweakness\\u201d of the paper might be that, while the amount of experiments and controls is impressive, the generality of the lottery ticket hypothesis remains somewhat open. Even when restricting the statement to feed-forward networks only, the networks investigated in the paper are relatively \\u201csmall\\u201d and MNIST and CIFAR-10 bear the risk of finding patterns that do not hold when scaling to larger-scale networks and tasks. I acknowledge and support the author\\u2019s decision to have thorough and clean experiments on these small models and tasks, rather than having half-baked results on ImageNet, etc. The downside of this is that the experiments are thus not sufficient to claim (with reasonable certainty) that the lottery ticket hypothesis holds \\u201cin general\\u201d. The paper would be stronger, if the existence of winning tickets on larger-scale experiments or tasks other than classification were shown - even if these experiments did not have a large number of control experiments/ablation studies.\\n\\n2) While the paper shows the existence of winning tickets robustly and convincingly on the networks/tasks investigated, the next important question would be how to systematically and reliably \\u201cbreak\\u201d the existence of lottery tickets. Can they be attributed to a few fundamental factors? 
Are they a consequence of batch-wise, gradient-based optimization, or an inherent feature of neural networks, or is it the loss functions commonly used, \\u2026? On page 2, second paragraph, the paper states: \\u201dWhen randomly reinitialized, our winning tickets no longer match the performance of the original network, explaining the difficulty of training pruned networks from scratch\\u201d. I don\\u2019t fully agree - the paper certainly sheds some light on the issue, but an actual explanation would result in a testable hypothesis. My comment here is intended to be constructive criticism, I think that the paper has enough \\u201cjuice\\u201d and novelty for being accepted - I am merely pointing out that the overall story is not yet conclusive (and I am aware that it might need several more publications to find these answers).\\n\\n3) Do the winning tickets generalize across hyper-parameters or even tasks. I.e. if a winning ticket is found with one set of hyper-parameters, but then Optimizer/learning-rate/etc. are changed, does the winning-ticket still lead to improved convergence and accuracy? Same question for data-sets: do winning-tickets found on CIFAR-100 also work for CIFAR-10 and vice versa? If winning-tickets turn out to generalize well, in the extreme this could allow \\u201cshipping\\u201d each network architecture with a few good winning-tickets, thus making it unnecessary to apply expensive iterative pruning every time. I would not expect generalization across data-sets, but it would be highly interesting to see if winning tickets generalize in any way (after all I am still surprised by how well adversarial examples generalize and transfer).\\n\\n4) Some things that would be interesting to try:\\n4a) Is there anything special about the pruned/non-pruned weights at the time of initialization? Did they start out with very small values already or are they all \\u201cbehind\\u201d some (dead) downstream neuron? 
Is there anything that might essentially block gradient signal from updating the pruned neurons? This could perhaps be checked by recording weights\\u2019 \\u201ctrajectories\\u201d during training to see if there is a correlation between the \\u201cdistance weights traveled\\u201d and whether or not they end up in the winning ticket.\\n4b) Do ARD-style/Bayesian approaches or second-order methods to pruning identify (roughly) the same neurons for pruning?\\n\\n5) Typo (should be through): \\u201cwe find winning tickets though a principled search process\\u201d\\n\\n6) For the standard ConvNets I assume you did not use batchnorm. Does batchnorm interfere in any way with the existence of winning tickets? (at least on ResNet they seem to exist with batchnorm as well)\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rylbWhC5Ym | HR-TD: A Regularized TD Method to Avoid Over-Generalization | [
"Ishan Durugkar",
"Bo Liu",
"Peter Stone"
] | Temporal Difference learning with function approximation has been widely used recently and has led to several successful results. However, compared with the original tabular-based methods, one major drawback of temporal difference learning with neural networks and other function approximators is that they tend to over-generalize across temporally successive states, resulting in slow convergence and even instability. In this work, we propose a novel TD learning method, Hadamard product Regularized TD (HR-TD), that reduces over-generalization and thus leads to faster convergence. This approach can be easily applied to both linear and nonlinear function approximators.
HR-TD is evaluated on several linear and nonlinear benchmark domains, where we show improvement in learning behavior and performance. | [
"Reinforcement Learning",
"TD Learning",
"Deep Learning"
] | https://openreview.net/pdf?id=rylbWhC5Ym | https://openreview.net/forum?id=rylbWhC5Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eJeuU2k4",
"rygarC_anX",
"rygCKFqc2X",
"rklhGtXS3m"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544476647097,
1541406277329,
1541216646424,
1540860179626
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1143/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1143/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1143/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1143/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All three reviewers raised the issues that (a) the problem tackled in the paper was insufficiently motivated, (b) the solution strategy was also not sufficiently motivated and (c) the experiments had serious methodological issues.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Issues with motivation and experiments\"}",
"{\"title\": \"Good to formulate the problem but issues in exposition and validation\", \"review\": [\"The paper considers the problem of overgeneralization between adjacent states of the one-step temporal difference error, when using function approximation. The authors suggest an explicit regularization scheme based on the correlation between the respective features, which reduces to penalizing the Hadamard product.\", \"The paper has some interesting ideas, and the problem is very relevant to deep RL. Having a more principled approach to target networks would be nice. I have some concerns though:\", \"The key motivation is not convincing. Our goal with representation learning for deep RL is to have meaningful generalization between similar states. The current work essentially tries to reduce this correlation for the sake of interim optimization benefits of the one-step update.\", \"The back and forth between fixed linear features and non-linear learned features needs to be polished. The analysis is usually given for the linear case, but in the deep setting the features are replaced with gradients. Also, the relationship with target networks, as well as multi-step updates (e.g. 
A3C) needs to be mentioned early, as these are the main ways of dealing with or bypassing the issue the authors are describing.\", \"The empirical validation is very weak -- two toy domains, and Pong, the easiest Atari game, so unfortunately there isn\\u2019t enough evidence to suggest that the approach would be impactful in practice.\"], \"minor_comments\": [\"there must be a max in the definition of v* somewhere\", \"V_pi is usually used for the true value function, rather than the estimate\", \"Sections 2.2 and 4.2 should be better bridged\", \"The relationship with the discount factor just before Section 5 is interesting, but quite hand-wavy -- the second term only concerns the diagonal elements, and the schedule on gamma would be replaced by a schedule on eta.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A new td method\", \"review\": \"This paper introduces a variation on temporal difference learning for the function approximation case that attempts to resolve the issue of over-generalization across temporally-successive states. The new approach is applied to both linear and non-linear function approximation, and for prediction and control problems. The algorithmic contribution is demonstrated with a suite of experiments in classic benchmark control domains (Mountain Car and Acrobot), and in Pong.\\n\\nThis paper should be rejected because (1) the algorithm is not well justified either by theory or practice, (2) the paper never clearly demonstrates the existence of problem they are trying to solve (nor differentiates it from the usual problem of generalizing well), (3) the experiments are difficult to understand, missing many details, and generally do not support a significant contribution, and (4) the paper is imprecise and unpolished.\\n\\nMain argument\\n\\nThe paper does not do a great job of demonstrating that the problem it is trying to solve is a real thing. There is no experiment in this paper that clearly shows how this temporal generalization problem is different from the need to generalize well with function approximation. The paper points to references to establish the existence of the problem, but for example the Durugkar and Stone paper is a workshop paper and the conference version of that paper was rejected from ICLR 2018 and the reviewers highlighted serious issues with the paper\\u2014that is not work to build upon. Further the paper under review here claims this problem is most pressing in the non-linear case, but the analysis in section 4.1 is for the linear case. \\n\\nThe resultant algorithm does not seem well justified, and has a different fixed point than TD, but there is no discussion of this other than section 4.4, which does not make clear statements about the correctness of the algorithm or what it converges to. 
Can you provide a proof or any kind of evidence that the proposed approach is sound, or how it\\u2019s fixed point relates to TD?\\n\\nThe experiments do not provide convincing evidence of the correctness of the proposed approach or its utility compared to existing approaches. There are so many missing details it is difficult to draw many conclusions:\\n1) What was the policy used in exp1 for policy evaluation in MC?\\n2) Why Fourier basis features?\\n3) In MC with DQN how did you adjust the parameters and architecture for the MC task?\\n4) Was the reward in MC and Acrobot -1 per step or something else\\n5) How did you tune the parameters in the MC and Acrobot experiments?\\n6) Why so few runs in MC, none of the results presented are significant?\\n7) Why is the performance so bad in MC?\\n8) Did you evaluate online learning or do tests with the greedy policy?\\n9) How did you initialize the value functions and weights?\\n10) Why did you use experience replay for the linear experiments?\\n11) IN MC and Acrobot why only a one layer MLP?\\n\\n\\nIgnoring all that, the results are not convincing. Most of the results in the paper are not statistically significant. The policy evaluation results in MC show little difference to regular TD. The Pong results show DQN is actually better. This makes the reader wonder if the result with DQN on MC and Acrobot are only worse because you did not properly tune DQN for those domains, whereas the default DQN architecture is well tuned for Atari and that is why you method is competitive in the smaller domains. \\n\\nThe differences in the \\u201caverage change in value plots\\u201d are very small if the rewards are -1 per step. Can you provide some context to understand the significance of this difference? In the last experiment linear FA and MC, the step-size is set equal for all methods\\u2014this is not a valid comparison. Your method may just work better with alpha = 0.1. 
\\n\\n\\nThe paper has many imprecise parts, here are a few:\\n1) The definition of the value function would be approximate not equals unless you specify some properties of the function approximation architecture. Same for the Bellman equation\\n2) equation 1 of section 2.1 is neither an algorithm or a loss function\\n3) TD does not minimize the squared TD. Saying that is the objective function of TD learning in not true\\n4) end of section 2.1 says \\u201cIt is computed as\\u201d but the following equation just gives a form for the partial derivative\\n5) equation 2, x is not bounded \\n6) You state TC-loss has an unclear solution property, I don\\u2019t know what that means and I don\\u2019t think your approach is well justified either\\n7) Section 4.1 assumes linear FA, but its implied up until paragraph 2 that it has not assumed linear\\n8) treatment of n_t in alg differs from appendix (t is no time episode number)\\n9) Your method has a n_t parameter that is adapted according to a schedule seemingly giving it an unfair advantage over DQN.\\n10) Over-claim not supported by the results: \\u201cwe see that HR-TD is able to find a representation that is better at keeping the target value separate than TC is \\u201c. 
The results do not show this.\\n11) Section 4.4 does not seem to go anywhere or produce and tangible conclusions\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"0) It\\u2019s hard to follow how the prox operator is used in the development of the alg, this could use some higher level explaination\\n1) Intro p2 is about bootstrapping, use that term and remove the equations\\n2) Its not clear why you are talking about stochastic vs deterministic in P3\\n3) Perhaps you should compare against a MC method in the experiments to demonstrate the problem with TD methods and generalization\\n4) Section 2: \\u201ccan often be a regularization term\\u201d >> can or must be?\\n5) update law is a odd term\\n6)\\u201d tends to alleviate\\u201d >> odd phrase\\n7) section 4 should come before section 3\\n8) Alg 1 in not helpful because it just references an equation\\n9) section 4.4 is very confusing, I cannot follow the logic of the statements \\n10) Q learning >> Q-learning\\n11) Not sure what you mean with the last sentence of p2 section 5\\n12) where are the results for Acrobot linear function approximation\\n13) appendix Q-learning with linear FA is not DQN (table 2)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Introduces a new variation on TD. Empirical results are not done well enough to support the claims of an improvement.\", \"review\": \"The paper introduces HR-TD, a variation of the TD(0) algorithm. The variant is meant to ameliorate a problem of \\u2018over-generalization\\u2019 with conventional TD. This problem is briefly characterized, but primarily it is presumed to be established by prior work. The algorithm is simple and a series of experiments are presented with it applied to Mountain Car, Acrobot, and Atari Pong, with both linear function approximation and neural networks (DDQN). It is claimed that the results establish HR-TD as an improvement over TD. However, I found the results unconvincing because they were statistically insufficient, methodologically flawed, and too poorly presented for me to be confident of the meaning of numbers reported. In addition, it is not hard to imagine very simple problems where the HR-TD technique would be counterproductive, and these cases were not included in the experimental testbeds.\", \"the_first_weakness_of_the_paper_is_with_its_characterization_of_the_problem_that_it_seeks_to_solve\": \"over-generalization. This problem is never really characterized in this paper. It instead refers instead to two other papers, one published only in a symposium and the other with no publication venue identified.\\n\\nThe second weakness of the paper is the claim that it has done a theoretical analysis in Section 4.4. I don\\u2019t see how this section establishes anything of importance about the new method.\\n\\nThe problem with the main results, the empirical results, is that they do not come close to being persuasive. There are many problems, beginning with there simply not being clear. I read and reread the paragraphs in Section 5.1, but I cannot see a clear statement of what these numbers are. Whatever they are, to assess differences between them would require a statistical statement, and there is none given. 
Moreover to give such a statistical statement would require saying something about the spread of the results, such as the empirical variance, but none is given. And to say something about the variance one would need substantially more than 10 runs per algorithm. Finally, there is the essential issue of parameter settings. With just one number given for each algorithm, there are no results or no statement about what happens as the parameters are varied. Any one of these problems could render the results meaningless; together they surely are.\\n\\nThese problems become even greater in the larger problems.\\n\\nA nice property of HR-TD is that it is simple. Based on that simplicity we can understand it as being similar to a bias toward small weights. Such a bias could be helpful on some problems, possibly on all of those considered here. In general it is not clear that such a bias is a good idea, and regular TD does not have it. Further, HR-TD does not do exactly a bias to small weights, but something more complicated. All of these things need to be teased apart in careful experiments. I recommend small simple ones. \\n\\nHow about a simple chain of states that are passed through reliably in sequence leading to a terminal state with a reward of 1000 (and all the other rewards 0). Suppose all the states have the same feature representation. If gamma=1, then all states have value 1000, and TD will easily learn and stick at this value even for large alpha, but HR-TD will have a large bias toward 0, and the values will converge to something significantly less than the true value of 1000. \\n\\nThat would be an interesting experiment to do. Also good would be to compare HR-TD to a standard bias toward small weights to see if that is sufficient to explain the performance differences.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SJggZnRcFQ | Learning Programmatically Structured Representations with Perceptor Gradients | [
"Svetlin Penkov",
"Subramanian Ramamoorthy"
] | We present the perceptor gradients algorithm -- a novel approach to learning symbolic representations based on the idea of decomposing an agent's policy into i) a perceptor network extracting symbols from raw observation data and ii) a task encoding program which maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be directly fed into a Linear-Quadratic Regulator (LQR) or a general purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm is able to efficiently learn transferable symbolic representations as well as generate new observations according to a semantically meaningful specification.
| [
"representation learning",
"structured representations",
"symbols",
"programs"
] | https://openreview.net/pdf?id=SJggZnRcFQ | https://openreview.net/forum?id=SJggZnRcFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgKUwKWlE",
"rye-jhH507",
"SJxnyJB_0m",
"Bkxrj2N_CX",
"Hyxed34OCQ",
"HJxKv6VUaQ",
"Bkl4fe-r6X",
"BJgi_JbSaQ",
"r1xETQsMTX",
"SJl6pksG6X",
"r1xbCrZgpQ",
"Hyg6yj_qnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544816465064,
1543294105167,
1543159524417,
1543158941091,
1543158888265,
1541979489387,
1541898251718,
1541898099126,
1541743547922,
1541742532733,
1541572041100,
1541208804764
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1141/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1141/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1141/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1141/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1141/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1141/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1141/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1141/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1141/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1141/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1141/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1141/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper considers the problem of learning symbolic representations from raw data. The reviewers are split on the importance of the paper. The main argument in favor of acceptance is that bridges neural and symbolic approaches in the reinforcement learning problem domain, whereas most previous work that have attempted to bridge this gap have been in inverse graphics or physical dynamics settings. Hence, it makes for a contribution that is relevant to the ICLR community. The main downside is that the paper does not provide particularly surprising insights, and could become much stronger with more complex experimental domains.\\nIt seems like the benefits slightly outweigh the weaknesses. Hence, I recommend accept.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}",
"{\"title\": \"Revision address some of my concerns\", \"comment\": \"1) With regards to transfer, it would be useful to see a comparison with a purely neural network baseline: one can imagine pre-training the neural network baseline on the \\u201cgo-to-pose\\u201d task, and then fine-tune the network in the \\u201ccollect-wood\\u201d task. This may be a fairer comparison.\\n2) The authors\\u2019 reasoning makes sense, and I believe that this is a strength of the preceptor gradients framework.\\n3) I see. Thank you for clarifying.\\n7) Agreed, expressing the program at the right abstraction level would be crucial. Although writing a controller program whose inputs live in a relatively simpler abstraction level is easier, a major challenge I potentially see with how the perceptor gradients would scale. For example, in the Minecraft tasks, the outputs of the preceptor network are categorical variables over the agent\\u2019s x and y position. However, presumably for a task with this simplicity it may be possible to use conventional non-learning computer vision methods to obtain the x and y position. Beyond merely increasing the complexity of the preceptor network, there seems to be a conceptual issue: how would the perceptor gradients approach scale to scenarios where the state is not that easy to symbolically specify? This is presumably the motivation for learning from pixels in the first place. For example, one can imagine manipulating a non-convex object like a cup, or a soft object like a stuffed animal. Would the output of the perceptor network be in these cases, and would the output space of the perceptor network need to be custom-designed for each differing object geometry? 
I think the perceptor gradients approach is a good step towards learning systems that generalize better, but it seems that future work would need to address the challenge of scaling the approach to domains where the output space of the perceptor network is more complex than categorical positions.\\n\\nOverall, the revised version of the paper has addressed some of my concerns, although it still seems to me that more future work would need to be done with respect to point (7) for the perceptor gradients approach to have more impact. Despite these concerns, I believe the paper is a good first step towards tackling an ambitious goal, so I would recommend acceptance.\"}",
"{\"title\": \"Revised Paper\", \"comment\": [\"We would like to thank the reviewers for all the feedback you have provided us as we feel that it has significantly improved our paper. We have uploaded a revised version of the paper taking into account all comments and suggestions. Key changes which we have made include:\", \"Expanded background section\", \"Improved the mathematical description to avoid confusion\", \"Included results from ablation experiments on the Minecraft tasks\"]}",
"{\"title\": \"Response 2 / 2\", \"comment\": \">> 5. Related work: \\u2026 \\n\\nThese are both fair points and we have taken them into account in the revised version of the paper.\\n\\n >> 6. Possible limitation: \\u2026 \\n\\nCertainly the perceptor outputs incorrect representations at the early stages of training, but the learning procedure manages to improve the perceptor until it outputs the correct representations. Space permitting, we will include a discussion on this potential limitation.\\n\\n >> 7. How does this scale? \\u2026 \\n\\nIf the variables of interest are entirely unknown then it is not possible to provide a program which can map the output of the perceptor to an action. Writing any such a program specifies (as a working hypothesis, at least) the variables/symbols of interest which then the perceptor learns to infer. Importantly, expressing the program with symbols at the right abstraction level, balancing the capabilities of the perceptor and the semantics of the task, is crucial. For example, writing a controller program working with low-level pixel features is a daunting task, but writing a program using the position and velocity of the pendulum is a straightforward exercise for anyone familiar with control theory. Scaling up the method to a real-world task would mainly require improvements to the grounding capabilities of the perceptor. We do not report any experiments on physical robot setups, but we\\u2019d say that the difference between perceiving the position of a pendulum, for example, from a real physical system and the synthetic videos can be addressed by increasing the complexity of the perceptor network. In fact, our experiments demonstrate that programs provide strong inductive bias and speed up learning in a way which is which is crucial for RL based robot learning. \\n\\n >> 8. 
Clarity\\n\\nWe have expanded the background section to cover the ideas of inverse graphics and inverse physics, which are particular instantiations of the analysis by synthesis idea. In general, we find the idea of generating observations from symbolic representations interesting as it can be useful for performing some sort of enumerative testing of the trained RL agent, but this is work beyond the scope of this paper.\"}",
"{\"title\": \"Response 1 / 2\", \"comment\": \"Thank you very much for the detailed review and the interesting points and suggestions made. We have split our response in two parts due to response length limits.\\n\\n >> 1. To what extent do the experiments...\", \"there_are_two_main_aspects_of_transferability_that_we_consider\": \"1) Transferability to new environments - can a perceptor network trained in one environment be transferred to another? Extrapolating with neural networks to new parts of the data domain that have not been considered during training is a hard challenge. We address this problem with the \\u201cMinecraft: Collect Wood\\u201d experiment where we used a pose perceptor trained on the \\u201cMinecraft: Go to Pose\\u201d. The pose perceptor network had never seen a wood block and so we allow for very slow adaptation of the pose perceptor during learning of the \\u201cGo to Pose\\u201d task. This demonstrates that perceptors can be transferred to new environments. Of course, methods such as data augmentation and domain randomisation can improve the transferability of perceptors, but ultimately they do inherit all the limitations of pure statistical learners.\\n \\n2) Transferability to new tasks - can a perceptor trained on one task be used for another? The output of a perceptor, due to its symbolic nature, can be fed into a variety of programs solving different tasks in the same environment. For example, one can easily modify the LQR controller to stabilise the pendulum at any other linear position x different from 0. One can also modify the A* planner such that the agent avoids the wooden block, rather than collecting it. In contrast to the proposed perceptor-program decomposition, it is far from obvious how to alter an end-to-end policy in order to adhere to the specifications of a new task. The \\u201cMinecraft: Collect Wood\\u201d experiment addresses this idea to an extent by showing that a certain set of symbols, e.g. 
the agent pose, is required by many task encoding programs. \\n\\n >> 2. Experiment request: \\u2026 \\n\\nThe program maps symbols to actions and so if it is removed then it needs to be replaced by a neural network performing the same type of mapping. The main question then is what neural network architecture to use - if the architecture is too simple it will lead to poor performance and if it is too complex then it would result in learning poor symbolic representations. \\nWe have, however, revised the paper to include the results of the Minecraft experiments using a feedforward perceptor only (no decoder). The results demonstrate that the decoder has little effect on the learning performance and so the program introduces the main inductive bias. We would be happy to discuss other alternatives for such ablation experiments.\\n\\n >> 3. Question: \\u2026\\n\\nTraining of the beta-VAE is independent of the linear regression and these are performed sequentially. We use a vanilla beta-VAE as described in [1]. The resulting latent space is non-identifiable, meaning that a certain factor of variation can be represented by any of the latent variables. In order to overcome this issue, Higgins et al. [1] train a linear classifier on top of the learnt latent space in order to derive a disentanglement metric. We take a similar approach. In order to inspect the latent space learnt by the beta-VAE, we train a single-layer linear regressor to predict the ground truth values from the latent code of the already trained beta-VAE. \\n\\nThe key idea behind beta-VAE is that enforcing independence between the latent variables results in disentangled factors. This, however, is only the case when the ground truth factors of variation are indeed independent of each other. That assumption is obviously violated since the factors of variation are entangled through the physics model (in equations of motion written as an ODE) of the cart-pole system. 
Therefore, the beta-VAE does not manage to reconstruct the factors of variation as accurately as the perceptor gradients setup. Programmatic regularisation is a powerful technique precisely because it can be used to express and enforce arbitrary relationships between the latent factors of interest.\\n\\n[1] Higgins et al., beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, https://openreview.net/forum?id=Sy2fzU9gl\"}",
"{\"title\": \"Now I see...\", \"comment\": \"So, if equation (3) is a factor multiply, then we can write it out as:\\n\\\\[\\\\pi(a \\\\mid s) = \\\\sum_\\\\sigma P(a \\\\mid \\\\sigma) P(\\\\sigma \\\\mid s)\\\\;\\\\;,\\\\]\\nwhich by your assumption of a deterministic program is \\n\\\\begin{align*}\\n\\\\pi(a \\\\mid s) & = \\\\sum_\\\\sigma I(\\\\rho(\\\\sigma) = a) P(\\\\sigma \\\\mid s)\\\\\\\\\\n& = \\\\sum_{\\\\{\\\\sigma \\\\mid \\\\rho(\\\\sigma) = a\\\\}} \\\\psi(\\\\sigma \\\\mid s)\\n\\\\end{align*}\\nSo\\n\\\\begin{align}\\n\\\\nabla_\\\\theta \\\\log \\\\pi(a \\\\mid s) &= \\\\nabla_\\\\theta \\\\sum_{\\\\{\\\\sigma \\\\mid \\\\rho(\\\\sigma) = a\\\\}} \\\\psi(\\\\sigma \\\\mid s)\\\\\\\\\\n& = \\\\sum_{\\\\{\\\\sigma \\\\mid \\\\rho(\\\\sigma) = a\\\\}} \\\\nabla_\\\\theta \\\\psi(\\\\sigma \\\\mid s)\\n\\\\end{align}\\nNote that this quantity still depends on which symbols $\\\\sigma$ will\\ncause your program $\\\\rho$ to generate action $a$.\\n\\nNow, in equation (6), $\\\\sigma_t^{(i)}$ a particular symbol, which is the one\\nthat was {\\\\em actually} generated by the perceptor at time $t$ on sequence\\n$i$, right? Your equation (6) makes sense to me if $\\\\sigma$ is\\nactually part of $\\\\tau$. But it's not.\\n\\nOf course my example of a program that ignores its input and always\\noutputs 0 is not interesting practically. But let's see what happens\", \"here\": \"\\\\begin{itemize}\\n\\\\item The trace will only have $a = 0$.\\n\\\\item The sequence of symbols $\\\\sigma_t^{(i)}$ will be something\\n interesting.\\n\\\\item In your equation (6), the right-hand side would be independent\\n of $\\\\rho$ and generally be non-zero.\\n\\\\item But, in fact, my equation (1) above would be 0 because $\\\\phi$\\n is a probability distribution, and so it should be 0. 
And also\\n because, intuitively, it should be 0.\\n\\\\end{itemize}\", \"in_your_reply_to_me_you_said\": \"``the key intuition behind Theorem 1 is\\nthat the program can be thought of as being absorbed in the\\nenvironment.'' But, in that case, it really {\\\\em is} true that the\\n$\\\\sigma$ need to be observed, because they are the new ``actions.''\\n\\nOkay. I think this is why we end up with different understandings of\\nwhat's going on. \\n\\nLet me proceed with the rest of the paper under that assumption (but\\nif that's in fact what's going on here, then you would need to amend\\nyour description of $\\\\tau$ to include $\\\\sigma$.)\\n\\nOkay. Sorry to have been dim. It does all make sense now.\\n\\nThis seems like a completely reasonable idea, though really not all\\nthat surprising. One might ask whether it would be a\\ngood idea to do value-iteration networks this way, too: that is,\\nthink of the VIN as part of the environment and just train the models\\nusing reinforce. I guess not, quite, because the model for VIN is a\\nstatic object, not something that varies along the trajectory. \\n\\nThen we need to get into the question of whether reinforce is a\\nsensible algorithm or not, and under what circumstances. \\n\\nIn any case, I will change my rating and hope the story of my\\nconfusion above is useful to you.\"}",
"{\"title\": \"Purpose of Theorem 1\", \"comment\": \"This is indeed the purpose of Theorem 1 as we have also mentioned in our response to the reviewer.\"}",
"{\"title\": \"Response to the review\", \"comment\": \"\", \"factorisation_in_equation_3\": \"------------------------------------------\\nThank you for the close look at the mathematical details of the paper. Equation 3 is meant to represent full factor multiplication rather than marginalisation of \\\\sigma_t. We will clarify this in the paper and hopefully avoid confusing other readers.\", \"theorem_1\": \"-----------------\\nThe purpose of Theorem 1 is to show that REINFORCE can be applied to train the perceptor network. The key intuition behind Theorem 1 is that the program can be thought of as being absorbed in the environment and the task of the agent is to feed it the right inputs. Therefore, considering a program that always outputs action 0, as suggested in the review, is essentially equivalent to an environment which does not take into account the actions of the agent at all. In this case, the gradient of the log probability of the trajectory with respect to the parameters of the policy, in a standard policy gradients setup, would also be non-zero. More importantly, while this scenario is an interesting theoretical edge case it has little, if any, practical implications. \\n\\nAdditionally, we would like to note that Theorem 1 handles correctly the case when the perceptor is to always output the same symbols as the gradient of the log prob of the symbol trajectory with respect to theta will be 0 as expected.\", \"related_work\": \"----------------------\\nWorks such as Value Iteration Networks, QMDP Networks and Particle Filter Networks are based on the idea of differentiable programs which can express only subset of the problems that a general program can express.\\nOne of the key contributions of the paper is that perceptor gradients can work with general programs as we have demonstrated by directly plugging in programs from standard Python packages. 
Nevertheless, there is a substantial body of literature that we will update the paper to connect with.\\n\\nSymbols vs. Discrete variables:\\n-----------------------------------------------\\nThe perceptor can output both continuous (LQR experiment) and discrete variables (Minecraft experiments) that characterise the raw input data. We call the output of the perceptor symbolic as each output variable (regardless of its domain) has semantic content imposed by the program. Our experiments demonstrate that the perceptor does learn representations which follow the symbolic structure of the program.\"}",
"{\"title\": \"My impression of the purpose of Theorem 1\", \"comment\": \"With respect to how the program's transformation is included in the gradient computation, my understanding from equation 8 is that the point of Theorem 1 is to show that, because the program is a non-differentiable piece, we can essentially push the agent/environment boundary further into the agent, such that the \\\"actions\\\" are the task related symbols, the \\\"states\\\" are the visual observations, and the \\\"agent\\\" is only the perceptor network. Then, from the perspective of the policy parameters, the program essentially becomes part of the environment. Therefore, we can apply REINFORCE to optimize the perceptor network as we would for any other policy.\"}",
"{\"title\": \"Interesting perspective, but the paper could be stronger with experiments that reflect its original motivations\", \"review\": \"The high-level problem this paper tackles is that of learning symbolic representations from raw noisy data, based on the hypothesis that symbolic representations that are grounded in the semantic content of the environment are less susceptible to overfitting.\\n\\nThe authors propose the perceptor gradients algorithm, which decouples the policy into 1) a perceptor network that maps raw observations to domain-specific representations, which are inputs to 2) a pre-specified domain-specific control or planning program. The authors claim that such a decomposition is general enough to accommodate any task encoding program.\", \"the_proposed_method_is_evaluated_on_three_experiments\": \"a simple control task (cartpole-balancing), a navigation task (minecraft: go to pose), and a stochastic single-object retrieval task (minecraft: collect wood). The authors show that the perceptor gradients algorithm learns much faster than vanilla policy gradient. They also show that the program provides an inductive bias that helps ground the representations to the true state of the agent by manually inspecting the representations and by reconstructing the representation into a semantically coherent scene.\\n\\nThis paper is clear and well-written and I enjoyed reading it. It proposes a nice perspective of leveraging programmatic domain knowledge and integrating such knowledge with a learned policy for planning and control. If the following concerns were addressed I would consider increasing my score.\\n\\n1. To what extent do the experiments support the authors' claims: Although the existing experiments are very illustrative and clear, they did not seem to me to illustrate that the learned representations are transferable as the authors claimed in the introduction. 
This perhaps is due to the ambiguous definition of \\\"transferable;\\\" it would be helpful if the authors clarified what they mean by this. Nevertheless, as the paper suggests in the introduction that symbolic representations are less likely to overfit to the training distribution, I would be interested to see an experiment that illustrates the capability of the program-augmented policy to generalize to new tasks. For example, Ellis et al. [1] suggested that the programs can be leveraged to extrapolate to problems not previously seen in the input (e.g. by running the for loop for more iterations). To show the transferability of such symbolic representations, is it possible for the authors to include an experiment to show to what extent the perceptor gradients algorithm can generalize to new problems? For example, is it possible for the proposed approach to train on \\\"Minecraft: Go to Pose\\\" and generalize to a larger map? Or is it possible for the proposed approach to train on one wood block and generalize to more wood blocks?\\n2. Experiment request: The paper seems to suggest that the \\\"Minecraft: Go to Pose\\\" task and the \\\"Minecraft: Collect Wood\\\" task were trained with an autoencoding perceptor. To more completely assess to what extent the program is responsible for biasing the representations to be semantically grounded in the environment, would the authors please provide ablation experiments (learning curves, and visualization of the representations) for these two tasks where only the encoder was used?\\n3. Question: I am a bit confused by the beta-VAE results in Figure 4. If the beta-VAE is trained not only to reconstruct its input but also to perform a \\\"linear regression between the learnt latent space and the ground truth values\\\" (page 6), then I would have expected the latent space representations to match the ground truth values much more closely. 
Would the authors be able to elaborate more on the training details and objective function of the beta-VAE and provide an explanation for why the learned latent space deviates so far from the ground truth?\\n5. Related work: \\n a) The paper briefly discusses representation learning in computer vision and physical dynamics modeling. However, in these same domains it lacks a discussion of approaches that do use programs to constrain learned representations, as in [1-3]. Without this discussion, my view is that the related work would be very incomplete because program-induced constraints are core to this paper. Can the authors please provide a more thorough and complete treatment of this area?\\n b) The approaches that this paper discusses for representation learning have been around for quite a long time, but it seems rather a misrepresentation of the related work to have all but two citations in the Related Work section dated 2017 and after. For example, statistical constraints on the latent space have been explored in [4-5]. Can the authors please provide a more thorough and complete treatment of the related work?\\n6. Possible limitation: A potential limitation for decoupling the policy in this particular way is that if the perceptor network produces incorrect representations that are fed into the program, the program cannot compensate for these errors. It would be helpful for the authors to include a discussion about this in the paper.\\n7. How does this scale? As stated in the intro, the motivation for this work is for enabling autonomous agents to learn from raw visual data. Though the experiments in this paper were illustrative of the approach, these experiments assumed that the agent had access to the true state variables of its environment (like position and velocity), and the perceptor network is just inferring the particular values of these variables for a particular problem instance. 
However, presumably the motivation for learning from raw visual data is that the agent does not have access to the simulator of the environment. How do the authors envision their proposed approach scaling to real world settings where the true state variables are unknown? There is currently not an experiment that shows a need for learning from raw visual data. This is a major concern, because if the only domains that the perceptor gradients algorithm can be applied are those where the agent already has access to the true state variables, then there may be no need to learn from pixels in the first place. This paper would be made significantly stronger with an experiment where 1) learning from raw visual data is necessary (for example, if it is the real world, or if the true state variables were unknown) and 2) where the inductive bias provided by the program helps significantly on that task in terms of learning and transfer. Such an experiment would decisively reflect the paper's claims.\\n8. Clarity: The paper mentions that it is possible to generate new observations from the latent space. The paper can be made stronger by a more motivated discussion of why generating new observations is desirable, beyond just as a visualization tool. For example, the authors may consider making a connection with the analysis-by-synthesis paradigm that characterizes the Helmholtz machine.\\n\\n[1] Ellis et al. (https://arxiv.org/pdf/1707.09627.pdf)\\n[2] Wu et al. (http://papers.nips.cc/paper/6620-learning-to-see-physics-via-visual-de-animation.pdf)\\n[3] Kulkarni et al. (https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Kulkarni_Picture_A_Probabilistic_2015_CVPR_paper.html)\\n[4] Schmidhuber (ftp://ftp.idsia.ch/pub/juergen/factorial.pdf)\\n[5] Bengio et al. (https://arxiv.org/abs/1206.5538)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Learning Programmatically Structured Representations with Perceptor Gradients\", \"review\": \"This paper proposes the perceptor gradients algorithm to learn symbolic representations for devising autonomous agent policies to act.\\nThe perceptor gradients algorithm decomposes a typical policy into a perceptor network that maps observations to symbolic representations and a user-provided task encoding program which is executed on the perceived symbols in order to generate an action. Experiments show the proposed approach achieves faster learning rates compared to methods based solely on neural networks and yields transferable task related symbolic representations. The results prove the programmatic regularisation is a general technique for structured representation learning. Although the reviewer is out of the area in this paper, this paper seems to propose a novel algorithm to learn the symbolic representations.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"I tried very hard but I think ultimately failed to understand this paper.\", \"review\": \"The fundamental idea proposed in this paper is a sensible one: design the functional form of a policy so that there is an initial parameterized stage that operates on perceptual input and outputs some \\\"symbolic\\\" (I'd be happier if we could just call them \\\"discrete\\\") characterization of the input, and then an arbitrary program that operates on the symbolic output of the first stage.\\n\\nMy fundamental problem is with equation 3. If you want to talk about the factoring of the probability distribution p(a | s) that's fine, but, to do it in fine detail, it should be:\\nP(a | s) = \\\\sum_sigma P(a, sigma | s) = \\\\sum_sigma P(a | sigma, s) * P(sigma | s)\\nAnd then by conditional independence of a from s given sigma\\n = \\\\sum_sigma P(a | sigma) * P(sigma | s)\\nBut, critically, there needs to be a sum over sigma! Now, it could be that I am misunderstanding your notation and you mean for p(a | sigma) to stand for a whole factor and for the operation in (3) to be factor multiplication, but I don't think that's what is going on.\\n\\nThen, I think, you go on to assume, that p(a | sigma) is a delta distribution. That's fine.\\n\\nBut then equation 5 in Theorem 1 again seems to mention delta without summing over it, which still seems incorrect to me.\\n\\nAnd, ultimately, I think the theorem doesn't make sense because the transformation that the program performs on its input is not included in the gradient computation. Consider the case where the program always outputs action 0 no matter what its symbolic input is. Then the gradient of the log prob of a trajectory with respect to theta should be 0, but instead you end up with the gradient of the log prob of the symbol trajectory with respect to theta.\\n\\nI got so hung up here that I didn't feel I could evaluate the rest of the paper. 
\\n\\nOne other point is that there is a lot of work that is closely related to this at the high level, including papers about Value Iteration Networks, QMDP Networks, Particle Filter Networks, etc. They all combine a fixed program with a parametric part and differentiate the whole transformation to do gradient updates. It would be important in any revision of this paper to connect with that literature.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HJfxbhR9KQ | Mimicking actions is a good strategy for beginners: Fast Reinforcement Learning with Expert Action Sequences | [
"Tharun Medini",
"Anshumali Shrivastava"
] | Imitation Learning is the task of mimicking the behavior of an expert player in a Reinforcement Learning (RL) Environment to enhance the training of a fresh agent (called novice) beginning from scratch. Most of the Reinforcement Learning environments are stochastic in nature, i.e., the state sequences that an agent may encounter usually follow a Markov Decision Process (MDP). This makes the task of mimicking difficult as it is very unlikely that a new agent may encounter the same or similar state sequences as an expert. Prior research in Imitation Learning proposes various ways to learn a mapping between the states encountered and the respective actions taken by the expert while mostly being agnostic to the order in which these were performed. Most of these methods need a considerable number of state-action pairs to achieve good results. We propose a simple alternative to Imitation Learning by appending the novice’s action space with the frequent short action sequences that the expert has taken. This simple modification, surprisingly, improves the exploration and significantly outperforms alternative approaches like Dataset Aggregation. We experiment with several popular Atari games and show significant and consistent growth in the score that the new agents achieve using just a few expert action sequences. | [
"Reinforcement Learning",
"Imitation Learning",
"Atari",
"A3C",
"GA3C"
] | https://openreview.net/pdf?id=HJfxbhR9KQ | https://openreview.net/forum?id=HJfxbhR9KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeDl48bxE",
"ryeDrw8qkN",
"HygO5LHq0Q",
"SJljw8S90X",
"B1g14LSq07",
"BkxqR35J07",
"BJgTCSrEp7",
"Hkgw1asmhX",
"ByxMU9pvjm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544803310629,
1544345407502,
1543292559828,
1543292514886,
1543292455284,
1542593745654,
1541850581442,
1540762846718,
1539983945935
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1140/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1140/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1140/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1140/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1140/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1140/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1140/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1140/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1140/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an interesting idea for more effective imitation learning. The idea is to include short actions sequences as labels (in addition to the basic actions) in imitation learning. Results on a few Atari games demonstrate the potential of this approach.\\n\\nReviewers generally like the idea, think it is simple, and are encouraged by its empirical support. That said, the work still appears somewhat preliminary in the current stage: (1) some reviewer is still in doubt about the chosen baseline; (2) empirical evidence is all in the similar set of Atari games --- how broadly is this approach applicable?\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Nice work with potential, but contributions need to be strengthened\"}",
"{\"title\": \"Rating remains the same\", \"comment\": \"Dear authors,\\n\\nThank you for your clarifications and an additional comparison with a baseline using a random subset of action pairs.\\n\\nThe main idea of this paper is interesting. However, my major concern is still in the experimental part, as it remains unclear to me how we should use the method. Many of the hyperparameters are empirically selected and lack a systematic evaluation, e.g. the number and the length of the \\\"meta-actions\\\".\\n\\nYour previous response has shown the performance on action-triplets. It is surprising to me that longer \\\"meta-actions\\\" leads to worse performance, which is somewhat against the main vein of the paper: the training difficulty is not clearly described. It would be better to show the correlation between the number of available demonstrations and the length/number of \\\"meta-actions\\\" we should adopt.\\n\\nI'm still confused about the selection of the baseline. Again, InfoGAIL is proposed to imitate multi-modal expert demonstrations. The tasks used in the paper do not seem to be in this particular setting. GAIL [1] might be a more suitable baseline.\\n\\nAs a result, my rating will stay the same for now, but I encourage the authors to keep on improving the paper.\\n\\n[1] Jonathan Ho, Stefano Ermon. \\\"Generative Adversarial Imitation Learning\\\". In NIPS 2016.\"}",
"{\"title\": \"Thank you very much for the positive comments and suggestion about another baseline. Clarifications are given below.\", \"comment\": \"Please check the updated figures in our paper that include the comparison of a random subset of action pairs vs the most frequent action pairs (the line in magenta). The new plots strengthen our proposal that the most-frequent action pairs have useful information.\\n\\nQ1. Ideally, an expert should be consistent with the action pair distribution over a set of few episodes. In our analysis, we found that the frequent action pairs after 12 hrs, 13 hrs, 14 hrs and 15 hrs of training the expert network are consistent. Hence, it is evident that after training the expert network for reasonable time, the top action pairs saturate. We have made our choice more concrete by training all expert networks for 15 hrs.\\n\\nQ2. As mentioned in the paper, imitation learning algorithms presume that the expert information is available beforehand. We just substitute human data with a pre-trained network. Collecting traces of human data is a fast and viable but it is highly dependent on the task/game. In our case, assuming access to expert action sequences, calculating the frequency distribution and obtaining top action sequences is a trivial task with few seconds of time. \\n\\nQ3. Please check the new plots for random subset of action-pairs. As for adding all possible action-pairs, the action space grows exponentially, and the network must classify lot more classes with the same information. With games like FishingDerby and Asteroids (18 and 14 actions), it become too hard for network to classify hundreds of classes with same information.\\n\\nQ4. Action triplets are inconsistent and statistically insignificant with limited demonstration: Our focus was on using very limited (small) demonstration. 
The number of episodes that we use is quite small (25 episodes each with actions ranging from 700 to 7000) as we wanted very limited demonstration. We observe that with such limited demonstration, only action-pairs are reliable. The frequent action triplets after 12 hrs, 13 hrs, 14 hrs and 15 hrs of training the expert network are different each time. Furthermore, for the game FishingDerby with 18 basic actions, the top 18 action pairs account for 33.85% of all the action-pairs in the 25 expert episodes. The top 18 action triplets account for just 7.36% of all triplets in the same 25 episodes. Even for other games, we have a similar discrepancy for action pairs vs triplets (DemonAttack-27.21% vs 8.87%, Asteroids-21.67% vs 6.37%, Atlantis 31.32% vs 9.61%, SpaceInvaders-28.82% vs 10.24%, BeamRider-14.78% vs 2.81%, TimePilot-15.41% vs 2.05%, Qbert-67% vs 51%). We still experimented with 3-step actions and noticed that for Atlantis, action triplets outperform action-pairs, which is great. But for other games, action-triplets perform worse than action-pairs. \\n\\nQ5. Thank you for the suggestion. We\\u2019ll investigate search algorithms in the future to identify informative action-sequences. One class of models that we mentioned in the paper is \\u2018Options Framework\\u2019. The main drawback of the Options Framework is that we need human-designed options. Our work is a generic way of identifying options.\\n\\nThank you for spotting typos. We have fixed them in the latest revision.\"}",
"{\"title\": \"Thank you for the review. Clarifications are given below.\", \"comment\": \"Q1. We would like to stress that our setting, also clearly mentioned in the paper at several places, is standard imitation learning setting, where access to expert information is given input to the algorithm. We do not need any GA3C training. It is a proxy to generate very few expert action sequences. For the other imitation learning baselines, the same pretrained GA3C training is used as a proxy for expert. Hence, it is a fair comparison.\\n\\nQ2. The memory advantage of our approach is quite straight forward. Out of all imitation baseline, only our method does not need to store state information at all. We only need few action sequences for ~25 episodes (each with a few 1000 integers) which takes trivially low memory. On the other hand, to store any reasonable (say 10000) state-action pairs of an expert in an environment, we will need at least 4032MB memory. Please note that each state is an image is originally 210*160*3 dimensional.\\n\\nQ3. As we understand, you\\u2019re concerned about difference in variance when we plot episode-wise and time-wise. Please note that we ran all the 5 runs of each game for 15 hrs. But the number of episodes in each run is different. For Atlantis game, the number of episodes range between 9114 to 10366. In our episode-wise plots, we only show the mean and variance of first 9114 episodes for each run. Hence, even though the time-wise and episode-wise plots are generated from the same output, the variance is higher for episode-wise plot. This is more glaring on Atlantis game as our idea gets much higher score than the baselines.\\n\\nWe believe we have answered all your questions. If you have any questions on reproducibility, we\\u2019ve our code ready for release once the review period is over. Since our idea is simple and very effective, it needs more visibility so that more investigation can be made on this idea. 
Simplicity is the very reason why we can beat GA3C by a significant margin. If the idea is not computationally simple, most likely it won\\u2019t beat GA3C (a highly optimized implementation on GPUs) in running time. We hope you will change your opinion about the overall score.\"}",
"{\"title\": \"Thank you for the review. Clarifications are provided below.\", \"comment\": \"Q1. Action triplets are inconsistent and statistically insignificant with limited demonstration: Our focus was on using very limited (small) demonstration. The number of episodes that we use is quite small (25 episodes each with actions ranging from 700 to 7000) as we wanted very limited demonstration. We observe that with such limited demonstration, only action-pairs are reliable. An expert should be consistent with the frequent action pairs/triplets over a set of episodes. In our analysis, we found that the frequent action pairs are consistent after 12 hrs, 13 hrs, 14 hrs and 15 hrs of training the expert network. The action pairs at different time instants in training were just permutations of each other. The same was not true for action triplets. Furthermore, for the game FishingDerby with 18 basic actions, the top 18 action pairs account for 33.85% of all the action-pairs in the 25 expert episodes. The top 18 action triplets account to just 7.36% of the all triplets in the same 25 episodes. Even for other games, we have similar discrepancy for action pairs vs triplets (DemonAttack-27.21% vs 8.87%, Asteroids-21.67% vs 6.37%, Atlantis 31.32% vs 9.61%, SpaceInvaders-28.82% vs 10.24%, BeamRider-14.78% vs 2.81%, TimePilot-15.41% vs 2.05%, Qbert-67% vs 51%). We still experimented with 3-step actions and noticed that for Atlantis, action triplets outperform action-pairs which Is great. But for other games, action-triplets perform worse than action-pairs.\\n\\nQ2. Thank you for the suggestion. It is an interesting exercise to interpret frequent action pairs.\\n\\nQ3. The memory advantage of our approach is quite straight forward. Out of all imitation baseline, only our method does not need to store state information at all. 
Even when we are resizing images to 84*84*4, we need several thousands of those images to get a noticeable advantage when compared to having no information at all.\\n\\nQ4. InfoGAIL is one of the most recent techniques in Imitation Learning. Hence, we wanted to compare against InfoGAIL and ensure that we are not missing any subtleties. For our Dagger implementation, we used the simple parameter-free version of beta=Indicator(i=1), i.e., 1 for the first episode and then 0 from the second episode.\\n\\nQ5. Thanks for the great suggestion! We were intending to explore this direction in the future on continuous action spaces by binning continuous values to discrete.\\n\\nThank you for spotting the typos; we have corrected them in the current version. Since our idea is simple and very effective, it needs more visibility so that more investigation can be made on this idea. Simplicity is the very reason why we can beat GA3C by a significant margin. If the idea is not computationally simple, most likely it won\\u2019t beat GA3C (a highly optimized implementation on GPUs) on running time. We hope you will change your opinion about the overall score.\"}",
"{\"title\": \"Additional experiments and revision\", \"comment\": \"We have added the plots for another baseline, which is to choose a random subset of action pairs and append them to the original action space. Please check the new plots in magenta in Figure 2. These plots strengthen our hypothesis that the frequent action pairs have useful information that random action pairs do not.\"}",
"{\"title\": \"need more in-depth analysis\", \"review\": \"[Summary]\\n\\nThis paper presents an interesting idea: to append the agent's action space with the expert's most frequent action pairs, by which the agent can perform better exploration so as to achieve the same performance in a shorter time. The authors show performance gain by comparing their method with two baselines - Dagger and InfoGAIL.\\n\\n\\n[Strengths]\\n\\nThe proposed method is simple yet effective, and I really like the analogy to mini-moves in sports as per the motivation section.\\n\\n\\n[Concerns]\\n\\n- How to choose the number and length of the action sequences?\\nThe authors empirically add the same number of expert's action sequences as the basic ones and select the length k as 2. However, no ablation studies are performed to demonstrate the sensitivity of the selected hyperparameters. Although the authors claim that \\\"we limit the size of meta-actions k to 2 because large action spaces may lead to poor convergence\\\", a more systematic evaluation is needed. How will the performance change if we add more and longer action sequences? When will the performance reach a plateau? How does it vary between different environments?\\n\\n- Analysis of the selected action sequences.\\nIt might be better to add more analysis of the selected action sequences. What are the most frequent action pairs? How do they differ from game to game? What if the action pairs are selected in a random fashion?\\n\\n- Justification of the motivation\\nThe major motivation of the method is to release the burden of memory overheads. However, no quantitative evaluations are provided to justify the claim. Considering that the input images are resized to 84x84, storing them should not be particularly expensive.\\n\\n- The choice of baseline.\\nInfoGAIL (Li et al., 2017) is proposed to identify the latent structures in the expert's demonstration, hence it is not clear to me how it suits the tasks in the paper. 
The paper also lacks details describing how they implemented the baselines, e.g. beta in Dagger and the length of the latent vector in InfoGAIL.\\n\\n- The authors only show experiments in Atari games, where the action space is discrete. It would be interesting to see if the idea can generalize to continuous action spaces. Is it possible to cluster the expert action sequences and form some basis for the agent to select?\\n\\n- Typos\\n{LRR, RLR/RRL} --> {LRR, RLR, RRL}\\nsclability --> scalability\\nwe don't need train a ... --> we don't need to train a ...\\nAtmost --> At most\\n\\n\\n[Recommendation]\\n\\nThe idea presented in the paper is simple yet seemingly effective. However, the paper lacks a proper evaluation of the proposed method, and I don't think this paper is ready with the current set of experiments. I will decide my final rating based on the authors' response to the above concerns.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Borderline paper\", \"review\": \"The paper proposes an idea of using the most frequent expert action sequences to assist the novice, which, as claimed, has lower memory overhead than other imitation learning methodologies. The authors present a comparison of their proposed method with the state-of-the-art and show its superior performance. However, I do have the following few questions.\\n\\n1. The proposed method requires a long time of GA3C training. How is that a fair comparison in Figure 2, where the proposed method already has a lead over GA3C? It could be argued that it's not using all of the training outcome, but have the authors considered other forms of experts and seen how that works?\\n\\n2. The authors claimed one of the advantages of their method is reducing the memory overhead. Some supporting experiments will be more convincing.\\n\\n3. In Figure 3, atlantis panel, the score shows huge variance, which is not seen in Figure 2. Are they generated from the same runs? Could the authors give some explanation of the phenomenon in Figure 3?\\n\\nOverall, I think the paper has an interesting idea. But the above unresolved questions raise some challenges to its credibility and reproducibility.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A well written paper with a simple, yet powerful, idea that needs further analysis\", \"review\": \"The paper describes an imitation reinforcement learning approach where the primitive actions of the agent are augmented with the most common sequences of actions performed by experts. It is experimentally shown how this simple change has clear improvements in the performance of the system in Atari games. In practice, the authors double the number of primitive actions with the most frequent double actions performed by experts.\\n\\nA positive aspect of this paper comes from the simplicity of the idea. There are, however, several issues that should be taken into account:\\n- It is not clear how to determine when the distribution of action pairs saturates. This is relevant for the use of the proposed approach.\\n- The total training time should consider both the initial time to obtain the extra pairs of frequent actions plus the subsequent training time used by the system, either obtained from a learning system (15 hours) or by collecting traces of human experts (< 1 hour?).\\n- It would be interesting to see the performance of the system with all the possible pairs of primitive actions and with a random subset of these pairs, to show the benefits of choosing the most frequent pairs used by the expert.\\n- This analysis could be easily extended to triplets and so on, as long as they are the most frequently used by experts.\\n- The inclusion of macro-actions has been extensively studied in search algorithms. In general, the utility of those macros depends on the effectiveness of the heuristic function. Perhaps the authors could revise some of the literature.\\n- Choosing the most frequent pairs over the whole game may not be a suitable strategy. 
Some sequences of actions may be more frequent (important) at certain stages of the game (e.g., at the beginning/end of the game), and the most frequent sequences over the whole game may introduce additional noise in those cases.\\n\\nThe paper is well written and easy to follow; there are, however, some small typos:\\n- expert(whose => expert (whose\\n- there are several places where there is no space between a word and its following right parenthesis\\n- don't need train => don't need to train\\n- experiments4. => experiments.\\n- Atmost => At most\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJxeWnCcF7 | Learning Mixed-Curvature Representations in Product Spaces | [
"Albert Gu",
"Frederic Sala",
"Beliz Gunel",
"Christopher Ré"
] | The quality of the representations achieved by embeddings is determined by how well the geometry of the embedding space matches the structure of the data.
Euclidean space has been the workhorse for embeddings; recently hyperbolic and spherical spaces have gained popularity due to their ability to better embed new types of structured data---such as hierarchical data---but most data is not structured so uniformly.
We address this problem by proposing learning embeddings in a product manifold combining multiple copies of these model spaces (spherical, hyperbolic, Euclidean), providing a space of heterogeneous curvature suitable for a wide variety of structures.
We introduce a heuristic to estimate the sectional curvature of graph data and directly determine an appropriate signature---the number of component spaces and their dimensions---of the product manifold.
Empirically, we jointly learn the curvature and the embedding in the product space via Riemannian optimization.
We discuss how to define and compute intrinsic quantities such as means---a challenging notion for product manifolds---and provably learnable optimization functions.
On a range of datasets and reconstruction tasks, our product space embeddings outperform single Euclidean or hyperbolic spaces used in previous works, reducing distortion by 32.55% on a Facebook social network dataset. We learn word embeddings and find that a product of hyperbolic spaces in 50 dimensions consistently improves on baseline Euclidean and hyperbolic embeddings, by 2.6
points in Spearman rank correlation on similarity tasks
and 3.4 points on analogy accuracy.
| [
"embeddings",
"non-Euclidean geometry",
"manifolds",
"geometry of data"
] | https://openreview.net/pdf?id=HJxeWnCcF7 | https://openreview.net/forum?id=HJxeWnCcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1l3VC9rl4",
"BkevXvEQ0Q",
"H1exAN8qaQ",
"B1la9NI96Q",
"S1lIwN896m",
"SygOB4LcTX",
"ryljZN8caX",
"r1gDfZ8ca7",
"Byxc9eI9am",
"Hklpflm6h7",
"ryxIIdud37",
"r1x-tAv3jm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545084467634,
1542829855364,
1542247623863,
1542247572729,
1542247518397,
1542247488514,
1542247427488,
1542246670529,
1542246546051,
1541382164703,
1541077070013,
1540288120660
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1139/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1139/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1139/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1139/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a novel framework for tractably learning non-eucliean embeddings that are product spaces formed by hyperbolic, spherical, and Euclidean components, providing a heterogenous mix of curvature properties. On several datasets, these product space embeddings outperform single Euclidean or hyperbolic spaces. The reviewers unanimously recommend acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel framework for learning non-euclidean embeddings\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate the reviewer's thoughtful reading of our response and reconsidering the score. After considering the reviewer\\u2019s points, we have changed the title of the submission to \\\"Learning Mixed-Curvature Representations in Products of Model Spaces\\\" in order to more accurately reflect that we perform embeddings specifically into products of hyperbolic, Euclidean, and spherical spaces (the traditional Riemannian \\u201cmodel spaces\\u201d of constant curvature [1]). We have clarified this throughout the paper (e.g. statement of Lemma 2). In the revised draft, we also make it more explicit that we follow notation from standard references and recent work in this area such as [2].\\n\\nBeyond feedback on details, we welcome further comments on the overall merits of our approach and its contributions to representation learning. We believe our contributions are now more accurately reflected in the title and claims, and look forward to the reviewer\\u2019s evaluation of their intrinsic value.\\n\\n[1] John Lee. Riemannian manifolds: an introduction to curvature\\n[2] Nickel and Kiela. Poincar\\u00e9 embeddings for learning hierarchical representations.\"}",
"{\"title\": \"Addressing your concerns\", \"comment\": \"We welcome the reviewer's detailed questions and suggestions on the technical presentation of our paper, and we appreciate the opportunity to improve it. To the best of our understanding, many of the reviewer's questions are addressed in the submitted draft, or pertain to standard notation and arguments. Nevertheless, we respond to the reviewer\\u2019s comments in detail below, clarifying ideas or pointing out specific lines where questions are answered.\\n\\nWe sincerely hope that our response clarifies any potential notational confusions, and we look forward to further engaging in a substantial discussion on the overall merits of our work.\\n\\nAll pages and lines referenced refer to the original submission.\"}",
"{\"title\": \"Line-by-line response (1)\", \"comment\": [\"Page 2: What are p_i, i=1,2,...,n, their set T and \\\\mathcal{P}?\", \"This refers to an arbitrary set T containing points p_1,...,p_n on a manifold P, for which we wish to define a mean.\", \"What is | | used to compute distortion between a and b?\", \"Absolute value\", \"Please fix the definition of the Riemannian manifold, such that M is not just any manifold, but should be a smooth manifold or a particular differentiable manifold. Please update your definition more precisely, by checking page 328 in J.M. Lee, Introduction to Smooth Manifolds, 2012, or Page 38 in do Carmo, Riemannian Geometry, 1992.\", \"Yes, it is a smooth manifold, as specified in the first line of the \\u201cProduct Manifolds\\u201d paragraph.\", \"Please define \\\\mathcal{P} in equation (1).\", \"\\\\mathcal{P} is a product manifold.\", \"Define K used in the definition of the hyperboloid more precisely.\", \"K is an arbitrary constant that indexes the curvature. This is described in the first paragraph of section \\u201cLearning the curvature\\u201d.\", \"Please provide proofs of these statements for product of manifolds with nonnegative and nonpositive curvatures: \\u201cIn particular, the squared distance in the product decomposes via (1). In other words, dP is simply the l2 norm of the component distances dMi.\\u201d\", \"The given statement is a standard fact about products of Riemannian manifolds: some classical references are [Levy] and [Ficken], although the result is stated directly in, e.g., [TS, pg. 81, eq. (4.19)]. Here is a sketch of the proof: first, the Levi-Civita connection on the manifold decomposes along the product components [DoCarmo Ex. 6.1]. This implies that the acceleration is 0 iff it is 0 in each component; in other words, geodesics in the product manifold decompose into geodesics in each of the factors. 
The distance function\\u2019s decomposition follows from the additivity of the Riemannian metric, i.e. |\\\\dot{\\\\gamma}(t)| = \\\\sqrt{\\\\dot{\\\\gamma_1}(t)^2 + \\\\dot{\\\\gamma_2}(t)^2}.\", \"Please explain what you mean by \\u201cwithout the need for optimization\\u201d in\\u2026 In addition, how can you compute geodesic etc. if you use l1 distance for the embedded space?\", \"We are referring to embedding algorithms that do not require optimizing a loss function via, for example, gradient descent. This concept is detailed in Appendix C.3. For example, the second paragraph on page 19 shows how to embed a cycle by explicitly writing down the coordinates of the points, with no optimization. Similarly, for hyperbolic space, the combinatorial construction previously studied in [Sarkar, SDGR] embeds trees in hyperbolic space without optimization.\", \"Additionally, it is explicitly mentioned in the first line of the corresponding paragraph that the alternative distances proposed are meant to \\u201cignore the Riemannian structure\\u201d, because many common applications of embeddings such as link prediction do not actually require Riemannian manifold structure, or related notions such as geodesics. Conversely, the motivation for the application in Section 4.2 is to show a task where manifold structure and geodesics are actually required, where the (Riemannian) product is effective.\", \"By equation (2), the paper focuses on embedding graphs, which is indeed the main goal of the paper. Therefore, first, the novelty and claims of the paper should be revised for graph embedding. Second, three particular spaces are considered in this work, which are the sphere, hyperbolic manifold, and Euclidean space. Therefore, you cannot simply state your novelty for a general class of product spaces. 
Thus, the title, novelty, claims and other parts of the paper should be revised and updated according to the particular input and output spaces of embeddings considered in the paper.\", \"Our embedding technique is not limited to graphs, and indeed we perform word embeddings into product manifolds as described in Section 4.2. Graphs, however, are used as a standard metric for non-Euclidean embeddings [NK1, SDGR, NK2], and so we evaluate our approach on a variety of graphs in Section 4.1. The language of graphs is also convenient for stating some of our results, but not necessary, as described in Footnote 1.\", \"The three particular spaces are the standard spaces of constant curvature, which has been considered in previous work. Our claimed novelty is in combining these using the Riemannian product construction to perform efficient embeddings into mixed-curvature spaces, as stated in the abstract (3rd sentence), introduction (3rd paragraph), and many other places throughout.\", \"Please explain how you compute the metric tensor g_P and apply the Riemannian correction (multiply by the inverse of the metric tensor g_P) to determine the Riemannian gradient in the Algorithm 1, more precisely.\", \"This is standard, as in [NK1,NK2, WL]. The only place it is necessary for us is for the hyperbolic components in Step (9).\"]}",
"{\"title\": \"Line-by-line response (2)\", \"comment\": \"- Step (9) of the Algorithm 1 is either wrong, or you compute v_i without projecting the Riemannian gradient. Please check your theoretical/experimental results and code according to this step.\\n\\nThere is a typo; the RHS should have v_i instead of h_i.\\n\\n\\n- What is h_i used in the Algorithm 1? Can we suppose that it is the ith component of h?\\n\\nh_i refers to the coordinates corresponding to the i-th component or factor.\\n\\n\\n- In step (6) and step (8), do you project individual components of the Riemannian gradient to the product manifold? Since their dimensions are different, how do you perform these projections, since definitions of the projections given on Page 5 cannot be applied? Please check your theoretical/experimental results and code accordingly.\\n\\nEach projection is within its component; the text mentions each component is handled independently. A subscript i has been added to the RHS of steps (6),(8).\\n\\n\\n- Please define exp_{x^(t)_i}(vi) and Exp(U) more precisely. I suppose that they denote exponential maps.\\n\\nExp denotes the exponential map as defined in Section 2. The image Exp(U) refers to the standard notation f(S) := {f(s) : s \\\\in S} where S is a set.\\n\\n\\n- How do you initialize x^(0) randomly?\\n\\nThe initialization scheme depends on the application. An example of a standard initialization selects each coordinate of x^(0) either uniform or Gaussian with std on the order of 1e-2 to 1e-3 [NK1, LW], which is what we also use in our empirical evaluation. We have clarified this in Appendix D.\\n\\n- The notation is pretty confusing and ambiguous. First, does x belong to an embedded Riemannian manifold P or a point on the graph, which will be embedded? According to equation (2), they are on the graph and they will be embedded. According to Algorithm 1, x^0 belongs to P, which is a Riemannian manifold as defined before. 
So, if x^(0) belongs to P, then L is already defined from P to R (in input of the Algorithm 1). Thereby, gradient \\\\nabla L(x) is already a Riemannian gradient, not the Euclidean gradient, while you claim that \\\\nabla L(x) is the Euclidean gradient in the text.\\n\\nx is the manifold point to be optimized. The notation \\\\nabla L(x) is defined to be the Euclidean gradient at the bottom of page 4 of the initial submission. Note that this is the gradient of the embedding into ambient space; this is standard as in [NK2, WL].\\n\\n\\n- Overall, Algorithm 1 just performs a projection of Riemannian or Euclidean gradient \\\\nabla L(x) onto a point v_i for each ith individual manifold. Then, each v_i is projected back to a point on an individual component of the product manifold by an exponential map.\\n\\nThat is correct.\\n\\n- What do you mean by \\u201csectional curvature, which is a function of a point p and two directions x; y from p\\u201d? Are x and y not points on a manifold?\\n\\nAs mentioned earlier in the section, sectional curvature is a function of a point p and two directions (i.e. tangent vectors) u,v. However, tangent vectors can be identified with points on the manifold via geodesics (i.e. through Exp). The way our discrete curvature estimation is described in this section is analogous to other discrete curvature analogs [B]. For example, the Ricci curvature is defined for a point p and a tangent vector u, and the coarse Ricci curvature is defined for a node p and neighbor x [Ollivier2].\\n\\n\\n- You define \\\\xi_G(m;b,c) for curvature estimation for a graph G. However, the goal was to map G to a Riemannian manifold. Then, do you also consider that G is itself a Riemannian manifold, or a submanifold?\\n\\nG is a graph and does not have manifold structure. The goal of \\\\xi is to provide a discrete analog of curvature which satisfies similar properties to curvature and facilitates choosing an appropriate Riemannian manifold to embed G into. 
There are other similar notions of discrete curvature on graphs, for example the Forman-Ricci [WSJ] and Ollivier-Ricci [Ollivier1] curvatures.\\n\\n\\n- What is P in the statement \\u201cthe components of the points in P\\u201d in Lemma 2?\\n\\nIt is the product manifold. We have changed it to \\\\mathcal{P}.\\n\\n\\n- What is \\\\epsilon in Lemma 2?\\n\\n\\\\epsilon refers to a desired tolerance within which to compute the solution, in this case the mean. This is also explicitly mentioned in the last line of the second to last paragraph of Section 1. This is standard notation for gradient descent-based rates.\"}",
"{\"title\": \"Line-by-line response (3)\", \"comment\": \"- How do you optimize positive w_i, i=1,2,...,n?\\n\\nBy convention, the weights w_i are constants independent of the optimization. For example, to compute the standard Euclidean mean one would take w_i = 1/n for all i.\\n\\n\\n- What is the \\u201cgradient descent\\u201d refered to in Lemma 2?\\n\\nThe usual Riemannian gradient descent, since it is a manifold.\\n\\n\\n- Please provide computational complexity and running time of the methods.\\n\\nThe complexity of the Karcher mean algorithm is O(nr log epsilon^(-1)), as described on Page 2, PP 3, line 4. The convergence rate of RSGD is standard [ZS]: it converges to a stationary point with rate O(c/t), where c is a constant and t is the number of iterations. Algorithms 2 and 3 find good estimates of the corresponding distributions in a small number (~10^4) of samples; each sample requires constant time for both algorithms.\\n\\n\\n- Please define \\\\mathbb{I}_r.\\n\\nThis is standard notation for the r x r identity matrix, but we have explicitly defined it now.\\n\\n\\n- At the third line of the first equation of the proof of Lemma 1, there is no x_2. Is this equation correct?\\n\\nThe second R_1(x_1, y_1)x_1 should be R_2(x_2, y_2)x_2, which follows from directly applying equation (5) to the previous line.\\n\\n\\n- If at least of two of x1, y1, x2 and y2 are linearly dependents, then how does the result of Lemma 1 change?\\n\\nThe result does not change.\\n\\n\\n- Statements and results given in Lemma 1 are confusing. According to the result, e.g. for K=1, curvature of product manifold of sphere S and Euclidean space E is 1, and that of E and hyperbolic H is 0. Then, could you please explain this result for the product of S, E and H, that is, explain the statement \\u201cThe last case (one negative, one positive space) follows along the same lines.\\u201d? 
If the curvature of the product manifold is non-negative, then does it mean that the curvature of H is ignored in the computations?\\n\\nIn the case of a product of E and H, the sectional curvature ranges in [-1,0]. The line \\u201cand similarly for K_1, K_2 non-positive\\u201d implies that in the non-positive case we have K(u,v) \\\\in [min(K_1, K_2), 0], since everything is negated.\\n\\n\\n- What is \\\\gamma more precisely? Is it a distribution or density function? If it is, then what does (\\\\gamma+1)/2 denote?\\n\\n\\\\gamma is a random variable which is distributed as the dot product of two uniformly random unit vectors, as defined on the bottom of page 16. Hence (\\\\gamma+1)/2 is a well-defined random variable.\\n\\n\\n- The statements related to use of Algorithm 1 and SGD to optimize equation (2) are confusing. Please explain how you employed them together in detail.\\n\\nEquation (2) is a loss function from \\\\mathcal{P}^n to \\\\mathbb{R} where the embeddings x_i are variables, and can thus be optimized using RSGD (Algorithm 1) on each point simultaneously. This is the same approach taken in previous works [NK1, SDGR, NK2] for the case of single space embeddings.\\n\\n\\n- On estimation of K_1, K_2 and matching moments\\n\\nAlgorithm 2 and 3 both produce distributions. Moment matching (or the method of moments) is a standard term referring to parameter estimation via equating the moments of distributions. More details have been added to the revised draft.\\n\\n\\n- Please define, \\u201crandom (V)\\u201d, \\u201crandom neighbor m\\u201d and \\u201c\\\\delta_K/s\\u201d used in Algorithm 3 more precisely.\\n\\nWe have clarified that the random sampling is uniform. \\\\delta_K refers to the delta function.\"}",
"{\"title\": \"References\", \"comment\": \"[B] Bauer et al. Modern Approaches to Discrete Curvature. Lecture Notes in Mathematics\\n[Ficken] Ficken, \\u201cThe Riemannian and Affine Differential Geometry of Product-Spaces\\u201d, Annals of Math., 1939.\\n[LW] Leimeister and Wilson. Skip-gram word embeddings in hyperbolic space.\\n[Levy] Levy, \\\"Symmetric Tensors of The Second Order Whose Covariant Derivatives Vanish\\\", Annals of Math., 1926.\\n[NK1] Nickel and Kiela. Poincar\\u00e9 embeddings for learning hierarchical representations.\\n[NK2] Nickel and Kiela. Learning continuous hierarchies in the Lorentz model of hyperbolic geometry.\\n[Ollivier1] Ollivier. Ricci curvature of Markov chains on metric spaces.\\n[Ollivier2] Ollivier. A visual introduction to Riemannian curvatures and some discrete generalizations.\\n[SDGR] Sala, De Sa, Gu, R\\u00e9. Representation tradeoffs for hyperbolic embeddings.\\n[Sarkar] Sarkar. Low distortion Delaunay embedding of trees in hyperbolic plane.\\n[TS] Turaga and Srivastava, Riemannian Computing in Computer Vision, Springer 2016.\\n[WSJ] Weber, Saucan, and Jost. Characterizing complex networks with Forman-Ricci curvature and associated geometric flows.\\n[WL] Wilson and Leimeister. Gradient descent in hyperbolic space.\\n[ZS] Zhang and Sra. First-order methods for geodesically convex optimization.\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate the reviewer\\u2019s thoughtful feedback on our work.\\n\\n- On the definition of K\\n\\nK is a constant that parametrizes the curvature of the model spaces (hyperbolic, Euclidean, and spherical); for any constant K, there is a corresponding space with curvature K. In our notation, \\\\mathbb{E}^d has curvature 0, \\\\mathbb{S}^d_K has curvature K, and \\\\mathbb{H}^d_K has curvature -K.\\n\\n\\n- On the use of the signature estimation\\n\\nTable 2 does not use Algorithms 2 and 3, instead using Algorithm 1 with a variety of signatures to show the interaction between signature and dataset. For every experiment, the curvatures are initialized to -1, 0, or 1 for H, E, and S components resp., and learned using the method described in Section 3.1; this is what is reported in the Best model. These details have been clarified in Appendix D.\\n\\nAs the reviewer has correctly observed, Algorithm 1 can be initialized with the estimated signature from Algorithms 2 and 3, which saves on hyperparameter searching and computation time. Table 3 shows that this method would indeed choose the best signature among the two-component options.\\n\\n\\n- On comparison vs ISOMAP\\n\\nWe thank the reviewer for pointing out ISOMAP, a non-linear dimensionality reduction algorithm. We ran an experiment to compare against our proposed techniques. We first embedded the graphs from Section 4.1 into a higher (100) dimensional Euclidean space, then used ISOMAP to reduce the dimension to 10 in order to compare the average distortion against the product manifolds from Section 4.1. We saw d_avg values for PhDs/Facebook/Power Graph/Cities of 0.4085 / 2.2295 / 0.4863 / 0.3711. 
We hypothesize that while ISOMAP can be good for dimensionality reduction for an already-good Euclidean embedding (with many dimensions), it does not perform as well as our technique for situations when the higher-dimensional Euclidean embeddings themselves have non-zero distortion---nor can it capture the mixed-curvature manifolds our approach offers.\\n\\n\\n- On the link between the different contributions \\n\\nThe operational flow is the following. We start with the data to be embedded. We \\n\\n(i) seek an appropriate space to embed it in (in order to get a high-quality representation). To find what this embedding space should be, we estimate the signature (Section 3.2). More concretely, we use Algorithm 3 to estimate the distribution of discrete curvature of the data and Algorithm 2 to find a matching product manifold. This yields the \\\"signature\\\", i.e., the number of factors and each factor's type and dimension for our product manifold.\\n\\nWe have now selected an embedding space, and we \\n\\n(ii) perform the embedding. This is done via Algorithm 1 (RSGD) in Section 3.1.\\n\\nNow we have an embedding. There are many further tasks to be done with these representations. Perhaps the most fundamental is to take the mean of the representations for a subset of the data. Since our embeddings are into a product manifold, this requires a slightly more sophisticated approach; we\\n\\n(iii) compute this mean via the Karcher mean detailed in Section 3.3.\\n\\n\\n- On the complexity of learning the product space and the limited data sample regime\\n\\nThis is an excellent point. We point out that (1) Optimization in the sphere and hyperboloid has the same complexity up to a constant as in Euclidean space, so that the complexity of our product manifold proposal is roughly the same as using SGD to produce typical embeddings, as we simply use R-SGD on the factor spaces. 
(2) The heuristic for choosing a space is very cheap (i.e., Algorithms 2 and 3) compared to the main embedding procedure, and is better suited for simple products anyways, avoiding the sample complexity issue of a large search space. Indeed, we do not seek to embed into higher dimensional spaces: our approach shows good results with few dimensions in a product space.\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate the reviewer\\u2019s positive comments about our work.\"}",
"{\"title\": \"Solid Paper\", \"review\": \"This paper proposes a new method to embed a graph onto a product of spherical/Euclidean/hyperbolic manifolds. The key is to use sectional curvature estimations to determine proper signature, i.e., all component manifolds, and then optimize over these manifolds. The results are validated on various synthetic and real graphs. The proposed idea is new, nontrivial, and is well supported by experimental evidence.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"The problem studied in the paper is interesting. However, there are various mathematical and theoretical problems with the paper, some of which are mentioned below. In addition, the claims and novelty of the paper fall short in the provided methods and results.\", \"review\": \"\", \"page_2\": \"What are p_i, i=1,2,...,n, their set T and \\\\mathcal{P}?\\n\\nWhat is | | used to compute distortion between a and b?\\n\\nPlease fix the definition of the Riemannian manifold, such that M is not just any manifold, but should be a smooth manifold or a particular differentiable manifold. Please update your definition more precisely, by checking page 328 in J.M. Lee, Introduction to Smooth Manifolds, 2012, or Page 38 in do Cormo, Riemannian Geometry, 1992.\\n\\nPlease define \\\\mathcal{P} in equation (1).\\n\\nDefine K used in the definition of the hyperboloid more precisely.\", \"please_provide_proofs_of_these_statements_for_product_of_manifolds_with_nonnegative_and_nonpositive_curvatures\": \"\\u201cIn particular, the squared distance in the product decomposes via (1). In other words, dP is simply the l2 norm of the component distances dMi.\\u201d\\n\\nPlease explain what you mean by \\u201cwithout the need for optimization\\u201d in \\u201cThese distances provide simple and interpretable embedding spaces using P, enabling us to introduce combinatorial constructions that allow for embeddings without the need for optimization.\\u201d In addition, how can you compute geodesic etc. if you use l1 distance for the embedded space?\\n\\nBy equation (2), the paper focuses on embedding graphs, which is indeed the main goal of the paper. Therefore, first, the novelty and claims of the paper should be revised for graph embedding. Second, three particular spaces are considered in this work, which are the sphere, hyperbolic manifold, and Euclidean space. Therefore, you cannot simply state your novelty for a general class of product spaces. 
Thus, the title, novelty, claims and other parts of the paper should be revised and updated according to the particular input and output spaces of embeddings considered in the paper. \\n\\nPlease explain how you compute the metric tensor g_P and apply the Riemannian correction (multiply by the inverse of the metric tensor g_P) to determine the Riemannian gradient in the Algorithm 1, more precisely. \\n\\nStep (9) of the Algorithm 1 is either wrong, or you compute v_i without projecting the Riemannian gradient. Please check your theoretical/experimental results and code according to this step.\\n\\nWhat is h_i used in the Algorithm 1? Can we suppose that it is the ith component of h?\\n\\nIn step (6) and step (8), do you project individual components of the Riemannian gradient to the product manifold? Since their dimensions are different, how do you perform these projections, since definitions of the projections given on Page 5 cannot be applied? Please check your theoretical/experimental results and code accordingly.\\n\\nPlease define exp_{x^(t)_i}(vi) and Exp(U) more precisely. I suppose that they denote exponential maps.\\n\\nHow do you initialize x^(0) randomly?\\n\\nThe notation is pretty confusing and ambiguous. First, does x belong to an embedded Riemannian manifold P or a point on the graph, which will be embedded? According to equation (2), they are on the graph and they will be embedded. According to Algorithm 1, x^0 belongs to P, which is a Riemannian manifold as defined before. So, if x^(0) belongs to P, then L is already defined from P to R (in input of the Algorithm 1). Thereby, gradient \\\\nabla L(x) is already a Riemannian gradient, not the Euclidean gradient, while you claim that \\\\nabla L(x) is the Euclidean gradient in the text.\\n\\nOverall, Algorithm 1 just performs a projection of Riemannian or Euclidean gradient \\\\nabla L(x) onto a point v_i for each ith individual manifold. 
Then, each v_i is projected back to a point on an individual component of the product manifold by an exponential map. \\n\\nWhat do you mean by \\u201csectional curvature, which is a function of a point p and two directions x; y from p\\u201d? Are x and y not points on a manifold?\\n \\nYou define \\\\xi_G(m;b,c) for curvature estimation for a graph G. However, the goal was to map G to a Riemannian manifold. Then, do you also consider that G is itself a Riemannian manifold, or a submanifold?\\n\\nWhat is P in the statement \\u201cthe components of\nthe points in P\\u201d in Lemma 2?\\n\\nWhat is \\\\epsilon in Lemma 2?\\n\\nHow do you optimize positive w_i, i=1,2,...,n?\\n\\nWhat is the \\u201cgradient descent\\u201d referred to in Lemma 2?\\n\\nPlease provide computational complexity and running time of the methods.\\n\\nPlease define \\\\mathbb{I}_r.\\n\\nAt the third line of the first equation of the proof of Lemma 1, there is no x_2. Is this equation correct?\\n\\nIf at least two of x1, y1, x2 and y2 are linearly dependent, then how does the result of Lemma 1 change?\\n\\nStatements and results given in Lemma 1 are confusing. According to the result, e.g. for K=1, curvature of product manifold of sphere S and Euclidean space E is 1, and that of E and hyperbolic H is 0. Then, could you please explain this result for the product of S, E and H, that is, explain the statement \\u201cThe last case (one negative, one positive space) follows along the same lines.\\u201d? If the curvature of the product manifold is non-negative, then does it mean that the curvature of H is ignored in the computations?\\n\\nWhat is \\\\gamma more precisely? Is it a distribution or density function? If it is, then what does (\\\\gamma+1)/2 denote?\\n\\nThe statements related to use of Algorithm 1 and SGD to optimize equation (2) are confusing. Please explain how you employed them together in detail.\\n\\nCould you please clarify estimation of K_1 and K_2, if they are unknown. 
More precisely, the following statements are not clear:\\n\\n- \\u201cFurthermore, without knowing K1, K2 a priori, an estimate for these curvatures can be found by matching the distribution of sectional curvature from Algorithm 2 to the empirical curvature computed from Algorithm 3. In particular, Algorithm 2 can be used to generate distributions, and K1, K2 can then be found by matching moments.\\u201d Please explain how in more detail? What is matching moments?\\n\\n- \\u201cwe find the distribution via sampling (Algorithm 3) in the calculations for Table 3, before being fed into Algorithm 2 to estimate Ki\\u201d How do you estimate K_1 and K_2 using Algorithm 3?\\n\\n- Please define, \\u201crandom (V)\\u201d, \\u201crandom neighbor m\\u201d and \\u201c\\\\delta_K/s\\u201d used in Algorithm 3 more precisely.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting ideas to explore towards understanding the geometry of data sets\", \"review\": [\"The paper proposes a dimensionality reduction method that embeds data into a product manifold of spherical, Euclidean, and hyperbolic manifolds. The proposed algorithm is based on matching the geodesic distances on the product manifold to graph distances. I find the proposed method quite interesting and think that it might be promising in data analysis problems. Here are a few issues that would be good to clarify:\", \"Could you please formally define K in page 3?\", \"I find the estimation of the signature very interesting. However, I am confused about how the curvature calculation process is (or can be) integrated into the embedding method proposed in Algorithm 1. How exactly does the sectional curvature estimation find use in the current results? Is the \\u201cBest model\\u201d reported in Table 2 determined via the sectional curvature estimation method? If yes, it would be good to see also the Davg and mAP figures of the best model in Table 2 for comparison.\", \"I think it would also be good to compare the results in Table 2 to some standard dimensionality reduction algorithms like ISOMAP, for instance in terms of Davg. Does the proposed approach bring advantage over such algorithms that try to match the distances in the learnt domain with the geodesic distances in the original graph?\", \"As a general comment, my feeling about this paper is that the link between the different contributions does not stand out so clearly. In particular, how are the embedding algorithm in Section 3.1, the signature estimation algorithm in Section 3.2, and the Karcher mean discussed in Section 3.3 related? Can all these ideas find use in an overall representation learning framework?\", \"In the experimental results in page 7, it is argued that the product space does not perform worse than the optimal single constant curvature spaces. 
The figures in the experimental results seem to support this. However, I am wondering whether the complexity of learning the product space should also play a role in deciding in what kind of space the data should be embedded in. In particular, in a setting with limited availability of data samples, I guess the sample error might get too high if one tries to learn a very high dimensional product space.\"], \"typos\": \"\", \"page_3\": \"Note the \\u201canalogy\\u201d to Euclidean products\", \"page_7_and_table_1\": \"I guess \\u201cring of cycles\\u201d should have been \\u201cring of trees\\u201d instead\", \"page_13\": \"Ganea et al formulates \\u201cbasic basic\\u201d machine learning tools \\u2026\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Syxgbh05tQ | Lyapunov-based Safe Policy Optimization | [
"Yinlam Chow",
"Ofir Nachum",
"Mohammad Ghavamzadeh",
"Edgar Guzman-Duenez"
] | In many reinforcement learning applications, it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that do not take the agent to certain undesirable situations. These problems are often formulated as a constrained Markov decision process (CMDP) in which the agent's goal is to optimize its main objective while not violating a number of safety constraints. In this paper, we propose safe policy optimization algorithms that are based on the Lyapunov approach to CMDPs, an approach that has well-established theoretical guarantees in control engineering. We first show how to generate a set of state-dependent Lyapunov constraints from the original CMDP safety constraints. We then propose safe policy gradient algorithms that train a neural network policy using DDPG or PPO, while guaranteeing near-constraint satisfaction at every policy update by projecting either the policy parameter or the action onto the set of feasible solutions induced by the linearized Lyapunov constraints. Unlike the existing (safe) constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Furthermore, the action-projection version of our algorithms often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with CPO and the Lagrangian method on several high-dimensional continuous state and action simulated robot locomotion tasks, in which the agent must satisfy certain safety constraints while minimizing its expected cumulative cost. | [
"Reinforcement Learning",
"Safe Learning",
"Lyapunov Functions",
"Constrained Markov Decision Problems"
] | https://openreview.net/pdf?id=Syxgbh05tQ | https://openreview.net/forum?id=Syxgbh05tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylbWnAgxE",
"rJxbF0F207",
"Skel86Fh0m",
"SJg1Qcri07",
"S1lgvRSc0m",
"B1xu5AMKC7",
"HJxb9lXfC7",
"HylotAzfRX",
"HygKIyvyam",
"Syghhbsp37",
"HkxnHeScnX"
],
"note_type": [
"meta_review",
"comment",
"comment",
"official_review",
"official_comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544772601296,
1543442040948,
1543441736348,
1543358999170,
1543294552156,
1543216784254,
1542758536781,
1542758018931,
1541529424677,
1541415348423,
1541193795744
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1138/Area_Chair1"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1138/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1138/AnonReviewer3"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1138/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1138/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1138/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This is an interesting direction but multiple reviewers had concerns about the amount of novelty in the current work, and given the strong pool of other papers, this didn't quite reach the threshold.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Important topic, limited novelty\"}",
"{\"comment\": \"We thank the reviewer for going over our response and adjusting her/his score accordingly. We are happy that we managed to address some of her/his concerns.\", \"regarding_comparison_with_cpo\": \"Unfortunately, the current version of CPO on github is built in rllab, and no implementation of this algorithm is available outside this setting. As a result, a fair comparison with CPO requires implementing our algorithms in rllab, which is rather difficult and time consuming. This is why we used SPPO (the PPO alternative of CPO) as a replacement for CPO in our experiments. Finally, we would like emphasize that the message of the paper is NOT to advocate our Lyapunov-based algorithms as a replacement for CPO, but rather as an alternative to CPO for solving CMDPs with continuous actions, when safety (not violating the constraints) is critical even during training.\", \"title\": \"Thank you. Please find our additional remarks about CPO\"}",
"{\"comment\": \"We thank the reviewer for useful comments. Below is our response to the reviewer\\u2019s comments.\\n\\n\\\"contribution is incremental\\\"\\nIt is true that the paper is built based on the Lyapunov-function-based approach to CMDPs from Chow et al., 2018, and borrows ideas from Dalal et al., 2018 and CPO (Achiam et al., 2017). However, we believe that extending the setting of Chow et al. 2018 to continuous actions is not straightforward, our improvement over the safety layer idea of Dalal et al. 2018 is significant, and finally we have clear differences with CPO. Below is a detailed comparison of our work with each of these papers. We will make the comparisons and our contributions more clear in the final version of the paper.\\n\\n1) The Lyapunov-function-based algorithms of Chow et al., 2018 were all value-function-based (approximate policy and value iteration), and thus, could not easily handle continuous action CMDPs. In order to handle continuous actions, we had to develop policy gradient and actor-critic type algorithms based on the Lyapunov formulation, which is not a straightforward extension of the results of Chow et al., 2018.\\n2) Dalal et al., 2018 proposed the idea of using a safety layer for constraints, but their results were restricted to constraints that can be expressed locally. We show how this idea can be used to solve more general CMDPs with trajectory-based constraints. This provides an elegant recipe (through the use of Lyapunov functions) to adopt the safety layer concept and solve CMDPs with direct back-propagation.\\n3) Compared to CPO, our Lyapunov-based policy gradient algorithms can be used in the off-policy setting, which makes them more data-efficient, as they can utilize the data from the replay buffer. Moreover, we show in Section 4.1 that CPO can be viewed as a special case of the Lyapunov-based approach. 
From an application standpoint, since all the algorithms proposed in this paper are back-propagatable, it might be more efficient to implement them in TensorFlow and PyTorch, than the original (TRPO-based) CPO, which is not back-propagatable. \\n\\n\\u201cExperiments and comparisons with Lagrangian approach\\u201d\\nIn our experiments, we compare our two safe RL algorithms, one derived from constrained optimization and one from the safety layer idea, with the unconstrained and Lagrangian baselines in four problems: PointGather, AntGather, PointCircle, and HalfCheetahSafe. We perform these experiments with both off-policy (DDPG version) and on-policy (PPO version) versions of the algorithms. \\n\\nIn PointCircle DDPG, although the Lagrangian algorithm significantly outperforms the safe RL algorithms in terms of return, it violates the constraint more often. The only experiment in which Lagrangian performs similarly to the safe algorithms in terms of both return and constraint violation is PointCircle PPO. In other experiments, either 1) the policy learned by Lagrangian has a significantly lower performance than that learned by one of the safe algorithms (see HalfCheetahSafe DDPG, PointGather DDPG, AntGather DDPG), or 2) the Lagrangian method violates the constraint during training, while the safe algorithms do not (see HalfCheetahSafe PPO, PointGather PPO, AntGather PPO). We will make these comparisons more clear in the paper.\\n\\nWe admit that the differences are not quite significant in the standard benchmarks used in our experiments (similar to the experiments in most papers on this topic). It would interesting to evaluate all these algorithms in real problems, for example in robotics, but this is a separate contribution that we leave for future work.\\n\\n\\u201cWhy is jiggling a problem in practice\\u201d\\nJiggling around the threshold means that the algorithm generates policies that violate the constraints during training. 
This might be ok in certain applications, but there are problems in which it would be critical not to violate the constraints even during training. This is one of the major problems of using Lagrangian algorithms to solve CMDPs. As mentioned in the introduction, similar to CPO and Chow et al., 2018, one of our goals is to develop CMDP algorithms that do not violate the constraints during training (or to reduce the violation as much as possible, as achieving this goal is difficult, in particular with function approximations in complex domains). We will make this more clear in the paper.\\n\\n\\u201cGrid-search over initialization Lagrange multiplier\\u201d\\nThe reviewer is right, we did not do a grid-search over the initial Lagrange multiplier. The main reason is that we tried a few values and used heuristics to balance the learning progress of reward and constraint reward, but we did not observe a significant difference in the performance and constraint violation, once learning was stabilized. However, we totally agree that a more systematic search over this parameter would result in a better comparison with the Lagrangian algorithms. \\n\\n\\u201cMagenta versus Teal\\u201d\\nThanks for pointing this out. We will correct this in the paper.\", \"title\": \"Thank you, please find our response to your questions below\"}",
"{\"title\": \"Review\", \"review\": \"The paper generalized the approach for safe RL by Chow et al, leveraging Lyapunov functions to solve constraint MDPs and integrating this approach into policy optimization algorithms that can work with continuous action spaces.\\nThis work derives two classes of safe RL methods based on Lyapunov function, one based on constrained parameter policy optimization (called theta-projection), the other based on a safety layer. These methods are evaluated against Lagrangian-based counter-parts on 3 simple domains. \\n\\nThe proposed Lyapunov-function based approach for policy optimization is certainly appealing and the derived algorithms make sense to me. This paper provides a solid contribution to the safe RL literature though there are two main reasons that dampen the excitement about the presented results:\\nFirst, the contribution is solid but seems to be of somewhat incremental nature to me, combining existing recent techniques by Chow et al (2018) [Lyapunov-function based RL in CMDP], Achiam et al (2017) [CPO with PPO is identical to SPPO] and Dalal et al (2018) [safety layer]. \\nSecond, the experimental results do not seem to show a drastic benefit over the Lagrangian baselines. It is for example unclear to me whether the jiggling of the Lagrange approach around the threshold is a problem is practice. Further, it seems that PPO-Lagrangian can achieve much higher performance in the Point and Gather task while approximately staying under the constraint threshold of 2. Also, as far as I understand, the experiments are based on extensive grid search over hyper-parameters including learning rates and regularization parameters. However, it is not clear why the initialization of the Lagrange multiplier for the Lagrangian baselines was chosen fixed. I would be curious to see the results w.r.t. the best initial multiplier or a comparison without hyper-parameter grid search at all. 
\\n\\nThis is a light review based on a brief read.\", \"minor_note\": \"\", \"in_the_figures\": \"where is the magenta line? I assume magenta labels refer to teal lines?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Appreciate the updates\", \"comment\": \"I appreciate the updates to the work. While some of my concerns remain regarding the comparison with CPO, I think the recent revision is improved and I'll be updating my original rating to reflect this.\"}",
"{\"comment\": \"We thank the reviewer for useful comments. We are glad that the reviewer found the proposed framework interesting as a novel generalization of several existing safe RL algorithms. Below is our response to the reviewer\\u2019s comments. We hope that with the additional results, the reviewer will find the evaluation of the paper stronger and more satisfactory.\\n\\n\\n\\u201cThis seems directly to contrast to the earlier statement which states that it is unclear how to modify the CPO methodology to other RL algorithms\\u201d\\nThe reviewer is absolutely right. We realize that the current wording is not quite accurate and need to be revised. The main message we aim to deliver is that while one can easily transfer ideas from CPO (based on TRPO) to PPO, to the best of our knowledge there are no direct ways to further apply this idea to more general policy gradient algorithms, such as DDPG, in order to use them to solve CMDPs.\", \"we_hereby_updated_the_corresponding_part_of_the_introduction_to\": \"\\u201cWhile it is straightforward to adopt this methodology to PPO for constrained optimization (this is exactly how we derive the SPPO algorithm in our paper), it is unclear how to combine it with algorithms that do not belong to the family of proximal PG algorithms (i.e., PG algorithms that are regularized with relative entropy), such as DDPG.\\u201d\\n\\n\\u201cFair comparison with CPO and without loss of generality?\\u201d\", \"below_is_what_we_think_about_our_empirical_evaluation_and_comparison_with_cpo\": \"The essence of CPO, which is to add a first order constraint in proximal policy optimization, is preserved in the SPPO algorithm. In fact in Section 4.1, we show that it is a special case of the Lyapunov-based approach. In fact, besides using the relative entropy penalty update schedules in PPO, we also ran experiments based on the relative entropy penalty term suggested by TRPO (as shown in Appendix C of the TRPO paper). 
We treated this choice as one of the hyper-parameters in the algorithm (see the response to Reviewer 2 about details on systematic comparisons) and reported the best ones as performance of SPPO. The only difference between CPO and SPPO is that SPPO does not perform backtracking line-search. \\nUnfortunately, the computational complexity of PG is significantly increased with line-search and the corresponding CPO algorithm is not back-propagatable. This is why we decided to test the safety algorithms based on the popular PPO algorithm (which belongs to the family of proximal PG algorithms) instead of TRPO, without losing the essence of the ideas behind CPO.\\nWe did try to reimplement the original CPO algorithm, but we did not obtain the results reported in the CPO paper. We also ran into trouble understanding which part of the original CPO implementation in rllab contributes to the difference, and thus, found their comparisons difficult with the safe RL algorithms, which are not implemented in rllab.\\nWe think that the CPO modification from TRPO-based to PPO-based algorithms is indeed needed in our experiments, because otherwise their comparison with safe DDPG-based algorithms and unconstrained PG algorithms with safety layer (see Section 4.2), which do not perform backtracking line-search, may be unfair. \\n\\n\\nWhile we think our comparison with CPO is valid, we agree with the reviewer that the claim of \\u201cwithout loss of generality\\u201d is too strong. Given the current results, we view the Lyapunov safe RL algorithms as an alternative to CPO and will make sure that we deliver this message in our paper. We summarized the above points and modified the sentence of justifying the switch from TRPO to PPO as follows:\\n\\u201cInstead of the original CPO algorithm, we use its PPO alternative (which coincides with the SPPO algorithm derived in Section 4.1) as the safe RL baseline for comparison. 
SPPO preserves the essence of CPO by adding a 1st order constraint to the proximal policy optimization. The main difference between CPO and SPPO is that the latter does not perform backtracking line-search. The decision to compare with SPPO instead of CPO is 1) to avoid the additional computational complexity of line-search in TRPO, while maintaining the performance of PG using the popular PPO algorithm, 2) to have a back-propagatable version of CPO, and 3) to have a fair comparison with other back-propagatable safe RL algorithms, such as the DDPG and safety layer counterparts.\\u201c\\n\\n\\\"Experiments, Figure 3, 10 Random seeds, statistical significance, and variance\\u201d\\nThank you for the suggestions to improve the readability of results. We updated all figures in the paper, added DDPG for PointCircle, added confidence intervals (over 10 random seeds) to the learning curves, and did log-transformation to the y-label of Figure 3. Although we haven't run all the experiments from the CPO paper, we did increase the difficulty of some tasks (e.g., AntGather) and try both DDPG and PPO safe RL algorithms in each domain for comprehensive comparisons. The results still deliver similar message as in the old revision.\", \"title\": \"Thank you, our response regarding comparison with CPO\"}",
"{\"comment\": \"We thank the reviewer for the useful comments. Below is our response to the reviewer\\u2019s main comments. \\n\\n\\u201cQuestionable Contributions\\u201d\\nDue to the space limit, please see the list of our contributions in our response to Reviewer 1. \\n\\n\\u201cComparison with inference of Markov Decision Processes under constraints\\u201d\\nFor solving CMDPs, Chow et al, 2018 provided a good literature survey on existing methods. Specifically the Lyapunov-based safe RL algorithms are hinged on the ``primal method\\u2019\\u2019, which aims to learn the value and Lyapunov functions. On the other hand, there is also the ``dual method\\u2019\\u2019 that learns the occupation measure. However, this algorithm requires the knowledge of the dynamics and the LP-based algorithm is limited to solving CMDP problems with finite state and action spaces, which is different than what we are interested in here. \\n\\nRegarding the inference perspective of MDPs, we looked into some of the standard formulations. While this approach may resemble the spirit of learning occupation measures, it appears that the MDP problems studied in the literature are mostly unconstrained (for example, see https://ipvs.informatik.uni-stuttgart.de/mlr/marc/publications/06-toussaint-ICML.pdf). Furthermore, they usually derive stochastic optimal control algorithms by assuming Gaussian noise and linear dynamics. It would helpful if the reviewer mention the references (s)he has in mind that we can look into and cite them.\\n\\n\\u201cCreating a Lyapunov function\\u201d\\nWe agree with the reviewer that the choice of the Lyapunov function is quite important. This is why we learn the Lyapunov function in our algorithms and update it as the algorithms progress. The reviewer\\u2019s comment regarding the 2nd derivative of the Lyapunov function is unclear to us. 
Similar to the standard policy gradient algorithms (DDPG, PPO), we only use the first order information of the objective and Lyapunov functions in all the safe RL algorithms used in the paper. To make sure that the first order Taylor series expansions are good enough approximations, similar to the trust region method (such as TRPO), we restrict the local policy update using a quadratic constraint that is constructed via second-order gradient of the relative entropy term. But this has nothing to do with the Lyapunov and Q functions. \\n\\n\\u201cComplex nonlinearities and instability\\u201d\\nIn this work, we mainly focus on the derivation of model-free RL algorithms with safety guarantees (w.r.t. CMDP constraints), especially when the action space is continuous. To guarantee constraint satisfaction and safety during training (which may restrict the exploration to be conservative), we leverage the recent results of Lyapunov functions from Chow et al (2018) and extend them to policy gradient methods. Although Lyapunov functions are used here, they are used not for stability but for bounding the constraint performance of an MDP. A similar work is https://pdfs.semanticscholar.org/9b24/d6a26526d9a02168432988060ba6721ff926.pdf. While stability of nonlinear dynamics is an important safety criterion, it is NOT the focus of this paper. That being said, we believe RL with stability guarantees is an interesting future direction, especially in model-based RL.\\n\\n\\u201cthe actual tasks chosen are quite simple\\u201d\\nWe agree that the chosen tasks are simple control problems, but indeed they are among the standard RL benchmarks and have been used by similar papers, including CPO (with which we would like to make comparisons). While these tasks (besides HalfCheetah) might not have complex instabilities in the dynamics, our major focus in the experiments is to evaluate the proposed safety algorithms in terms of the return maximization and constraint satisfaction during training. 
To handle more complicated/realistic control tasks, we speculate model-based RL algorithms might be more suitable. We leave this important research direction for future work.\\n\\n\\u201cComparison are not systematically explored by the paper.\\u201d\\nIn all numerical experiments and for each algorithm (SPPO, SDDPG, SPPO-modular, SDDPG-modular, CPO, Lagrangian, and the unconstrained PG counterparts), we systematically explored different settings by doing grid-search over the following factors: (i) learning rates in the actor-critic algorithm, (ii) batch size, (iii) regularization parameters of the policy relative entropy term, (iv) with-or-without natural policy gradient updates, (v) with-or-without the emergency safeguard PG updates (see Appendix A for more details). Although each algorithm might have a different parameter setting that leads to the optimal performance in training, the results reported in the paper are the best ones for each algorithm, chosen by the same criteria (which is based on value of return + degree of constraint satisfaction). We also add this description in Appendix C for clarification. Please also check the updated numerical results in the revised paper.\", \"title\": \"Thank you, please find our response to your questions below\"}",
"{\"comment\": \"We thank the reviewer for the useful comments. We are glad that the reviewer found the paper interesting and the experimental evaluation convincing. We appreciate the comments regarding clarity of the writing - we will fix the typos. We have also updated the paper to appropriately introduce the acronyms (DDPG, PPO, PG, CPO). Please see the updated version of the paper.\\n\\n\\u201cIncremental advances\\u201d\\nRegarding the contributions of this paper, the main objective here is to present a general and unified method for deriving safe RL algorithms in problems with continuous actions. While we base our approach on ideas from Lyapunov theory (previously applied to discrete-control problems in Chow et al., 2018), we believe our theoretical and empirical results to be significant for several reasons:\\n1) The focus of Chow et al, 2018 is to derive value-based safe RL algorithms which are more suitable for solving problems with discrete action spaces. In general, it is unclear how to apply the same techniques to continuous control problems, which are more common in robotics applications. This is the main issue we address in our paper.\\n2) Compared to CPO algorithm (Achiam et al 2017), our work can be applied to the off-policy setting. Therefore, our Lyapunov-based policy gradient algorithms are more data-efficient, as one can utilize data from replay buffer. Furthermore, from the derivations in Section 4.1, it would be possible to view CPO as a special case of Lyapunov-based PG, which is an interesting result by itself.\\n3) Compared to Dalal et al 2018, which (to our knowledge) is the first work to propose the idea of a safety layer, our work is more general. While they focus on constraints that can only be expressed locally, we solve the more general CMDP problem (with trajectory-based constraints). 
Our paper provides an elegant recipe (through the use of Lyapunov functions) to adopt the safety layer concept to solve safe RL problems with general trajectory-based constraints using direct backpropagation.\", \"title\": \"Thank you\"}",
"{\"title\": \"entirely reasonable paper, but novelty is unclear, empirical verification incomplete\", \"review\": \"In this paper, authors compare different ways to enforce stability constraints on trajectories in dynamic RL problems. It builds on a recent approach by Achiam et al on Constrained Policy Optimization (oft-mentioned \\\"CPO\\\") and an accepted NIPS paper by Chow which introduces Lyapunov constraints as an alternative method. While this approach is reasonable indeed, the novelty of the approach is questionable, not only in light of recent papers but also of older literature: inference of Markov Decision Processes under constraints is referred to and has been known for a long time. Furthermore, the actual tasks chosen are quite simple and do not present complex instabilities. Also, actually creating a Lyapunov function and weighing the relative magnitude of its second derivative (steep/shallow) is not trivial and must influence the behavior of the optimizer. Also worth mentioning that complex nonlinearities might imply that instabilities in the observed dynamics are not seen and learned unless the space exploration is conservative. That is, comparison of CPO and Lagrangian constraint-based RL with the proposed Lyapunov-based method depends on a lot of factors (such as those just mentioned) that are not systematically explored by the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Incremental, but quite solid\", \"review\": \"In this paper, authors propose safe policy optimization algorithms based on the Lyapunov approach to constrained Markov decision processes.\\nThe paper is very well written (a few typos here and there, please revise) and structured, and, to the best of my knowledge, it is technically sound and very detailed.\\nIt provides incremental advances, mostly from Chow et al., 2018.\\nIt fairly accounts for recent literature in the field.\\nExperimental settings and results are fairly convincing.\", \"minor_issues\": \"Authors should not use not-previously-described acronyms (as in the abstract: DDPG, PPO, PG, CPO)\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"To my understanding, this paper builds on prior work from Chow et al. to apply Lyapunov-based safe optimization to the policy-gradient setting. This seems similar to work by Achiam 2017. While this work seems like an interesting framework for encompassing several classes of constrained policy optimization settings in the Lyapunov-based setting, I have some concerns about the evaluation methodology. \\n\\nIt is claimed that the paper compares against \\u201ctwo baselines, CPO and the Lagrangian method, on several robot locomotion tasks, in which the agent must satisfy certain safety constraints while minimizing its expected cumulative cost.\\u201d Then it is stated in the experimental section \\u201cHowever since backtracking line-search in TRPO can be computationally expensive, and it may lead to conservative policy updates, without loss of generality we adopt the original construction of CPO to create a PPO counterpart of CPO (which coincides with SPPO) and use that as our baseline.\\u201d This seems to directly contrast with the earlier statement, which states that it is unclear how to modify the CPO methodology to other RL algorithms. Moreover, is this really a fair comparison? The original method has been modified to form a new baseline and I\\u2019m not sure that it is \\u201cwithout loss of generality\\u201d. \\n\\nAlso, it is unclear whether the results can be accepted at face value. Are these averaged across several random seeds and trials? Will performance hold across them? What would be the variance? Recent work has shown that taking 1 run especially in MuJoCo environments doesn\\u2019t necessarily provide statistically significant values. In fact the original CPO paper shows the standard deviations across several random seeds and compares directly against an earlier work in this way (PDO). 
Moreover, it is unclear why CPO was not directly compared against, nor was the \"equivalent\" baseline compared on similar environments as in the original CPO paper.\", \"comments\": \"Figure 3 is difficult to parse; the ends of the graphs are cut off. Maybe putting the y axis into log format would help with readability here or having the metrics be in a table.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BkxgbhCqtQ | Predictive Uncertainty through Quantization | [
"Bastiaan S. Veeling",
"Rianne van den Berg",
"Max Welling"
] | High-risk domains require reliable confidence estimates from predictive models.
Deep latent variable models provide these, but suffer from the rigid variational distributions used for tractable inference, which err on the side of overconfidence.
We propose Stochastic Quantized Activation Distributions (SQUAD), which imposes a flexible yet tractable distribution over discretized latent variables.
The proposed method is scalable, self-normalizing and sample efficient. We demonstrate that the model fully utilizes the flexible distribution, learns interesting non-linearities, and provides predictive uncertainty of competitive quality.
| [
"variational inference",
"information bottleneck",
"bayesian deep learning",
"latent variable models",
"amortized variational inference",
"uncertainty",
"learning non-linearities"
] | https://openreview.net/pdf?id=BkxgbhCqtQ | https://openreview.net/forum?id=BkxgbhCqtQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkxNE3NZgN",
"BJengAhS0Q",
"S1gCAa2SR7",
"B1xq262SRm",
"HJl5Z4Kf67",
"Sye-PQNA2m",
"S1g8up0dhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544797227660,
1542995443902,
1542995413846,
1542995377904,
1541735425536,
1541452633411,
1541102958092
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1137/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1137/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1137/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1137/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1137/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1137/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1137/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers agree this paper is not good enough for ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not acceptable for ICLR in current form\"}",
"{\"title\": \"Response for reviewer 2\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you for your time, comments and suggestions! We think that you'll find your concerns appropriately addressed, and we are looking forward to hearing your thoughts. I'll address them line-by-line below:\\n\\n\\\"1. The paper proposes a generic discrete distribution as the variational distribution to run inference for a wide range of models.\\\"\\nThe model we propose is a bit different from leveraging generic discrete distributions. It's not comparable with a categorical latent variable model for example, but rather closer to a mixture-of-diracs, which to the best of our knowledge has not been applied in this context yet.\\n\\n\\\" Even if the model is able to do a good job in approximating marginal distributions, it is hard to evaluate whether the model is gaining benefit overall.\\\"\\nTo make the performance of our model more convincing, we have extended the result tables with accuracy and negative log-likelihood metrics.\\n\\n\\\"2. I don't see a strong reason for using discrete distributions. In one dimensional space, a distribution can be approximated in different ways. Using discrete distributions only increases the difficulty of reparameterization.\\\"\\nApproximate distributions with high expressiveness, even in the mean-field scenario, are still an open field of research. We explore the avenue of using quantization of the continuous real line to allow a more flexible distribution different from conventional exponential family distributions used in the field, and show not only that it is possible to train such a model with competitive performance, but that the expressiveness is used.\\n\\n\\\"3. In the experiment evaluation, the algorithm seems to only marginally outperform competing methods.\\\"\\nOur goal is not to reach state of the art results, but to show a novel model that provides a variety of benefits over existing models. 
We believe the community benefits from exploring a broad range of models, rather than just pursuing the single angle that currently seems to lead to the strongest performance. There is still room for improvement in this domain, but we believe that the current results are promising enough to communicate to the community.\\n\\n\\\"In the motivation of the paper, it cites low-precision neural networks. However, low-precision networks are for a different purpose -- small model size and saving energy.\\\"\\nI believe that low precision networks have a variety of use-cases, not just small model size. Previous work has shown that low precision networks can outperform their high precision counterparts. Low precision can also serve as an information bottleneck which potentially forces the model to throw away nuisance factors. In our case, we use quantization to make our proposed method tractable, and show that this does not degrade performance.\\n\\n\\\"equation 6 is not clear to me.\\\" \\nWe have improved the clarity of section 3; does this clear things up for you?\\n\\n\\n\\\"In equation 10, how are these conditional probabilities parameterized? Is it like: z ~ Bernoulli( sigmoid(wz) ) ? \\\"\\nThe individual $p(z_l|.)$ are parametrized as in equation 4 in the updated paper.\\n\\n\\n\\\"It would be nice to have a brief introduction of the evaluation measure SGR.\\\"\\nAgreed, we have extended this introduction and have provided some intuition on its use.\\n\\n\\\"In table 3, 1st column, the third value seems to be the largest, but the fourth is bolded.\\\"\\nThank you for pointing this out; we have since updated the results and reformatted the table.\"}",
"{\"title\": \"Response for reviewer 1\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for your time and comments; they have helped us further clarify our paper. I believe you might find that your valid concerns have been addressed, and we hope to hear your thoughts on these improvements! I will address your comments line by line below:\\n\\n\\\"The proposed approach relies on optimizing an information bottleneck objective instead of the ELBO.\\\"\\nAs an aside, deriving the ELBO using Jensen's inequality on a conditional log-likelihood $\\\\log p(y|x) = \\\\int_z p(y|z) p(z|x) dz $ leads to a bound $L = E_{q(z|x)} [ \\\\log p(y|z) ] - KL(q(z|x) || p(z|x))$. As $p(z|x)$ is not known, this bound cannot be trivially optimized. Hence the focus on the information bottleneck objective, which is a more natural motivation for the bound we optimize. From this perspective, the $q_\\\\phi(z|x)$ normally found in the ELBO is now replaced with the true posterior $p_\\\\theta(z|x)$ which is learned directly. This shift in what is assumed to be the variational approximation is somewhat mysterious, and I believe this calls for further study, but perhaps not in this work.\\n\\n\\\"While the approach is of interest, a number of questions, central to the work, remain. For example, it is not clear how parameter \\\\beta is chosen/optimised, how the number of bins C is chosen and how the annealing scheme is tuned.\\\"\\nWe use an extensive hyperparameter optimization scheme using TPE, having run tens of thousands of experiments with different hyperparameter configurations for both our model and baselines. We briefly described this in section 5's penultimate paragraph due to space constraints, but we have extended this description in the paper. We believe this is about as rigorous as comparisons between models can get. 
\\n\\n\\\"The authors do not discuss the quantization parameters, such as bin size and location\\\"\\nWe discuss this in figure 5, and use the hyperparameter optimization scheme to find the optimal configuration.\\n\\n\\\"Then the authors propose to use a hierarchical set of latent variables without properly justifying the need, nor discuss how to select the depth and its impact on the performance.\\\" \\nThis is a fair point. We were under the impression that using a hierarchical set of latent variables to create a deep latent variable model was a commonly accepted approach to improve models. We have done some initial experimentation where we found that stacking these latent variable layers improves performance, but we believe that to add a rigorous experiment, under the expensive hyperparameter optimization scheme used in the rest of the paper, would lead to diminishing returns. \\n\\n\\\"Finally the authors propose yet another extension based on a matrix-factorization with little justification.\\\"\\nWe elaborated on the motivation for this extension in section 5.2. We have moved some of the argumentation further up in the hopes of clarifying this extension.\\n\\n\\\"Overall, this paper does not fully develop the ideas proposed in the paper or discuss them in sufficient detail. The experiments do not provide additional intuition on what's going on and why this helps and are insufficiently documented/made accessible to be convincing. \\\"\\nWe have strived to clarify the experiments, and have extended them with more conventional metric results on negative log-likelihood and accuracy, where we show convincing performance as well.\\n\\n\\\"For example, I am not sure what to conclude from experiments that rely on no (or \\\"light\\\") hyperparameter tuning\\\"\\nThanks for pointing this out; we have adjusted the wording to clarify. 
In summary: the notMNIST experiment uses the hyperparameters found on fashionMNIST to evaluate if these hyperparameters are simply overfitted on the dataset specifics, and we find that this is not the case. The SVHN 'light' hyperparameter optimization uses a smaller range of hyperparameter values informed by the findings on the fashionMNIST experiment. \\n\\n\\\"More importantly, the initial claim that uncertainty is better captured relies on SGR, a metric which is not standard and mentioned in passing without being properly defined\\\"\\nTo address this, we have both included NLL as an extra metric to show the effective uncertainty estimation, and elaborated on the details of SGR and why we use this method.\\n\\n\\\"Finally, the presentation of Section 3 could be significantly improved.\\\"\\nGreat points; we have incorporated these in the updated paper. Please take a look and let me know what you think.\"}",
"{\"title\": \"Thoughts for Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for your review! Glad to hear you find our direction interesting and we are grateful for your feedback. I am happy to tell you that most of your concerns are addressed in the (updated) paper, and we have strived to improve the clarity of the writing to make that clear. I'll address your comments line-by-line below; we look forward to hearing your thoughts!\\n\\n\\\"While the topic is interesting, the work could improve by making more precise the benefit of (relaxed) discrete random variables.\\\"\\nThe benefits are tremendous! As stated in the paper, by quantizing the domain of the individual activations and using a categorical distribution over the bins, we can now learn non-linearities, do not require any batch normalization and fit any mean-field distribution under the quantization scheme, all in a tractable manner.\\n\\n\\\"compare to any structured distribution such as a flow)\\\"\\nA fair point, but as far as I know, normalizing flows have not been used in the context of feed-forward prediction, and are rather expensive. We have instead chosen to compare with baselines that match the computational and application complexity of our model. This would definitely be interesting to study in further work.\\n\\n\\\" if multimodality is the issue, compare to a mixture model\\\"\\nApplying a mixture-of-Gaussians posterior to the Information Bottleneck objective is non-trivial. It is out of scope to make a fair comparison without previous work paving the way for determining an effective configuration. There are no analytic solutions for the KL between two mixtures of Gaussians, and sampling from the mixture is non-trivial too. 
We would be interested to see how our model compares against a mixture-of-Gaussians posterior, but this is more suitable for future work, rather than as a baseline comparison in this paper.\\n\\n\\\"They have a number of hyperparameters that make it difficult to compare without a more rigorous sensitivity analysis (e.g., bin size).\\\"\\nTo account for the effect of these hyperparameters, we have performed an extensive hyperparameter optimization scheme on both our models and the baselines. This is part of the reason why the extra baselines you request are a rather non-trivial extension of this paper. A sensitivity analysis of the hyperparameters was planned but did not make the page-limit cut. We believe that the experiment presented in table 2 -- using the hyperparameters found on fashionMNIST on notMNIST -- instead presents more convincing evidence that the method is robust. Figure 10 in the appendix provides further insight into the pairwise effect of hyperparameters on the coverage.\\n\\n\\\"the resulting network ends up looking a lot like continuous values but now constrained under a simplex rather than the real line\\\"\\nThe probability distribution of the values is indeed constrained under a simplex, but the actual latent variable values are constrained by the quantization scheme imposed by the prior. The values v assigned to the 'categories', so to say, are thus ordinal and on the real line. Quantization schemes are inherent to any computational model; we simply increase the quantization noise and incur a small bias penalty of the Gumbel softmax scheme. In return we get a much more expressive distribution under the mean-field constraint.\\n\\n\\\"Given that the number of bins they use is only 11, I\\u2019m also unclear on what the matrix factorization approach benefits from. 
Is this experimented with and without?\\\"\\nIn section 5.2 we explain that for the SVHN dataset we explore a larger number of bins, with the optimization scheme finding C=37 with a factorization factor of B=4 as the optimum. We compare against the non-factorized version as shown in table 3. The factorization scheme allows for a higher fidelity of the latent variables, whilst reducing the number of parameters required per latent variable.\"}",
"{\"title\": \"Interesting topic but contributions are not well-motivated\", \"review\": \"The authors propose \\u201cStochastic Quantized Activation Distributions\\u201d (SQUAD). It quantizes the continuous values of a network activation under a finite number of discrete (non-ordinal) values, and is distributed according to a Gumbel-Softmax distribution. While the topic is interesting, the work could improve by making more precise the benefit of (relaxed) discrete random variables. This will also allow the authors to more precisely display in the experiments why this particular approach is more natural than other baselines (e.g., if multimodality is the issue, compare to a mixture model; if correlation is a difficulty, compare to any structured distribution such as a flow).\\n\\nDerivation-wise, the method ends up resembling Gumbel-Softmax VAEs but under an information bottleneck (discriminative model) setup rather than under a generative model. Unfortunately, that in and of itself is not original. \\n\\nThe idea of quantizing a continuous distribution over activations using a multinomial is interesting. However, by ultimately adding Gumbel noise (and requiring a binning procedure), the resulting network ends up looking a lot like continuous values but now constrained under a simplex rather than the real line. Given either the model bias against a true Categorical latent variable, or continuous simplex-valued codes, it seems more natural as a baseline to compare against a mixture of Gaussians. They have a number of hyperparameters that make it difficult to compare without a more rigorous sensitivity analysis (e.g., bin size).\\n\\nGiven that the number of bins they use is only 11, I\\u2019m also unclear on what the matrix factorization approach benefits from. Is this experimented with and without?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Too many moving parts\", \"review\": \"The authors consider uncertainty estimation in deep latent variable models. They propose to use quantised latent variables and argue that this solves the overconfidence problem, commonly encountered in variational inference. The proposed approach relies on optimizing an information bottleneck objective instead of the ELBO.\\n\\nWhile the approach is of interest, a number of questions, central to the work, remain. For example, it is not clear how parameter \\\\beta is chosen/optimised, how the number of bins C is chosen and how the annealing scheme is tuned. The authors do not discuss the quantisation parameters, such as bin size and location, which are likely to have a major effect on the performance (and the complexity). Then the authors propose to use a hierarchical set of latent variables without properly justifying the need, nor discuss how to select the depth and its impact on the performance. Finally the authors propose yet another extension based on a matrix-factorization with little justification.\\n\\nOverall, this paper does not fully develop the ideas proposed in the paper or discuss them in sufficient detail. The experiments do not provide additional intuition on what's going on and why this helps and are insufficiently documented/made accessible to be convincing. For example, I am not sure what to conclude from experiments that rely on no (or \\\"light\\\") hyperparameter tuning, when the proposed method has many, and no discussion is provided about how to set them or how sensitive results are to their actual value. More importantly, the initial claim that uncertainty is better captured relies on SGR, a metric which is not standard and mentioned in passing without being properly defined. The evaluation further depends on a \\\"selective classifier\\\" which is not detailed, but critical to understanding the experiments.\\n\\nFinally, the presentation of Section 3 could be significantly improved. 
For example, I would suggest distinguishing the neural network parameters of the encoder and the decoder as well as the encoder and decoder networks. I would also refrain from using notations like \"...\" and always specify what is left and right of an equality. Please spell out all abbreviations at least once in the paper and define all important quantities and concepts.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Variational inference with discrete distributions for uncertainty estimation\", \"review\": \"This paper proposes running variational inference with discrete mean-field distributions. The paper claims the proposed method is able to give a better estimation of uncertainty from the model.\\n\\nRating of the paper in different aspects (out of 10)\\nQuality 6, clarity 5, originality 8, significance of this work 5\", \"pros\": \"1. The paper proposes a generic discrete distribution as the variational distribution to run inference for a wide range of models.\", \"cons\": \"1. When the method begins to use mean-field distributions, it begins to lose fidelity in approximating the posterior distributions. Even if the model is able to do a good job in approximating marginal distributions, it is hard to evaluate whether the model is gaining benefit overall. \\n\\n2. I don't see a strong reason for using discrete distributions. In one dimensional space, a distribution can be approximated in different ways. Using discrete distributions only increases the difficulty of reparameterization. \\n\\n3. In the experiment evaluation, the algorithm seems to only marginally outperform competing methods.\", \"detailed_comments\": \"In the motivation of the paper, it cites low-precision neural networks. However, low-precision networks are for a different purpose -- small model size and saving energy. \\n\\nequation 6 is not clear to me.\\n\\nIn equation 10, how are these conditional probabilities parameterized? Is it like: z ~ Bernoulli( sigmoid(wz) ) ?\\n\\nIt would be nice to have a brief introduction of the evaluation measure SGR. \\n\\nIn table 3, 1st column, the third value seems to be the largest, but the fourth is bolded.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Hygxb2CqKm | Stable Recurrent Models | [
"John Miller",
"Moritz Hardt"
] | Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks. In this work, we conduct a thorough investigation of stable recurrent models. Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent. Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks. Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime. Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models by feed-forward models.
| [
"stability",
"gradient descent",
"non-convex optimization",
"recurrent neural networks"
] | https://openreview.net/pdf?id=Hygxb2CqKm | https://openreview.net/forum?id=Hygxb2CqKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xscrn0b4",
"HylIggEFW4",
"SJgGCBrZx4",
"Ske9haIsRX",
"ByeBXzI5AQ",
"r1gN66Jqam",
"rkgdKTUw6X",
"BJeX698vpQ",
"SkgZ4aMPTm",
"SJxBPyJHam",
"SJeu1Uh7aQ",
"rkeKirnQp7",
"HJe5FH37pX",
"HJeGRbFZT7",
"r1xhUKOThQ",
"rylX3rB_27"
],
"note_type": [
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1546728850998,
1546366958026,
1544799689747,
1543364018383,
1543295517017,
1542221243708,
1542053248286,
1542052538562,
1542036777070,
1541889885195,
1541813728349,
1541813665285,
1541813634059,
1541669321524,
1541405012088,
1541064107052
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1136/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1136/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1136/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1136/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1136/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1136/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1136/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1136/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your interest in our paper and for highlighting additional related work. We will incorporate these references into the final version of the paper.\"}",
"{\"comment\": \"I like the way the problem of RNN stability is tackled and the feasibility of replacing RNNs with feed-forward networks is demonstrated in this paper.\\n\\nThere are a couple of papers on stabilizing RNN training which were published in ICML 2018 and NeurIPS 2018 and which could be included in the related work.\\n\\n1) Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization - (Zhang et al, ICML 2018)\\n2) Kronecker Recurrent Units - (Jose et al, ICML 2018)\\n3) FastGRNN: A Fast, Accurate, Stable and Tiny Kilobyte Sized Gated Recurrent Neural Network - (Kusupati et al, NeurIPS 2018)\\n\\nAlso, in ICML 2017, along with (Vorontsov et al.) there were two more papers dealing with stabilization of RNNs: \\n1) Efficient orthogonal parametrisation of recurrent neural networks using Householder reflections - (Mhammedi et al., ICML 2017)\\n2) Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs - (Jing et al., ICML 2017).\\n\\nIt would be great if the authors could add these to the camera-ready version of the paper to make their related-work coverage more comprehensive and complete.\\n\\nThanks.\", \"title\": \"Interesting take on RNN stability and needs small updates in Related Work\"}",
"{\"metareview\": \"The paper presents both theoretical analysis (based upon lambda-stability) and experimental evidence on stability of recurrent neural networks. The results are convincing but concern a restricted definition of stability. Even with this restriction, acceptance is recommended.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Important topic, favorable reviews but are the stated implications general?\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the clarification and fixing the notations in Theorem 1. I think the discussion of unitary RNN models makes the paper more well-rounded. I hope this work will inspire more research in this direction in the future and help us understand the dynamics of recurrent networks. I would like to keep my rating.\"}",
"{\"title\": \"Well-written, thorough responses\", \"comment\": \"I don't have much to add to the thorough discussion below. I was already in the \\\"accept\\\" camp, and I remain there. I will confer with the other reviewers and consider a revised score.\"}",
"{\"title\": \"Revision to paper\", \"comment\": [\"Thank you for your response. We have updated the paper to reflect our discussion. In particular,\", \"we make clear the sufficient stability conditions are only new in the case of the LSTM and appropriately cite Jin et al. for the 1-layer RNN\", \"we added a discussion around the relationship between stability and data-dependent stability\", \"we clarify our notion of \\\"equivalence\\\" is only in terms of the context required to make predictions and not, e.g., in terms of number of parameters or some other measure, and added further discussion of this distinction to Section 5.\", \"We're happy to address any additional concerns with the current presentation.\"]}",
"{\"title\": \"Reasonable response\", \"comment\": \"Appreciate your response. I am willing to upgrade the rating if the authors can tone down the theoretical claims.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your prompt response. We address these concerns in turn.\", \"gap_between_stability_conditions\": \"The data-dependent condition is a strict relaxation of the Lipschitz condition. Two additional comments are in order.\\n1) Stability is a clarifying concept. The Lipschitz condition is clean and allows us to understand the core phenomena associated with stability. The data-dependent definition is a useful diagnostic-- when our sufficient (Lipschitz) stability conditions fail to hold, the data-dependent condition addresses whether the model is still operating in the stable regime. \\n2) In many cases, we can still prove results with the data-dependent guarantee.\\n--If the input representation is fixed, then all of the proofs go through with the data-dependent condition. If S is the set of inputs from the data distribution, we can simply replace all instances of \\u201cfor all x\\u201d with \\u201cfor all x in S\\u201d. This is the case with polyphonic music modeling. \\n--When the input is not fixed (e.g. word vectors that are updated during training), the proofs go through provided S is interpreted as \\u201call word vectors generated during training.\\u201d\\n\\nIn section 2.2, the subscript t is dropped because the Lipschitz definition of stability (eq 2) must hold for all x.\", \"theoretical_contribution\": \"Our main theoretical contribution is feed-forward approximation of stable recurrent models, especially Proposition 3 and Theorem 1. The results in section 2.2 give concrete examples of our general stability definition. For a 1-layer RNN, the cited paper [1] gives similar stability conditions. However, [1] does not touch on the question of feed-forward approximation, particularly approximation during training, nor does it mention LSTMs. We will add the appropriate citation, but note the RNN stability conditions are a routine one-line calculation and far from our main technical contribution.\", \"equilibrium_states\": \"We only claim equivalence between *stable* RNNs and feed-forward networks. In stable RNNs, all trajectories converge to an equilibrium state. Certainly, general (unstable) RNNs cannot be approximated with feed-forward networks. Understanding to what extent models trained in practice are stable or can be made stable is then an empirical question, and we address this question in Section 4. \\n\\nImplementing truncated models as feed-forward networks increases the number of weights by a factor of $k$. This increase is an artifact of our analysis, and it is an interesting open question to find more parsimonious approximations. From a memory perspective, a feed-forward network with more weights is still a feed-forward network, and our result establishes stable recurrent models cannot have more memory than feed-forward models.\"}",
"{\"title\": \"significance of the theoretical claim\", \"comment\": [\"there is a gap between 'Lipschitz' and 'data-dependent' stability. why is that? In the proof of Section 2.2, in order to satisfy the contractive mapping condition, input data x does not have subscript t, can you justify?\", \"the global stability property for one-layer RNN based on the Lipschitz condition of the activation function is a known result (e.g.[1]). what is the new contribution here?\", \"Jin, Liang, Peter N. Nikiforuk, and Madan M. Gupta. \\\"Absolute stability conditions for discrete-time recurrent neural networks.\\\" IEEE Transactions on Neural Networks 5.6 (1994): 954-964.\", \"The equivalence between RNN and feedforward networks is at the equilibrium state. But how about non-equilibrium states? and the number of weights? It is misleading to claim the two to be equivalent.\"]}",
"{\"title\": \"Thanks!\", \"comment\": \"Thank you for the prompt and thoughtful response. I wanted to let you know that I have read it (and your other responses) and am thinking about follow-up questions. Expect me to reply by mid-next week.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your detailed comments and feedback. We have incorporated some of these suggestions into a revision of the paper. We discuss your concerns below.\", \"motivation_of_stable_models\": \"\", \"there_are_two_reasons_to_consider_stability_in_recurrent_models\": \"1) Stability is a natural criterion for learnability in recurrent models. Outside the stable regime, learning recurrent models requires a delicate mix of heuristics. Studying stable models addresses whether this collection of tricks is actually necessary, and our results suggest a better-behaved model class can solve many of the same problems. \\n\\n2) Understanding whether models trained in practice are in the stable regime helps answer when recurrent models are truly necessary. As the reviewer noted, whether the stable model is \\u201cdesirable\\u201d depends on experimentation. However, when a stable model achieves similar performance with an unstable model, the conclusion is a feed-forward network suffices to solve the task. We demonstrate sequence learning happens in the stable regime, and this helps explain the widespread success of feed-forward models on sequence problems.\", \"vanishing_gradients\": \"Stable recurrent models always have vanishing gradients, and vanishing gradients are an important part of proving our approximation results. However, vanishing gradients are not unique to stable models. In the updated version of the paper, we show unstable language models also exhibit vanishing gradients. This corroborates the evidence in section 4.3 showing these models operate in the stable regime.\\n\\nThe cited unitary RNN models may help reduce vanishing gradients. Even in these works, there is still gradient decay over time (e.g. Figure 4, ii in [1]), but the rate of decay is slower. The updated version of the paper includes a brief discussion of these works. At minimum, these models have not yet seen widespread use, and our work demonstrates models frequently trained in practice are either stable or can be made stable without performance loss.\", \"empirical_study_of_the_difference_between_recurrent_and_truncated_models\": \"In the revision, we added experiments studying truncation in the unstable models and also show unstable models satisfy a qualitative version of Theorem 1. All of the models considered, including the LSTM language models, exhibit sharply diminishing returns to larger values of the truncation parameter. As predicted by theorem 1, the difference between the truncated and full recurrent matrix during training becomes small for moderate values of the truncation parameter.\", \"comparison_between_stable_and_unstable_models\": \"We disagree with the interpretation of Table 1. Except for the LSTM language models, the variation in performance between stable and unstable models is within standard-error. We do not retune the hyperparameters when imposing stability, and the near equivalence of the results is evidence the unstable models do not offer a large performance boost. For the LSTM language models, in section 4.3 and 4.4, we argue the unstable LSTM language models are close to the stable regime, and the gap between stable and unstable models is an artifact of the particular way we impose stability.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"Thank you for your comments and feedback. We address each of your concerns below.\", \"take_home_message\": \"The message of the paper is that sequence learning happens, or can be made to happen, in the stable regime. The Lipschitz definition of stability (eq. 2) and the \\u201cdata-dependent\\u201d definition introduced in the experiments are complementary. The data-dependent definition is just a relaxation of the Lipschitz criteria-- we only require equation 2 to hold for inputs from the data-distribution. For the proofs and the majority of the experiments, the strict Lipschitz condition suffices. Most models can be made stable in the sense of equation 2 without performance loss. For LSTMs on language modeling, the data-dependent version illustrates even the nominally unstable LSTMs are close to the stable regime-- a truly unstable model would not satisfy even this weaker definition. We view results with both definitions as evidence recurrent models trained in practice operate in the stable regime.\", \"instantaneous_dynamics\": \"The theory in our paper does consider unrolling the RNNs over time. While the stability condition is stated purely in terms of the state-transition function from step t to step t+1, the main theoretical results (Proposition 3 and Theorem 1) specifically concern the unrolled RNN. In particular, our results show that the unrolled (stable) RNN can be approximated by a feed-forward network.\", \"spectral_normalization\": \"In our experiments, our focus is more on comparing the performance of stable and unstable models and less on the particular form of normalization used to achieve stability. In the RNN case, enforcing stability via constraining the spectral norm of the recurrent matrix is fairly routine. In the LSTM case, the stability conditions given in Proposition 2 are new and allow one to experiment with stable LSTMs. The updated version of the paper includes a discussion of these other works.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your detailed comments and feedback.\\n\\nWe agree it is difficult to know a priori whether a particular dataset will be amenable to stable models. However, stability can still be a clarifying idea in practice. Given a dataset where stable models perform comparably with unstable models, either the dataset does not require long-term memory (i.e. feed-forward approximation suffices), or the unstable models do not take advantage of it. We conjecture most recurrent models successfully trained in practice are operating in the stable regime. To further test this claim, it would be interesting to find datasets (if any) where unstable models significantly outperform stable models, or datasets where non-recurrent models aren\\u2019t competitive with their recurrent counterparts. \\n\\nIn the revision, we added discussion of several recent works constraining RNN matrices. These works try to keep the model just outside the stable regime to avoid vanishing gradients and side-step exploding gradients (i.e. take lambda ~ 1). The spectral norm thresholding technique for RNNs is straightforward, whereas the stability conditions for the LSTM are new. In either case, our focus is on using these techniques to understand the consequences of imposing stability on recurrent models.\\n\\nIn general, answering the question of accuracy is fairly delicate. We\\u2019re able to show stable and truncated/feed-forward models have the same accuracy. Bounds relating the accuracy of an unstable model with the accuracy of a stable one almost certainly require further assumptions on the data distribution. Obtaining such accuracy bounds for neural networks has been elusive, and part of the contribution of our work is proving a connection between the performance of two model classes (stable RNNs and truncated/feed-forward models) without needing to resolve these questions.\"}",
"{\"title\": \"Interesting theoretical angle on RNNs that provides insights but also feels incomplete\", \"review\": [\"This is an interesting paper that I expect will generate some interest within the ICLR community and from deep learning researchers in general. The definition of stability is both intuitive and sound and the connection to exploding gradients is perhaps the most interesting and useful part of the paper. The sufficient conditions yield practical techniques for increasing the stability of, e.g., an LSTM, by constraining the weight matrices. They also show that stable recurrent models can be approximated by models with finite historical windows, e.g., truncated RNNs. Experiments in Sec 4 suggest that stable models produced by constraining standard RNN architectures can compete with their unconstrained unstable counterparts, and often without necessitating significant changes to architecture or hyperparameters. The perhaps most interesting observations are in Sec 4.3, in which the authors claim that even fundamentally unstable models, e.g., unconstrained RNNs, often operate in a stable regime, at least when being applied to in-sample data. I lean toward acceptance at the moment, but I am eager to discuss with the authors and other reviewers as I am not 100% confident that I fully understood the theory.\", \"SUMMARY\", \"This paper proposes a simple, generic definition of \\u201cstability\\u201d for recurrent, non-linear dynamical systems such as RNNs: that given two hidden states h, h\\u2019, the difference between their updated states given input x is bounded by the product between the difference between the states themselves and a small multiplier. The paper then immediately draws a connection between stability and exploding gradients, asserting that unstable models are prone to gradient explosions during gradient descent-based training. In Sec 2.2, the paper presents sufficient conditions for basic RNNs and LSTMs to be stable. Secs 3.2 and 3.3 argue that stable recurrent models can be approximated by feedforward models during both inference and training with a finite history horizon, such as an RNN with a truncated history. Experiments in language and music modeling substantiate this claim: constrained, stable models are competitive with standard unconstrained models. Sec 4.3 sheds some light on this phenomenon, arguing that there is a weaker form of data-dependent stability and that even unstable models may operate in a stable regime for some problems, thus explaining the parity between stable and unstable models.\", \"STRENGTHS\", \"This paper is surprisingly engaging and easy to read.\", \"The theorems are clearly stated and the proofs appear sound to me, though I will admit that I am not confident that I would catch a significant bug.\", \"This paper provides a new (to me, anyway) and thought-provoking analysis of RNNs. In particular, I was especially interested in the observation that stable models can be approximated by truncated models and that there is a connection between stability and long-term dependencies. This seems consistent with the fact that for many problems, non-recurrent models (ConvNets, Transformers, etc.) are often competitive with more complex architectures.\", \"WEAKNESSES\", \"In practice it seems as though stability may depend on not only choice of model architecture but also the data themselves. There is probably no good way to know a priori what the stability characteristics of a given data set are, making it tough to apply the ideas of this paper in practice\", \"The literature review seems a bit limited and appears to ignore the growing body of work on constraining RNN weight matrices to address both exploding and vanishing gradients. For example, I am pretty confident that the singular thresholding trick for renormalizing neural net weights has been described in the literature previously.\", \"Although stable and unstable models appear to be competitive in experiments, the theoretical analysis provides no insights into stability and how it relates to accuracy.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting theoretical and practical results but false claims on RNNs\", \"review\": [\"An interesting problem to study on the stability of RNNs\", \"Investigation of spectral normalization to sequential predictions is worthwhile, especially Figure 2\", \"Some theoretical justification of SGD for learning dynamic systems following Hardt et al. (2016b).\", \"The take-home message of the paper is not clear. First, it defines a notion of stability based on Lipschitz-continuity and proves SGD can learn it. Then the experiments show such a definition is actually not correct, but rather a data-dependent one.\", \"The theory only looks at the instantaneous dynamics from time t to t+1, without unrolling the RNNs over time. Then it is not much different from analyzing feed-forward networks. The theorem on SGD is remotely related to the contribution of the paper.\", \"The spectral normalization technique that is actually used in experiments is not new\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"In this paper, the authors study the stability property of recurrent neural networks. Adopting the definition of stability from the dynamical system literature, the authors present a generic definition of stable recurrent models and provide sufficient conditions of stable linear RNNs and LSTMs. The authors also study the \\\"feed-forward\\\" approximation of recurrent networks and theoretically show that the approximation works for both inference and training. Experimental studies compare the performance of stable and unstable models on various tasks.\\n\\nThe paper is well-written and very pleasant to read. The notations are clear and the claims are relatively easy to follow. The theoretical analysis in Section 3 is novel, interesting and solid. However, the reviewer has concerns about the motivation of the presented analysis and insufficient empirical results.\\n\\nThe stability property only eliminates the exploding gradient problem, but not the vanishing gradient problem. The reviewer suspects that a stable recurrent model always suffers from vanishing gradient. Therefore, stability might not necessarily be a desirable property. There has been a line of work that constrains the weight matrix in RNNs to be orthogonal or unitary so that the gradient won't explode, e.g. [1], [2], [3]. It seems that the orthogonal or unitary conditions are stronger than the stability condition, and are probably less prone to the vanishing gradient problem. \\n\\nThe vanishing gradient problem is also related to the analysis in Section 3. If a recurrent network is very stable and has vanishing gradient, then a small perturbation of the initial hidden state has little effect on later time steps. This intuitively explains why it can be well approximated by using only the last k time steps. However, the recurrent model itself might not be a desirable model. In other words, although Theorem 1 shows that $y_T$ and $y_T^k$ can be arbitrarily close, $y_T$ might not be a good prediction.\\n\\nThe experimental study seems weak. Again, in the RNN case, constraining the singular values of the weight matrix is not a new idea. Furthermore, the results in Table 1 seem to suggest that the stable models perform worse than unstable ones. What is the benefit in using stable models? Proposition 2 is only a sufficient condition of a stable LSTM and it seems very restrictive, as the authors point out. This might explain the worse performance of the stable LSTMs in Table 1. The reviewer was expecting more experimental results to support the claims in Section 3. For example, an empirical study of the difference between a recurrent model and a \\\"feed-forward\\\" or truncation approximation.\", \"minor_comments\": \"* Lemma 1: $\\\\lambda$-contractive => $\\\\lambda$-contractive in $h$?\\n* Theorem 1: $k=O(...)$ => $k=\\\\Omega(...)$? Intuitively, a bigger k leads to a better feed-forward approximation.\\n\\n[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. ICML, 2016.\\n[2] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. NIPS, 2016.\\n[3] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. ICML, 2017.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJgy-n0cK7 | Inter-BMV: Interpolation with Block Motion Vectors for Fast Semantic Segmentation on Video | [
"Samvit Jain",
"Joseph Gonzalez"
] | Models optimized for accuracy on single images are often prohibitively slow to
run on each frame in a video, especially on challenging dense prediction tasks,
such as semantic segmentation. Recent work exploits the use of optical flow to
warp image features forward from select keyframes, as a means to conserve computation
on video. This approach, however, achieves only limited speedup, even
when optimized, due to the accuracy degradation introduced by repeated forward
warping, and the inference cost of optical flow estimation. To address these problems,
we propose a new scheme that propagates features using the block motion
vectors (BMV) present in compressed video (e.g. H.264 codecs), instead of optical
flow, and bi-directionally warps and fuses features from enclosing keyframes
to capture scene context on each video frame. Our technique, interpolation-BMV,
enables us to accurately estimate the features of intermediate frames, while keeping
inference costs low. We evaluate our system on the CamVid and Cityscapes
datasets, comparing to both a strong single-frame baseline and related work. We
find that we are able to substantially accelerate segmentation on video, achieving
near real-time frame rates (20+ frames per second) on large images (e.g. 960 x 720
pixels), while maintaining competitive accuracy. This represents an improvement
of almost 6x over the single-frame baseline and 2.5x over the fastest prior work. | [
"semantic segmentation",
"video",
"efficient inference",
"video segmentation",
"video compression"
] | https://openreview.net/pdf?id=BJgy-n0cK7 | https://openreview.net/forum?id=BJgy-n0cK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byl3d0NgxE",
"H1gbgaS3Cm",
"r1lbaHCXT7",
"HygcnN07pm",
"B1lRMlA7p7",
"SkgpZ7NJaQ",
"rJgToa76nQ",
"BygndE8qhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544732276066,
1543425257086,
1541821881191,
1541821617946,
1541820437595,
1541518085370,
1541385637211,
1541198964098
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1135/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1135/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1135/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1135/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1135/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1135/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1135/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1135/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths:\\nPaper uses an efficient inference procedure cutting inference time on intermediate frames by 53%, and yields better accuracy and IOU compared to one recent closely related work.\\n\\nThe ablation study seems sufficient and well-designed. The paper presents two feature propagation strategies and three feature fusion methods. The experiments compare these different settings, and show that interpolation-BMV is indeed a better feature propagation.\", \"weaknesses\": \"Reviewers believed the work to be of limited novelty. The algorithm is close to the optical-flow based models Shelhamer et al. (2016) and Zhu et al. (2017). Reviewer asserts that the main difference is that the optical-flow is replaced with BMV, which is a byproduct of modern cameras. R3 felt that there was insufficient experimental comparison with other baselines and that technical details were not clear enough.\", \"contention\": \"Authors assert that Shelhamer et al. (2016) does not use optical flow, and instead simply copies features from frame to frame (and schedules this copying). Zhu et al. (2017) then proposes an improvement to this scheme, forward feature warping with optical flow. In general, both these techniques fail to achieve speedups beyond small multiples of the baseline (< 3x), without impacting accuracy.\", \"consensus\": \"It was disappointing that some of the reviewers did not engage after the author review (perhaps initial impressions were just too low). However, after the author rebuttal R1 did respond and held to the position that the work should not be accepted, justified by the assertion that other modern architectures are lighter weight and able to produce fast predictions.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Some good ideas, but just not rated strongly enough by reviewers\"}",
"{\"title\": \"Final comments\", \"comment\": \"Dear all,\\n\\nI went through the answers from the authors and the opinions of the other reviewers. The authors provided an elaborated rebuttal with additional clarifications and experiments. The authors position themselves well w.r.t. to MPEG-flow and provide additional baselines.\\n\\nHowever, my concern regarding the relevance of this work in modern architectures which are lighter and powerful and can get fast predictions from frames only have not been adressed. The authors compare against LinkNet and DRN in terms of accuracy, but not in terms of throughput. Taking the results from the LinkNet paper, for images of size 1920 x 1080 on a NVIDIA Titan X GPU (relatively similar conditions with the current work), they reach 8.5 fps. The current pipeline which is more complex and not as easy to deploy does 9.1 fps. Of course, inter-BMV can do faster while sacrificing accuracy, but as I mentioned in my review, in practice the trade-off and decision of switching to video processing are not obvious.\\n\\nWrapping up, there are some nice ideas and results in the current paper, but I am not convinced for accepting it to the conference. I think this would be a very good contribution for the workshop track.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks very much for taking the time to review our paper.\\n\\n(1) The fact that block motion vectors (BMVs) are rougher motion estimates than optical flow is actually discussed in our results section (Sec. 4.1.2):\\n\\n \\u201cWhile motion vectors are slightly less accurate than optical flow in general, by cutting inference times by 53% on \\n intermediate frames (Sec. 3.3.1), prop-BMV enables operation at much lower keyframe intervals than optical flow to \\n achieve the same inference speeds. This results in a much more favorable accuracy-throughput curve.\\u201d\\n\\nAs a specific example (from Table 1), to achieve throughput of ~13.5 fps on CamVid requires operating at keyframe interval 10 with prop-flow (63.1 mIoU) but only keyframe interval 5 with prop-BMV (65.9 mIoU), which enables ~3% higher mIoU.\\n\\nIn essence, because motion estimation with block motion vectors is *much* cheaper than motion estimation with optical flow, block motion vectors allow us to operate at lower keyframe intervals, and thus achieve *higher accuracy*, for a given inference speed, than optical flow. This holds even given the small head-to-head accuracy difference between flow and BMV.\\n\\nThis is one of the key findings of our paper, and we would be happy to clarify further.\\n\\nAs for blocking artifacts, these are minor, but visible in our qualitative outputs (Fig. 8) -- for example, optical flow is better at preserving thin details, such as the street sign on the left (yellow in the segmentation output). In contrast, forward flow warping causes drastic distortion of moving objects (e.g. the \\u201cADAC\\u201d taxi), occluding objects in the background (e.g. the pedestrians). This is reflected in much lower *overall* quantitative accuracy (mIoU) for prop-flow than for inter-BMV. We will add a note about blocking artifacts to the caption of Fig. 8 in our revision.\\n\\n\\n(2) Our choice is inspired by the use of optical flow, in previous work (e.g. DFF), to warp deep features. Like block motion, optical flow is also computed directly on image pixels (albeit with more complex methods, e.g. Lucas-Kanade [i] or Farneback [ii]), but is still able to effectively warp the intermediate representations of ResNet-based image/video recognition networks. The core reason that pixel-level motion estimates suffice for feature warping is that fully convolutional architectures, such as e.g. FCN [iii] or DeepLab [iv] for segmentation, *preserve spatial structure* in their intermediate representations.\\n\\n[i] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In DARPA Image Understanding Workshop, pages 121\\u2013130, 1981.\\n[ii] G. Farneback. Two-frame motion estimation based on polynomial expansion. In SCIA, 2003.\\n[iii] J. Long et al. Fully convolutional networks for semantic segmentation. CVPR 2015.\\n[iv] Chen et al. Rethinking atrous convolution for semantic image segmentation. TPAMI 2017.\\n\\n\\n(3) We evaluated on two datasets (CamVid, Cityscapes). These are the two most popular benchmarks for segmentation research, and are representative of *realistic video*, which demonstrates strong temporal structure (i.e. lack of random motion from frame-to-frame). Our key point is that we exploit this temporal continuity to accelerate segmentation for practical applications, such as video analytics, interactive film editing, and autonomous perception.\\n\\nTo directly address the reviewer\\u2019s point that \\u201cthe authors are expected to demonstrate when the motion [is] chaotic\\u201d, our techniques are not dependent on any particular structure in the motion vectors. We apply a well-known warping operator that spatially transforms the features with a bilinear upsampling of the motion vector maps [i]. This operation applies even if the vector maps are highly dense or irregular.\\n\\nPlease also see our responses to other reviewers, which contain e.g. more extensive comparison with other state-of-the-art techniques!\\n\\n[i] Jaderberg et al. Spatial Transformer Networks. NIPS 2015.\\n\\n\\nThanks once again for reading through our paper. We look forward to hearing back!\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks a lot for taking the time to review our paper.\\n\\n(1) Limited novelty -- Our paper is careful to draw the distinction between our work and Shelhamer et al. (2016) and Zhu et al. (2017). First, Shelhamer et al. (2016) does not use optical flow, and instead simply copies features from frame to frame (and schedules this copying). Zhu et al. (2017) then proposes an improvement to this scheme, forward feature warping with optical flow. In general, both these techniques fail to achieve speedups beyond small multiples of the baseline (< 3x), without impacting accuracy. The key reason for this is that both feature copying and forward warping are unable to capture *new scene content*. In fast moving footage (e.g. driving footage), copied and warped features quickly become obsolete, and warping error compounds significantly (see e.g. qualitative outputs, Fig. 8, in our paper).\\n\\nIn Inter-BMV, we exploit the observation that scenes tend to have semantic start and end points -- e.g. a pedestrian walking across a crosswalk, a car turning a street corner. This allows us to leverage bi-directional warping, a new idea, to strong effect. Our second insight -- that video is compressed by default in a temporally referential manner (e.g. P-/B-frames in H.264 video) -- lends itself to an alternate, computation-free motion estimation scheme. This, together with our observation that video can be more efficiently processed in mini-batches, e.g. of 10 frames, enables us to trade off a small amount of latency for a large gain in throughput. 
Ten video frames consist of 330 ms of footage at 30 fps -- this is comparable to the human visual reaction time (230-400 ms, see studies [i]), yet allows us to accelerate segmentation by almost *6x* over frame-by-frame, while maintaining within 1-2% of baseline accuracy.\\n\\nTo the best of our knowledge, none of our core ideas -- (1) bi-directional feature warping, (2) the use of block motion vectors for deep representation warping, and (3) mini-batch processing of video to accelerate segmentation throughput -- have been proposed or published before.\\n\\n[i] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4374455/ \\n\\n\\n(2) Here are comparisons with other SoAs (ranked by accuracy). Note that we significantly outperform Clockwork Convnets (Shelhamer et al. 2016). Note also that CC does not report results on CamVid, nor does it report inference times.\\n\\nCityscapes\\t\\t\\tAccuracy (mIoU)\\tThroughput (fps)\\t Key interval\\nClockwork [i]\\t\\t\\t 64.4\\t\\t\\t --\\t\\t\\t 2\\nDFF [ii]\\t\\t\\t\\t 68.7\\t\\t\\t 4.0\\t\\t\\t 5\\nGRFP [iii]\\t\\t\\t 69.4\\t\\t\\t 2.1\\t\\t\\t 5\\nInter-BMV (us)\\t\\t\\t 70.5\\t\\t\\t 4.9\\t\\t\\t 5\\n\\nCamVid\\t\\t\\t Accuracy (mIoU)\\tThroughput (fps)\\t Key interval\\nGRFP [iii]\\t\\t\\t 66.1\\t\\t\\t --\\t\\t --\\nDFF [ii]\\t\\t\\t\\t 67.4\\t\\t\\t 8.0\\t\\t\\t 3\\nInter-BMV (us)\\t\\t\\t 68.7\\t\\t\\t 9.1\\t\\t\\t 3\\n\\n[i] Shelhamer et al. Clockwork Convnets for Video Semantic Segmentation. ECCV Workshops 2016.\\n[ii] Zhu et al. Deep Feature Flow for Video Recognition. CVPR 2017.\\n[iii] D. Nilsson and C. Sminchisescu. Semantic video segmentation by gated recurrent flow propagation. CVPR 2018.\\n\\nFor a more extensive comparison with a number of other segmentation architectures, please see our response to AnonReviewer1!\\n\\n\\n(3) We describe our task network as follows (Sec. 3.1, p. 
3):\\n\\n \\u201cWe identify two logical components in our final model: a feature network, which takes as input an image i \\u2208 \\n R^{1\\u00d73\\u00d7h\\u00d7w} and outputs a representation f_i \\u2208 R^{1\\u00d7A\\u00d7h/16\\u00d7w/16}, and a task network, which given the \\n representation, computes class predictions for each pixel in the image, p_i \\u2208 R^{1\\u00d7C\\u00d7h\\u00d7w}. The task network N_task is built by concatenating three blocks: (1) a feature projection block, which reduces the\\n feature channel dimensionality to A/2, (2) a scoring block, which predicts scores for each of the C segmentation \\n classes, and (3) an upsampling block, which bilinearly upsamples the score maps to the resolution of the input \\n image.\\u201d\\n\\nWe used the DeepLab segmentation architecture (Chen et al. 2017), so we omitted further details about the task network, provided here:\\n (1) Feature projection block - R^{1\\u00d7A\\u00d7h/16\\u00d7w/16} -> R^{1\\u00d7A/2\\u00d7h/16\\u00d7w/16}\\n (2) Scoring block - R^{1\\u00d7A/2\\u00d7h/16\\u00d7w/16} -> R^{1\\u00d7C\\u00d7h/16\\u00d7w/16}\\n (3) Upsampling block - R^{1\\u00d7C\\u00d7h/16\\u00d7w/16} -> R^{1\\u00d7C\\u00d7h\\u00d7w}\\n\\nRegarding Algorithm 2 in the Appendix, good catch!\\n\\tLine 8 should read f_{k+n} \\u2190 N_{feat} (I_{k+n}) NOT\\n\\t\\t\\t f_{k+n} \\u2190 N_{feat} (F_{k+n})\\n\\twhere I_{k+n} refers to the k+n-th frame in the video.\\n\\nWe will correct this in our revision.\\n\\n\\nThanks a lot once again for your comments. We look forward to your response!\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks very much for your thoughtful comments on our paper.\\n\\n(1) Thanks for pointing our attention to Kantorov and Laptev 2014. While Kantorov and Laptev do explore MPEG block motion vectors, they do so in a very different context, treating motion vectors as low-level video features (\\u201cdescriptors\\u201d) to learn more effectively on video. This is a very similar idea to that proposed in CoViAR [i], which trains directly on video I-frames, motion vectors, and residuals (also in the context of action recognition). CoViAR (Wu et al. 2018) is cited and discussed in our paper (Sec. 2.3):\\n\\n \\u201cWu et al. (2018) train a network directly on compressed video to improve both accuracy and performance on video \\n action recognition... Unlike these works, our main focus is not efficient training, nor reducing the physical size of \\n input data to strengthen the underlying signal for video-level tasks, such as action recognition.\\u201d\\n\\nIn contrast, we center our efforts on efficient, frame-level inference (Sec. 2.3 cont\\u2019d):\\n\\n \\u201cWe instead focus on a class of dense prediction tasks, notably semantic segmentation, that involve high- \\n dimensional output (e.g. a class prediction for every pixel in an image) generated on the original uncompressed \\n frames of a video. This means that we must still process each frame in isolation. To the best of our knowledge, we \\n are the first to propose the use of compressed video artifacts to warp deep neural representations, with the goal of... \\n improved inference throughput on realistic video.\\u201d\\n\\nWe will add a citation to Kantorov and Laptev 2014 in our paper revision. Thanks once again for the reference.\\n\\n[i] Wu et al. Compressed Video Action Recognition. CVPR 2018.\\n\\n\\n(2) We compare to these methods in the table below.\\n\\n\\n(3) Here are comparisons to CC, the single-frame models we cited, and other SoA methods. 
Note that even though none of these schemes report inference times, we still outperform (or are competitive) on accuracy. We\\u2019d be happy to include this table in the revised paper, if helpful.\\n\\nCityscapes\\t\\t Accuracy (mIoU)\\t Throughput (fps) Model notes\\nDFF [i]\\t\\t\\t 72.0\\t\\t\\t 3.0\\t\\t KI=3*\\nInter-BMV (us)\\t\\t\\t 72.5\\t\\t\\t 3.4\\t\\t KI=3\\n\\nClockwork (2016) [ii]\\t\\t 64.4\\t\\t\\t --\\t\\t Alternating (best)\\nYu et al. (2017)\\t\\t 70.9\\t\\t\\t --\\t\\t DRN-C-42 (best)\\nChen et al. (2017)\\t\\t 71.4\\t\\t\\t --\\t DL-101 (best)\\nLin et al. (2017)\\t\\t 73.6\\t\\t\\t --\\t\\t RN-101 (best)\\n\\nCamVid \\t\\t\\tAccuracy (mIoU)\\t Throughput (fps) Notes\\nDFF [i]\\t\\t\\t 67.4\\t\\t\\t 8.0\\t\\t KI=3\\nInter-BMV (us)\\t\\t\\t 68.7\\t\\t\\t 9.1\\t\\t KI=3\\n\\nGRFP (2018) [iii]\\t\\t 66.1\\t\\t\\t --\\t\\t D8+GRFP (best)\\nLinkNet (2017) [iv]\\t\\t 68.3\\t\\t\\t --\\t\\t LinkNet (best)\\nBilinski et al. (2018)\\t\\t 70.9\\t\\t\\t --\\t Single scale (best)\\n\\n*KI = keyframe interval\\n\\n[i] Zhu et al. Deep Feature Flow for Video Recognition. CVPR 2017. \\n[ii] Shelhamer et al. Clockwork Convnets for Video Semantic Segmentation. ECCV Workshops 2016. \\n[iii] D. Nilsson and C. Sminchisescu. Semantic video segmentation by gated recurrent flow propagation. CVPR 2018.\\n[iv] A. Chaurasia and E. Culurciello. LinkNet: exploiting encoder representations for efficient semantic segmentation. arXiv 2017.\\n\\n\\n(4) This is a good suggestion. We compare against Dilated ResNets (Yu et al. 2017) and LinkNet (Chaurasia et al. 2017) in the previous table.\\n\\nThanks once again for taking the time to review our paper. We look forward to hearing back!\"}",
"{\"title\": \"Interesting idea but needs further clarification\", \"review\": \"In this paper, the authors propose a novel segmentation scheme that combines block motion vectors for feature warping, bi-directional propagation, and feature fusion. Experiments demonstrate its effectiveness compared with alternative methods. However, I still have several concerns:\\n1. As the block motion vectors are generally rough estimates, they may damage the performance on the tasks. The authors should further clarify how the imperfect estimation influences the performance, e.g., the blocking artifacts. \\n2. The features are abstract representations of an image, while the motion vectors are obtained via pixel comparison. The authors should further justify that the motion estimation can be applied to the latent features directly. \\n3. The authors are expected to conduct more comprehensive experiments. Motion vectors are consistent in the current dataset. The authors are expected to demonstrate what happens when the motion is chaotic.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper presents a feature interpolation strategy that has limited novelty\", \"review\": \"This paper presents a feature interpolation strategy for fast semantic segmentation in videos. They first compute features of keyframes, then interpolate intermediate frames based on block-motion vectors (BMV), and finally fuse the interpolated features as input to the prediction network. The experiments show that the model outperforms one recent, closely related work wrt inference time while preserving accuracy.\", \"positive\": \"1. Efficient inference. The strategy cuts inference time on intermediate frames by 53%, while achieving better accuracy and IoU compared to the one recent, closely related work.\\n\\n2. The ablation study seems sufficient and well-designed. The paper presents two feature propagation strategies and three feature fusion methods. The experiments compare these different settings, and show that interpolation-BMV is indeed a better feature propagation strategy.\", \"negative\": \"1. Limited novelty. The algorithm is close to the optical-flow based models Shelhamer et al. (2016) and Zhu et al. (2017). The main difference is that the optical flow is replaced with BMV, which is a byproduct of modern cameras. \\n\\n2. Insufficient experimental comparison with other baselines. In experiments, the paper compares the proposed model with only one baseline, Prop-flow, which is not a sufficient comparison to show that the paper really outperforms the state-of-the-art models. For example, the authors should also compare with \\u201cClockwork convnets for video semantic segmentation.\\u201d \\n\\n3. Some technical details are not clear. For example, in Section 3.1, the paper mentions that the task network is built by concatenating three components but never clarifies them. 
Also, in algorithm 2, line 13 shows that F is a function with two entries, but line 8 indicates that F is a feature.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Encouraging results but main idea is not novel and some baselines are missing\", \"review\": \"# Paper summary\\nThis paper advances a method for accelerating semantic segmentation on video content at higher resolutions. Semantic segmentation is typically performed over single images, while there is un-used redundancy between neighbouring frames. The authors propose exploiting this redundancy and leveraging block motion vectors from the MPEG H.264 video codec, which encodes residual content between keyframes. The block motion vectors from H.264 are here used to propagate feature maps from keyframes to neighbouring non-keyframe frames (in both temporal directions), thus avoiding an additional full forward pass through the network, and this is integrated in the training pipeline. Experimental results on CamVid and Cityscapes show that the proposed method gets competitive results while saving computational time.\\n\\n\\n# Paper strengths\\n- This paper addresses a problem of interest for both academic and industrial purposes.\\n- The paper is clearly written and the authors argue their contributions well, adding relevant plots and qualitative results where necessary.\\n- The two-way interpolation with block motion vectors and the fusion of interpolated features are novel and seem effective.\\n- The experimental results, in particular for the two-way BMV interpolation, are encouraging.\\n\\n\\n# Paper weaknesses\\n\\n- The idea of using Block Motion Vectors from compressed videos (x264, xvid) to capture motion with low cost has been previously proposed and studied by Kantorov and Laptev [i] in the context of human action recognition. Flow vectors are obtained with bilinear interpolation from motion blocks between neighbouring frames. Vectors are then encoded in Fisher vectors and not used with CNNs as done in this paper. In both works, block motion vectors are used as low-cost alternatives to dense optical flow. 
I would suggest to cite this work and discuss similarities and differences.\\n\\n\\n- Regarding the evaluation of the method, some recent methods dealing with video semantic segmentation, also using ResNet101 as backbone, are missing, e.g. low latency video semantic segmentation[ii]. Pioneer Clockwork convnets are also a worthy baseline in particular in terms of computational time (results and running times on CityScapes are shown in [ii]). It would be useful to include and compare against them.\\n\\n- In Section 4.1.2 page 7 the authors mention a few recent single-frame models ((Yu et al. (2017); Chen et al. (2017); Lin et al. (2017); Bilinski & Prisacariu (2018)) as SOTA methods and the current method is competitive with them. However I do not see the results from the mentioned papers in the referenced Figures. Is this intended?\\n\\n- On a more general note related to this family of approaches, I feel that their evaluation is usually not fully eloquent. Authors compare against similar pipelines for static processing and show gains in terms of computation time. The backbone architecture, ResNet-101 is already costly for high-resolution inputs to begin with and avoiding a full-forward pass brings quite some gains (though a part of this gain is subsequently attenuated by the latency caused by the batch processing of the videos). There are recent works in semantic segmentation that focus on architectures with less FLOPs or memory requirements than ResNet101, e.g. Dilated ResNets [iii], LinkNet[iv]. So it could be expected that image-based pipelines to be getting similar or better performance in less time. I expect the computational gain on such architectures when using the proposed video processing method to be lower than for ResNet101, and it would make the decision of switching to video processing or staying with frame-based predictions more complex. 
\\nThe advantage of static image processing is simpler processing pipelines at test time without extra parameters to tune. It would be interesting and useful to compare with such approaches on more even grounds.\\n\\n\\n# Conclusion \\nThis paper takes on an interesting problem and achieves interesting results. The use of Block Motion Vectors has been proposed before in [i] and the main novelty of the paper remains only the interpolation of feature maps using BMVs. The experimental section is missing some recent related methods to benchmark against.\\nThis work has several strong and weak points. I'm currently on the fence regarding my decision. For now I'm rating this work between Weak Reject and Borderline.\\n\\n# References\\n\\n[i] V. Kantorov and I. Laptev, Efficient feature extraction, aggregation and classification for action recognition, CVPR 2014\\n[ii] Y. Li et al., Low-Latency Video Semantic Segmentation, CVPR 2018\\n[iii] F. Yu et al., Dilated Residual Networks, CVPR 2017\\n[iv] A. Chaurasia and E. Culurciello, LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation, arXiv 2017\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJlJ-2CqtX | Success at any cost: value constrained model-free continuous control | [
"Steven Bohez",
"Abbas Abdolmaleki",
"Michael Neunert",
"Jonas Buchli",
"Nicolas Heess",
"Raia Hadsell"
] | Naively applying Reinforcement Learning algorithms to continuous control problems -- such as locomotion and robot control -- to maximize task reward often results in policies which rely on high-amplitude, high-frequency control signals, known colloquially as bang-bang control. While such policies can implement the optimal solution, particularly in simulated systems, they are often not desirable for real world systems since bang-bang control can lead to increased wear and tear and energy consumption and tends to excite undesired second-order dynamics. To counteract this issue, multi-objective optimization can be used to simultaneously optimize both the reward and some auxiliary cost that discourages undesired (e.g. high-amplitude) control. In principle, such an approach can yield the sought after, smooth, control policies. It can, however, be hard to find the correct trade-off between cost and return that results in the desired behavior. In this paper we propose a new constraint-based approach which defines a lower bound on the return while minimizing one or more costs (such as control effort). We employ Lagrangian relaxation to learn both (a) the parameters of a control policy that satisfies the desired constraints and (b) the Lagrangian multipliers for the optimization. Moreover, we demonstrate policy optimization which satisfies constraints either in expectation or in a per-step fashion, and we learn a single conditional policy that is able to dynamically change the trade-off between return and cost. We demonstrate the efficiency of our approach using a number of continuous control benchmark tasks as well as a realistic, energy-optimized quadruped locomotion task. | [
"reinforcement learning",
"continuous control",
"robotics",
"constrained optimization",
"multi-objective optimization"
] | https://openreview.net/pdf?id=rJlJ-2CqtX | https://openreview.net/forum?id=rJlJ-2CqtX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syl53KTgg4",
"rJlwa-f0JE",
"ryeVSh2pkE",
"r1e9cU3tyV",
"B1lIiZ19Am",
"HkgBKL6FA7",
"SJgudF2KCX",
"HkgX7K2FA7",
"rkeKAunKAX",
"ryxZT_2YAX",
"HJxn1OnFAQ",
"B1ed4mXbCm",
"BkxWG9hlTQ",
"SkggB79t2X",
"B1g4z20Vs7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544767921764,
1544589758556,
1544567867574,
1544304274089,
1543266717807,
1543259773247,
1543256431545,
1543256346742,
1543256272956,
1543256249226,
1543256036473,
1542693680063,
1541618184576,
1541149496275,
1539791884406
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1134/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1134/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1134/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1134/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1134/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1134/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1134/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1134/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1134/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1134/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1134/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1134/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1134/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1134/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1134/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths: The paper introduces a novel constrained-optimization method for RL problems.\\nA lower-bound constraint can be imposed on the return (cumulative reward), \\nwhile optimizing one or more other costs, such as control effort. \\nThe method learns multiple \\nThe paper is clearly written. Results are shown on the cart-and-pole, a humanoid, and a realistic Minitaur \\nquadruped model. AC: Being able to learn conditional constraints is an interesting direction.\", \"weaknesses\": \"There are often simpler ways to solve the problem of high-amplitude, high-frequency\\ncontrols in the setting of robotics. \\nThe paper removes one hyperparameter (lambda) but then introduces another (beta), although beta\\nis likely easier to tune. The ideas have some strong connections to existing work in \\nsafe reinforcement learning.\", \"ac\": \"Video results for the humanoid and cart-and-pole examples would have been useful to see.\", \"summary\": \"The paper makes progress on ideas that are fairly involved to explore and use\\n(perhaps limiting their use in the short term), but that have potential, \\ni.e., learning state-dependent Lagrange multipliers for constrained RL. The paper is perfectly fine\\ntechnically, and does break some new ground in putting a particular set of pieces together. \\nAs articulated by two of the reviewers, from a pragmatic perspective, the results are not \\nyet entirely compelling. I do believe that a better understanding of working with constrained RL, in ways that are somewhat different from those used in Safe RL work, would be valuable. \\n\\nGiven the remaining muted enthusiasm of two of the reviewers, and in the absence of further\\ncalibration, the AC leans marginally towards a reject. 
Current scores: 5,6,7.\\nAgain, the paper does have novelty, although it's a pretty intricate setup.\\nThe AC would be happy to revisit upon global recalibration.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"some novelty; muted endorsements; solid writing and results; revisit?\"}",
"{\"title\": \"response\", \"comment\": \"I have read the author response and my opinion remains the same.\\n\\n2) Hyperparameter selection\\n\\nThe authors did not remove the hyperparameter completely, but introduced a new hyperparameter, stating that this hyperparameter could make learning more robust. The robustness of the new hyperparameter has not been verified.\\n\\n3) Relation to safe reinforcement learning\\n\\nThe paper makes some modifications to the CMDP and states in the abstract that the proposed approach could satisfy the desired constraints. But in the rebuttal, the authors stated that \\\"there\\u2019s unfortunately no guarantee that the constraints will be satisfied at every moment during training\\\". It is unclear whether the proposed method can guarantee the desired constraints, and how to obtain \\\"a constant speed is more desirable than a fluctuating one\\\" as stated in Section 3.3.\"}",
"{\"title\": \"Updated evaluation\", \"comment\": \"I thank the authors for their comments and revisions. I have updated my evaluation to \\\"6: marginally above acceptance threshold.\\\" The revision makes the paper's contributions clearer (which was my primary concern). I think that the paper makes contributions that are potentially interesting to researchers in reinforcement learning (but I still don't think that the contributions are exceptionally strong). I am still concerned about the issues I had raised before w.r.t. state-dependent lower bounds and I still think that many of the issues the paper tackles (i.e., mitigating bang-bang control) are often relatively easy to tackle with heuristic methods (like changing the cost function) -- this is what roboticists tend to do for hardware implementations of optimal controllers. These are the reasons for my evaluation.\"}",
"{\"title\": \"further input needed from R1 & R2\", \"comment\": \"The discussion period is ending soon (Dec 9?).\\nThe authors have replied in some detail.\\nFeedback from R1 & R2 on the replies would be particularly useful.\\nDo include a short summary of pros & cons from your perspective if you can.\\n\\nYour time is very much appreciated.\\n-- area chair\"}",
"{\"title\": \"Corrected error\", \"comment\": \"Thank you for updating your review and pointing out this error in the paper, we should have spotted this sooner! We have uploaded a corrected version.\"}",
"{\"title\": \"Thanks for the clarification\", \"comment\": \"Thanks for the clarification.\", \"further_comment\": \"(2) So the convex combination is between (Qr-Vr) and -Qc, not (Qr-Vr) and Qc (this is what is in the paper), please update the paper accordingly. I still don't get what the invariance is here. \\n\\nOther than that, I think my concerns are addressed properly with the current version of the paper and believe it will be of interest to many reinforcement learning researchers and recommend acceptance.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for your comments. Please find below our response to your questions and concerns.\\n\\n1) Technical contributions\\nWe are glad that the reviewer agrees that we are tackling a long-standing and important problem and acknowledge the fact that neither the definition of constrained MDPs nor the application of Lagrangian relaxation to solve these problems is novel by itself. We should have stated our exact technical contributions more clearly and have adapted the paper to do so. For completeness we will list these below:\\na) We introduce pointwise, per-state constraints to learn more consistent behavior compared to a single global constraint, and regress the resulting state-dependent Lagrangian multipliers using a neural network to exploit generalization across similar states.\\nb) Instead of recombining the reward and cost directly on the environment side and learning a single value estimate, we train a critic network to output both return and penalty value estimates as well as the Lagrangian multipliers themselves, effectively providing more structure to the critic. We only combine the different terms appropriately for the actor update.\\nc) We show that we can train a single, bound-conditional policy that can optimize penalty across a range of bounds and can be used to dynamically trade off reward and penalty.\\n\\n2) Comparison with the original benchmark reward\\nWe have extended the results on Cartpole to include the original reward as defined in the DM Control Suite (incl. bonus for low control). 
We found that compared to the original setting, our method is able to reduce the average control norm by over 50% across the entire episode, and by over 80% after the swingup phase, without significant reduction in the average return as measured without control bonus.\\n\\n3) Claims about bang-bang control in continuous RL\\nThe reviewer is right in that the claim of RL often leading to bang-bang control is too strongly worded. This is only the case when the objective function is not well-designed and one is naively optimizing for success only. Designing a proper objective function is however often not trivial and more of an art, requiring several iterations to achieve the desired behavior. This work tries to remove some of the complexities in designing such a function.\\n\\n4) State-dependent lower bound\\nDefining a state-dependent bound is indeed not trivial and requires knowledge of what is feasible in the system, and as such we leave this up to future work. In this paper we have made the approximation that the state distribution is stationary and the discount is large enough to assume that the value is more or less constant. While this holds for locomotion tasks, this does not apply in e.g. the swingup phase of the cartpole task and as a result the penalty is completely ignored during this phase.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for your comments. Please find below our response to your questions and concerns.\\n\\n1) Pseudocode\\nWe apologise that the optimization procedure was unclear. We have added pseudocode of the general optimization procedure in Appendix A.\\n\\n2) Hyperparameter selection\\nThe reviewer is completely right that we are removing one hyperparameter by introducing another. However, there are two reasons why this might still be beneficial: one is that the penalty coefficient is now effectively dynamic and can change during training, ensuring higher chances of finding a good solution. Second, by elevating the hyperparameter one level up, we hope that the learning is indeed less sensitive to its specific setting. Indeed, we found in practice that we get similar results for \\\\beta within some orders of magnitude, which requires significantly less tuning compared to a fixed \\\\alpha.\\n\\n3) Relation to safe reinforcement learning\\nIt is indeed the case that constrained MDPs are often considered in safe RL. In those cases there is generally an upper bound on a penalty function that should never be exceeded, including during training itself. These algorithms generally restrict policy updates to remain within the constraint-satisfying regime. While our approach can similarly be applied to upper bounds on penalties, there\\u2019s unfortunately no guarantee that the constraints will be satisfied at every moment during training, but only at convergence. As such it is not clear how these methods would apply to our specific experimental setups.\"}",
"{\"title\": \"Author response\", \"comment\": \"4) Comparing the constrained with unconstrained optimization and the optimality of bang-bang control\\nWe agree that comparing the resulting control signal of the constrained with the unconstrained case as-is is unfair. The goal here was solely to illustrate the issue that unless some form of penalty is included in the optimization objective, agents often learn bang-bang control. In this work, we want to automatically tune the magnitude of this penalty based on some bound on the success rate. Specifically for cartpole, we show that we can greatly reduce the average control norm without sacrificing task performance. We have added additional results to reflect this, and have also added a comparison with the reward as originally defined in the DM Control Suite.\\nRegarding the optimality of bang-bang control, we meant to refer to the swingup phase itself, not after; apologies for the lack of clarity. In order to swing up the cartpole as quickly as possible, applying maximum control is indeed the best thing to do. This is related to minimum-time optimal control, where, based on Pontryagin's maximum principle, the optimal control value to reach a certain state in the minimum amount of time will always be an extreme value within the admissible range of controls. As to why we still observe bang-bang control after the swingup phase, this is not clearly understood. Perhaps the minimum-time optimal control principle still holds here, as the policy is generally never able to exactly match the perfect balancing state. Another plausible reason is that this is a result of exploration noise. As the reviewer notes, bang-bang control is only a solution out of a possibly large set. 
We however find in practice that without any additional objective, more often than not policies learn this style of control.\\n\\n5) Velocity error\\nThe reviewer is correct in that the term \\u201cerror\\u201d is badly chosen in this case, as it is indeed not necessarily required to stick to close to the return bound in order to optimize the penalty. A more appropriate term is \\u201covershoot\\u201d, and we have adapted the paper to use this wording. In the context of this experiment however, the reward and cost are strictly antagonistic, so the smaller the overshoot with the bound the better.\\n\\n6) Episode termination on large penalty\\nEnding the episode when the constraint is violated would indeed be an alternative to solving the constrained optimization problem in some cases. There are two conditions however: the constraint has to be put on the penalty and has to be satisfied at the start of learning (or each episode will terminate instantly), and the agent does not have to learn to recover from situation where it can not satisfy the constraint. For example, in the case of an extreme disturbance, the agent might have to output more power than the constraint would allow.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for your comments. Please find below our response to your questions and concerns.\\n\\n1) Definition of desired trade-off and practical optimization\\nUsing \\u201ctrade-off\\u201d in Section 3.2 is indeed an unfortunate choice of words. What we intended to explain was that when the gradient with respect to \\\\lambda is zero, either \\\\lambda itself is zero (and optimizing for the penalty does not worsen the return), or the average value is exactly the bound and we satisfy the constraint, while still minimizing the penalty.\\nMore broadly, this work situates itself in multi-objective optimization, where the objectives counteract each other at least part of the time, meaning that in order to gain in one objective, one has to lose in the other. It is here that this \\u201ctrade-off\\u201d comes into play. Different ratios of the objectives will (generally) lead to different results. It is, however, a priori not trivial to define the right ratio for the desired behavior (e.g. a certain minimum speed, or maximum power usage). Formulating the problem in terms of constraints on either objective is often more intuitive.\\nAs noted correctly by the reviewer, a full optimization of \\\\lambda in Equation (4) in the inner loop would lead to \\\\lambda being either zero or infinity. In practice, however, one generally updates \\\\lambda only incrementally before optimizing the policy w.r.t. the updated \\\\lambda. Ideally, one would optimize the policy until convergence before updating \\\\lambda, as one can then effectively switch the inner and outer optimization steps; however, we found that in practice this is not necessary and we instead perform one update to \\\\lambda for each policy update.\\nWe have added pseudocode for the exact optimization procedure in the appendix to make this more clear.\\n\\n2) Convex combination\\nIf we define L_1 = Q_r-V_r* and L_2 = -Q_c, then L_1 and L_2 are two objectives that we are trying to maximize simultaneously, but in different ratios depending on the value of \\\\lambda\\u2019. By changing base to L_1 and L_2 we do effectively get a convex combination of L_1 and L_2. As such, Q_\\\\lambda\\u2019 is always smaller than or equal to max(L_1, L_2). If we don\\u2019t add this normalization step, Q_\\\\lambda can become much larger, with increasing \\\\lambda, than either L_1 or L_2, which we have found can lead to stability problems in the policy update. Another way to look at this is that in order to optimize the return, we want to be able to suppress the penalty until we meet the lower bound.\\nIt is indeed correct that the optimization objective does change when optimizing Equation (6) as is. What one should do instead is only consider the gradient w.r.t. \\\\lambda\\u2019 coming from the first term in the numerator. The way we implemented this in practice ensured this implicitly, and it was an oversight on our part not to mention this in the original submission; our apologies.\\n\\n3) Benefit of automatic trade-off\\nIt is indeed the case that during training the constraint can still be violated. Moreover, in the way we formulated it, with a lower bound on the return, this will most definitely be the case. It is only at convergence, when the gradient w.r.t. \\\\lambda is 0, that the constraint is strictly satisfied.\\nThe main benefit of doing this trade-off automatically is that one can specify the desired behavior in terms of a value in one of the objectives, instead of trying out different ratios and verifying the result. Moreover, there is the added flexibility of the ratios changing during training itself, which may help to overcome issues with exploration when the penalty dominates the reward too much at the start of learning.\"}",
"{\"title\": \"Author response\", \"comment\": \"We would first of all like to thank the reviewers for their insightful comments and greatly appreciate the feedback. Our apologies for the points that were unclear; we will try to answer all questions as clearly as possible in response to each individual review.\\n\\nBased on the reviewers\\u2019 comments, the main changes to the paper are the following:\\n1) We explicitly report average reward and control penalty for the cart-pole swingup task, and compare the constrained and unconstrained cases with a policy trained with the original task reward that included a fixed control penalty. We show that we can achieve the same return with a significantly lower penalty.\\n2) We have added additional results on two other, more challenging, continuous control benchmark tasks: humanoid stand and walk. For the stand task, we observe a similar trend as for cart-pole, where the constraint-based approach is able to achieve a much lower penalty without sacrificing task performance. For the walk task, we satisfy the imposed lower bound.\\n3) We stated the technical contributions more explicitly in Section 1.\\n4) We added pseudocode for the optimization procedure in Appendix A.\"}",
"{\"title\": \"authors -- chance to respond?\", \"comment\": \"Thanks for all the reviews.\\nIf the authors wish to respond, this would be a good time.\\n-- area chair\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes a model-free reinforcement learning algorithm with a constraint on reward, with demonstrations on cartpole and quadruped locomotion.\", \"strength\": \"(1) challenging examples like the quadruped.\\n (2) results seem to indicate the method is effective\", \"there_are_several_things_i_would_like_the_authors_to_clarify\": \"(1) In section 3.2, why would solving (4) give \\\"exactly the desired trade-off between reward and cost\\\"? First of all, how is the desired trade-off defined? And how is (4) solved exactly? If it is solved iteratively, i.e., alternating between the inner min and outer max, then during the inner loop, wouldn't the optimal value for \\\\lambda be infinity when the constraint is violated (which will be the case at the beginning)? And when the constraint is satisfied, wouldn't \\\\lambda = 0? How do you make sure the constraint will still be satisfied during the outer loop, since it will not incur a penalty (\\\\lambda=0)? Even if you have a lower bound on \\\\lambda, this is introducing an additional hyperparameter, while the purpose of the paper is to eliminate hyperparameters.\\n(2) In section 3.2, equation 6. This is clearly not a convex combination of Qr-Vr and Qc, since a convex combination requires nonnegative coefficients. The subtitle is scale invariance, and I cannot find what the invariance is here (in fact, the word invariance/invariant only appears once in the paper). By changing the parametrization, you are no longer solving the original problem (equation 4), since in equation (4), the only thing that is related to \\\\lambda is (Qr-Vr), and in (6), you introduce \\\\lambda to Qc as well. How is this change justified?\\n(3) If I am not mistaken, the constraint can still be violated with your method. While from the results it seems your method outperforms manually selecting weights to do the trade-off, I don't get an insight on why this automatic way to do the trade-off is better. And this goes back to \\\"exactly the desired trade-off between reward and cost\\\" in point (1): how is this defined?\\n(4) The comparison in the cartpole experiment doesn't seem fair at all, since the baseline controller is not optimized for energy; there is no reason why it would be comparable to one that is optimized for energy. And why would a controller that \\\"switch[es] between maximum and minimum actuation\\\" be \\\"indeed the optimal solution\\\" after swingup? Maybe it is \\\"an\\\" optimal solution, but wouldn't a controller that does nothing be more optimal (assuming there is no disturbance)?\\n(5) For Table I, the error column is misleading. If I understand correctly, exceeding the lower bound is not an error (if I am wrong, please clarify it in the paper). And it is interesting that for target=0.3, the energy consumption is actually the lowest.\\n(6) Another simple way to impose the constraint would be to terminate the episode and give a large penalty; it would be interesting to see such a comparison.\", \"minor_points\": [\"is usually used for optimal value, but is used in the paper as a bound.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"value constrained model-free continuous control\", \"review\": \"This paper uses constrained Markov decision processes to solve a multi-objective problem that aims to find the correct trade-off between cost and return in continuous control. The main technique is Lagrangian relaxation, and the experiments focus on cart-pole and a locomotion task.\", \"comments\": \"1) How to solve the constrained problem (8) is unclear. It would be preferable to provide a detailed description or pseudocode for this step.\\n\\n2) In equation (8), lambda is a trade-off between cost and return. Optimization over lambda reduces burdensome hyperparameter selection, but a new hyperparameter beta is introduced. How do we choose a proper beta, and will the algorithm be sensitive to beta?\\n\\n3) The paper only conducts comparison experiments with fixed-alpha baselines. The topic is similar to safe reinforcement learning. Including a comparison with safe reinforcement learning algorithms would be more convincing.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes an approach for mitigating issues associated with high-frequency/amplitude control signals that may be obtained when one applies reinforcement learning algorithms to continuous control tasks. The approach taken by the paper is to solve a constrained optimization problem, where the constraint imposes a (potentially state-dependent) lower bound on the reward. This is done by using a Lagrangian relaxation that learns the parameters of a control policy that satisfies the desired constraints (and also learns the Lagrange multipliers). The presented approach is demonstrated on a cart-pole swing-up task as well as a quadruped locomotion task.\", \"strengths\": [\"The paper is generally clear and readable.\", \"The simulation results for the Minitaur quadruped robot are performed using a realistic model of the robot.\"], \"major_concern\": [\"My biggest concern is that the technical contributions of the paper are not clear at all. The motivation for the work (avoiding high amplitude/frequency control inputs) is certainly not new; this has always been a concern of control theorists and roboticists (e.g., when considering minimum-time optimal control problems, or control schemes such as sliding mode control). The idea of using a constrained formulation is not novel either (constrained MDPs have been thoroughly studied since Altman (1999)). The technical approach of using a Lagrangian relaxation is the standard way one goes about handling constrained optimization problems, and thus I do not see any novelty there either. Overall, the paper does not make a compelling case for the novelty of the problem or approach.\"], \"other_concerns\": [\"For the cart-pole task, the paper states that the reward is modified \\\"to exclude any cost objective\\\". Results are then presented for this modified reward showing that it results in high-frequency control signals (and that the proposed constrained approach avoids this). I don't think this is really a fair comparison; I would have liked to have seen results for the unmodified reward function.\", \"The claim made in the first line of the abstract (applying RL algorithms to continuous control problems often leads to bang-bang control) is very broad and should be watered down. This is the case only when one considers a poorly-designed cost function that doesn't take into account realistic factors such as actuator limits.\", \"In the last paragraph of Section 3.3, the paper proposes making the lower bound on the reward state-dependent. However, this can be tricky in practice since it requires having an estimate for Q_r(s,a) as a function of the state (in order to ensure that the state-dependent lower bound can indeed be satisfied).\"], \"typos\": [\"Pg. 5, Section 3.4: \\\"...this is would achieve...\\\"\", \"Pg. 6: ...thedse value of 90...\\\"\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HkeyZhC9F7 | Learning Heuristics for Automated Reasoning through Reinforcement Learning | [
"Gil Lederman",
"Markus N. Rabe",
"Edward A. Lee",
"Sanjit A. Seshia"
] | We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We focus on backtracking search algorithms for quantified Boolean logics, which already can solve formulas of impressive size - up to 100s of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For challenging problems, the heuristic learned through our approach reduces execution time by a factor of 10 compared to the existing handwritten heuristics. | [
"reinforcement learning",
"deep learning",
"logics",
"formal methods",
"automated reasoning",
"backtracking search",
"satisfiability",
"quantified Boolean formulas"
] | https://openreview.net/pdf?id=HkeyZhC9F7 | https://openreview.net/forum?id=HkeyZhC9F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gcvUSlxV",
"BJlrwT-cR7",
"Hke2JfcoT7",
"ryxELVAF6m",
"Skx240VFpQ",
"r1ggEpNKaX",
"Bye14hEKpQ",
"SklWGYIq2X",
"BklL0lZq3X",
"ryxJA8gF2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544734305711,
1543277917381,
1542328803989,
1542214731879,
1542176307550,
1542176039735,
1542175783456,
1541200137333,
1541177549687,
1541109447297
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1133/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1133/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1133/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1133/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1133/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1133/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1133/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1133/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1133/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1133/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes the use of reinforcement learning to learn heuristics in a backtracking search algorithm for quantified Boolean formulas, using a neural network to learn a suitable representation of literals and clauses to predict actions. The writing and the description of the method and results are generally clear. The main novelty lies in finding a good architecture/representation of the input, and demonstrating the use of RL in a new domain. While there is no theoretical justification for why this heuristic should work better than existing ones, the experimental results look convincing, although they are somewhat limited and the improvements are dataset-dependent. In practice, the overhead of the proposed method could be an issue. There was some disagreement among the reviewers as to whether the improvements and the results are significant enough for publication.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper\"}",
"{\"title\": \"Additional experiments\", \"comment\": \"As promised, we added additional experiments to the appendix. Please find them in Appendix E and Appendix F.\"}",
"{\"title\": \"Thanks for the interesting pointer!\", \"comment\": \"We were not aware of this work, and will discuss it in our related work section. There are several key differences compared to our work: Khalil et al. present an approach to learn to predict an existing heuristic called SB using SVMs, while we attempt to learn an entirely new heuristic using deep reinforcement learning. Further, they learn within a single run of the solver, while we learn from executions on a set of formulas. In some sense, the approaches are quite orthogonal, and not necessarily competing against each other. It is unclear to us if there is a meaningful way to compare the methods experimentally.\"}",
"{\"title\": \"Related Work\", \"comment\": \"The most relevant paper to the work you're proposing here is probably \\\"Learning to Branch in Mixed Integer Programming\\\", https://dl.acm.org/citation.cfm?id=3015920.\"}",
"{\"title\": \"Clarification about QBFEVAL and additional data sets\", \"comment\": \"Thank you for the detailed comments.\\n\\n>>> The authors note the difficulty of finding suitable benchmarks and restrict the set of instances\\n>>> they use for evaluation to formulae where the proposed method is likely to achieve improvements. \\n>>> This skews the evaluation in favor of the proposed method; in particular, the 90% improvement\\n>>> figure mentioned in the abstract is not representative of the general case. Indeed, on \\n>>> another set of instances the proposed method falls significantly short of the performance of\\n>>> a state-of-the-art heuristic that does not employ learning.\\n\\nOur claim is that training a model on several hundred formulas greatly improves the performance of the logic solver on formulas from the same distribution. In our paper, we only support this claim by experiments on the Reductions benchmark. But, in fact, we have confirmed the same results on several datasets of artificially synthesized formulas (encoding random bit-level and word-level circuits). These additional datasets can be found with the published code, and we will provide more details about them in the appendix.\\n\\nIt is natural to ask how a model trained on one distribution performs on a different dataset. Since QBFEVAL is an important data set in the formal methods community, we used it to test the transferability of the heuristics with only partial success. (Training a model directly on QBFEVAL does not seem to be possible at the moment, because of the small size of the dataset, leading to overfitting.)\\n\\nLastly, we want to point out that most other works on ML for formulas only consider sets of random formulas (in particular, formulas synthesized by the authors themselves). In comparison, the Reductions benchmark is a well-known data set from the literature and generated independently from our work. In this way, we believe that we avoid skewing the results in our favor and set a higher bar than related work.\\n\\n>>> A drawback of the paper is that there is no comparison to related work. I\\n>>> realize that this is difficult to achieve because other approaches are in\\n>>> related, but different areas and may be difficult to adapt for this case, but a\\n>>> general comparison to the improvements other approaches achieve would be\\n>>> helpful.\\n\\nWe would love to learn about (and compare to) related work, but we are not aware of any we could meaningfully compare to. Could you point us to any works you are aware of?\\n\\nCompared to the typical improvements through progress in hand-written heuristics, the 1000x improvement in the number of steps needed is enormous.\"}",
"{\"title\": \"Some remarks about the concerns raised\", \"comment\": \"We thank the reviewer for the detailed feedback.\\n\\n>>> No theoretical justification about why this heuristic should work better than the existing ones.\\n\\nThis is a very interesting question, but surprisingly hard to answer. Even for the simpler question of why CDCL for SAT solvers is so unreasonably effective for a wide range of applications, there is no concrete theoretical explanation - despite two decades of research! When there is no satisfactory theoretical explanation, we suggest that it is better to learn the heuristics based on the data itself.\\n\\n>>> Doesn't solve QBF formulas in general, but only 2QBF.\\n\\nOur approach could be easily applied to general QBF as well. The limitation to 2QBF is also due to the underlying tool. But keep in mind that most applications of QBF, e.g. in verification and program synthesis, can be encoded with just one quantifier alternation, so we believe that we captured the most interesting cases of QBF.\\n\\n>>> It is not clear whether the range of formulas that can be solved using this approach is \\n>>> greater than that of existing solvers.\\n\\nOur experiments demonstrate that we can solve significantly more formulas when given enough formulas from a single source (=distribution). We do not claim that the learned models generalize to formulas far away from that distribution. The question whether it is possible to learn models that apply to a wide \\u201crange of formulas\\u201d is indeed an open one.\\n\\n>>> Having a substantial amount of formulas that produce incomplete episodes, as it might be\\n>>> the case in real world scenarios, hinders learning, so the dataset has to be manually\\n>>> adjusted.\", \"we_believe_that_this_is_the_inherent_challenge_of_problem_solving\": \"how can we learn to solve problems that we have never solved? The assumption underlying this paper is that learning how to solve simpler problems faster helps us to solve harder problems, too. Our experiments demonstrate that this is indeed possible for problem sets containing many related formulas of different hardness levels.\"}",
"{\"title\": \"We believe the work contains insights for the ML community, too.\", \"comment\": \"We thank the reviewer for the insightful comments.\\n\\n>>> [...] the novelty from a ML and RL point of view remains limited [...]\", \"we_see_contributions_to_two_lines_of_work_published_in_iclr_and_related_conferences\": \"The first concerns the representation of formulas to facilitate learning [1, 2, 3], and the second concerns leveraging reinforcement learning in combinatorial search algorithms [5, 6].\\n\\nCompared to [1, 2, 3], we show how to address the problem of scale. Previous works suggested tree-encoders [2], possible worlds [1], and top-down tree encoders [3]. These approaches seem to be limited to formulas with tens of variables, which would be considered tiny in the verification/formal methods community. To scale up to realistic formulas, orders of magnitude larger than what has been considered before, we suggest exploiting the graph representation of formulas in conjunctive normal form and applying GNNs. While GNNs generally scale well, this is also a conceptual shift: Previous works needed to learn a fixed embedding for variables \\u201ca\\u201d, \\u201cb\\u201d, \\u201cc\\u201d, etc., even though variable \\u201ca\\u201d has no shared meaning across different formulas. GNNs enable us to embed variables based only on the context of their occurrences in the current formula.\\n\\nCompared to [5, 6], our work represents a big step towards practicality. While interesting from a learning perspective, their methods do not come even close to the state of the art in specialized algorithms. We demonstrate that the tight integration of deep learning and combinatorial search algorithms can actually improve the performance of complex and (relatively) large-scale applications of combinatorial search. The main challenge here was the significant performance cost of neural networks. Our work shows that this cost can be outweighed by the dramatically better decisions neural networks suggest (1000x fewer steps needed to solve hard formulas).\\n\\nWe acknowledge that we need to state these points more clearly, and will improve the paper accordingly.\\n\\n[1] \\\"Can Neural Networks Understand Logical Entailment?\\\", in ICLR 2018\\n[2] \\\"Learning Continuous Semantic Representations of Symbolic Expressions\\\", in ICML 2017\\n[3] \\\"Top-down neural model for formulae\\\", under submission to ICLR 2019\\n[4] \\\"Learning a SAT Solver from Single-Bit Supervision\\\", under submission to ICLR 2019\\n[5] \\\"Learning Combinatorial Optimization Algorithms over Graphs\\\", in NIPS 2017\\n[6] \\\"Neural Combinatorial Optimization with Reinforcement Learning\\\", in ICLR 2017\"}",
"{\"title\": \"Interesting application of reinforcement learning and GNN over a specific decision problem\", \"review\": \"The paper proposes to use reinforcement learning as a method for implementing heuristics of a backtracking search algorithm for Boolean logic. While I'm not familiar with this specific topic, Section 2 is didactic and clear. The challenges of the tackled problem are clearly explained in this section.\\n\\nThe Graph neural network architecture proposed in Section 4 to compute literals of the formula is an original idea. The experimental results look convincing and suggest this approach should be more deeply investigated.\\n\\nMy main concern is that the novelty from a machine learning and reinforcement learning point of view remains limited, while the application seems original and promising. So I will not be strongly opposed to the publication of this work in the ICLR venue, while I remain unsure it is the best one.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"needs some improvement\", \"review\": \"The aim of this paper is to learn a heuristic for a backtracking search algorithm utilizing reinforcement learning. The proposed model makes use of Graph Neural Networks to produce literal and clause embeddings, and uses them to predict the quality of each literal through an NN, which in turn decides the probability of each action.\\n\\nPositives\\nA new approach to employing machine learning techniques for automated reasoning problems. Works with any 2QBF solver.\\nThe learned heuristic seems to perform better than the state of the art in the presented experiments.\\n\\nNegatives\\nNo theoretical justification about why this heuristic should work better than the existing ones.\\nDoesn't solve QBF formulas in general, but only 2QBF.\\nIt is not clear whether the range of formulas that can be solved using this approach is greater than that of existing solvers.\\nHaving a substantial amount of formulas that produce incomplete episodes, as it might be the case in real world scenarios, hinders learning, so the dataset has to be manually adjusted.\\n\\nConclusion\\nThe proposed framework is an interesting addition to existing techniques in the field and the idea is suitable for further exploration and refinement. The experimental results are promising, so the direction of the work is worth pursuing. However, some of the foundations and the overall nature of the work need some improvement and maturity.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting application of deep learning with interesting results.\", \"review\": \"The paper proposes an approach to automatically learning variable selection\\nheuristics for QBF using deep learning. The evaluation presented by the authors\\nshows the promise of the method and demonstrates significant performance\\nimprovements over a variable selection heuristic that does not use machine\\nlearning.\\n\\nIn practice, the overhead of the proposed method is likely to be a major\\nobstacle in its adoption. The authors note the difficulty of finding suitable\\nbenchmarks and restrict the set of instances they use for evaluation to formulae\\nwhere the proposed method is likely to achieve improvements. This skews the\\nevaluation in favor of the proposed method; in particular, the 90% improvement\\nfigure mentioned in the abstract is not representative of the general case.\\nIndeed, on another set of instances the proposed method falls significantly\\nshort of the performance of a state-of-the-art heuristic that does not employ\\nlearning.\\n\\nA drawback of the paper is that there is no comparison to related work. I\\nrealize that this is difficult to achieve because other approaches are in\\nrelated, but different areas and may be difficult to adapt for this case, but a\\ngeneral comparison to the improvements other approaches achieve would be\\nhelpful.\\n\\nNevertheless, the work is interesting and presents a new angle on using machine\\nlearning to speed up combinatorial problem solving. While several issues hinder\\npractical adoption, this is likely to lead to interesting follow-up work that\\nwill improve problem solving in practice.\\n\\nThe description of the method (Section 4.1) is short and not detailed enough to\\nreproduce the approach the authors are proposing. However, the code is\\navailable.\\n\\nIn summary, I feel that the paper can be accepted.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1l1b205KX | Unsupervised Disentangling Structure and Appearance | [
"Wayne Wu",
"Kaidi Cao",
"Cheng Li",
"Chen Qian",
"Chen Change Loy"
] | It is challenging to disentangle an object into two orthogonal spaces of structure and appearance since each can influence the visual observation in a different and unpredictable way. It is rare for one to have access to a large amount of data to help separate the influences. In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner. We address this problem in a two-branch Variational Autoencoder framework. For the structure branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge. This encourages the branch to distill geometry information. Another branch learns the complementary appearance information. The two branches form an effective framework that can disentangle an object's structure-appearance representation without any human annotation. We evaluate our approach on four image datasets, on which we demonstrate the superior disentanglement and visual analogy quality both in synthesis and real-world data. We are able to generate photo-realistic images with 256*256 resolution that are clearly disentangled in structure and appearance. | [
"disentangled representations",
"VAE",
"generative models",
"unsupervised learning"
] | https://openreview.net/pdf?id=B1l1b205KX | https://openreview.net/forum?id=B1l1b205KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgdmSBxlN",
"BJeGI4xR37",
"Bylmo-s237",
"BkgHZE592X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544733983746,
1541436489835,
1541349787474,
1541215228876
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1132/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1132/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1132/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1132/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"With an average review score of 4.67 and a short review for the one positive review, it is just not possible to accept the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The aggregate assessment of reviewers is just not strong enough to warrant acceptance\"}",
"{\"title\": \"Interesting work\", \"review\": \"This paper proposes an unsupervised method to disentangle the latent code of a VAE. Overall, it is novel and well written. The experiments show good performance of the proposed method.\", \"i_have_some_concerns_as_follows\": \"1. In Eq.(1), y is assumed to follow a Gaussian distribution. Is it possible that y follows a multinomial distribution? Then, this model can be used for clustering.\\n\\n2. In section 2.3, the concatenation between z and y is used to learn a complement of y. Why does the concatenation encourage learning the complement of y? More explanations are needed. Additionally, some experiments are needed to verify this claim.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A conditional deep generative model that lacks in comparison with the state of the art and some experimental results are not convincing\", \"review\": \"In this paper, a conditional deep generative model is proposed for disentangling structure (more precisely shape) and appearance. The architecture of the proposed system is very similar to that in [A]; however, in this paper different applications are considered. The paper is relatively well-written and a number of experiments are presented. However, they are not that convincing.\\n\\nI have two main concerns regarding this paper.\\n\\n1)\\tThe authors have not taken into account recently proposed deep generative models for disentangling shape and appearance along other modes of visual variations. A non-exhaustive list is as follows:\\n\\n*GAGAN: Geometry-Aware Generative Adversarial Networks\\n*Geometry-Contrastive GAN for Facial Expression Transfer\\n*Cross-View Image Synthesis using Conditional GANs\\n*Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance\\n*Neural Face Editing with Intrinsic Image Disentangling\\n\\n\\nThe authors should discuss how the proposed method is different from the above-mentioned ones and compare the performance of the proposed model against that obtained by GAGAN and Deforming Autoencoders, which are very relevant to the proposed model.\\n\\n2)\\tSome of the experimental results are not convincing. For example, in Fig. 6 it seems to me that all the chairs produced by the proposed method are identical. In the same figure, Jakab\\u2019s method seems to produce more meaningful results than the proposed method, which appears to implement texture style transfer, rather than shape transfer.\\n\\nConsidering all the above, I believe the paper needs substantial improvement prior to being considered for publication.\\n\\n\\nReference\\n\\n[A] Tomas Jakab, Ankush Gupta, Hakan Bilen, and Andrea Vedaldi. 
Conditional image generation for\\nlearning the structure of visual objects. NIPS, 2018.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Paper does not justify its modifications to the VAE formulation\", \"review\": \"Summary: The authors present an autoencoding strategy for disentangling structure and appearance in image data. They achieve this using a learned, spatial prior in the VAE framework.\", \"writing\": \"The paper contains grammatical errors and I found Section 2 (2.1 and 2.2, specifically) to be a bit confusing as many ideas were described with words when they could have been outlined more precisely mathematically.\", \"major_comments\": \"The paper takes ideas from Zhang et al and Jakab et al and puts them in a VAE context. The paper, however, constructs the ELBO in such a way that distances it from many key ideas of the VAE. Particularly, the paper decomposes the ELBO into three terms and proceeds to define these terms as they wish. In this way, the novel contributions of this paper are left unclear and many decisions left unjustified.\\n\\n- Incorporating landmark/spatial information into autoencoders is not a new idea. Zhang et al and Jakab et al both train autoencoders with disentangled structure, and Finn, 2016 [1] uses a similar spatial landmark strategy when learning representations. Incorporating structural information as a prior distribution (as in this paper) is an interesting idea. However, it is not clear that the defined prior log p(y) is a proper density (integrates to 1). Given the importance of a prior distribution in VAEs, this choice should be precisely justified (maybe relate it to beta-VAE or just use a properly normalizable distribution).\\n\\n- The paper chooses variational distributions in such a way that removes the entropy term from the KL divergences. Specifically, both q(z | x, y) and p(z | y) have fixed, identity covariance, resulting in the KL divergences equating to L_2 distance. An important part of the VAE is the explicit incorporation of uncertainty by means of learned variances. 
Although this can sometimes be problematic (see beta-VAE), neglecting to include these variances at all removes an important aspect of the VAE.\\n\\n- The paper includes a likelihood model which also throws away key ideas from the VAE. Although some neural likelihood models as in the VAE have issues with blurriness, the reasons are still not completely understood. However, there are well explored alternatives to what the authors propose. For example, many image-based VAEs use Bernoulli likelihoods [2, 3] or autoregressive likelihoods [4]. Autoregressive models, especially, can produce sharp images. The authors introduce an unnormalizable likelihood which combines L1 loss with a function that incorporates L1 distance in VGG space. Using VGG in the likelihood model is unjustified and seems unnecessarily complicated, given the existence of powerful decoders that already exist in VAE literature.\\n\\n- The authors incorporate various connections and concatenations between neural networks and distributions that further complicate the variational lower bound. For example, p(z | y) is concatenated to q(z | x, y) in addition to acting as a prior on q(z | x, y). This introduces a dependency between the likelihood model and the variational posterior which normally does not exist. Furthermore, skip connections are introduced between E_\\\\theta and D_\\\\theta, which complicate the ELBO further. The authors should explicitly write out the loss function they are optimizing at this point or describe how they are modifying the ELBO to justify these choices.\\n\\nOverall, I find it difficult to call this a variational autoencoder given the liberal modifications to the evidence lower-bound. 
However, even if I was to interpret this work as an autoencoder with a custom loss function, this model ends up very similar to that in Zhang et al with the main differences being the inclusion of an equivariance constraint in Zhang that is not present in this paper and that Zhang et al use a feature map that\\u2019s multiplied by landmarks to incorporate appearance information whereas this paper uses the z representation as appearance information. \\n\\nThe qualitative results of this paper, although good looking, are very similar to qualitative results in Zhang et al and Jakab et al. A quick comment: in Figure 6, you should use the same celebrity faces when comparing Jakab against your own work. The quantitative results only compare to the same model trained without the KL loss term; this, in my mind, is more of a sanity check than a fair baseline. The authors should be comparing against alternate strategies that incorporate spatial information, such as Zhang et al and Jakab et al.\\n\\n[1] Finn, Chelsea, et al. \\\"Deep spatial autoencoders for visuomotor learning.\\\" 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016.\\n[2] Chen, Xi, et al. \\\"Variational lossy autoencoder.\\\" arXiv preprint arXiv:1611.02731 (2016).\\n[3] http://ruishu.io/2018/03/19/bernoulli-vae/\\n[4] van den Oord, Aaron, et al. \\\"Conditional image generation with pixelcnn decoders.\\\" Advances in Neural Information Processing Systems. 2016.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkxJ-309FQ | Hallucinations in Neural Machine Translation | [
"Katherine Lee",
"Orhan Firat",
"Ashish Agarwal",
"Clara Fannjiang",
"David Sussillo"
] | Neural machine translation (NMT) systems have reached state of the art performance in translating text and are in wide deployment. Yet little is understood about how these systems function or break. Here we show that NMT systems are susceptible to producing highly pathological translations that are completely untethered from the source material, which we term hallucinations. Such pathological translations are problematic because they deeply disturb user trust and are easy to find with a simple search. We describe a method to generate hallucinations and show that many common variations of the NMT architecture are susceptible to them. We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems and regularization techniques, showing that data augmentation significantly reduces hallucination frequency. Finally, we analyze networks that produce hallucinations and show that there are signatures in the attention matrix as well as in the hidden states of the decoder. | [
"nmt",
"translate",
"dynamics",
"rnn"
] | https://openreview.net/pdf?id=SkxJ-309FQ | https://openreview.net/forum?id=SkxJ-309FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gYxYIzgN",
"HJe5CxUsAX",
"HJg0Vyj50m",
"ryxxt8I9RX",
"rkg3VT4q0Q",
"r1e3aqVqRQ",
"rJedhZhYA7",
"H1lf5bhKA7",
"ryeoD08d0X",
"rkxlNRL_C7",
"rygelC8dR7",
"Byg0UoU_RX",
"SyeoGAwkAX",
"BJxe2Ed9nQ",
"HkgrJLDq3m",
"rJgDGTh_2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544870128948,
1543360722084,
1543315253925,
1543296631538,
1543290163737,
1543289540515,
1543254447939,
1543254410377,
1543167587365,
1543167527943,
1543167464250,
1543166805535,
1542581779366,
1541207208316,
1541203421129,
1541094671136
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1131/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1131/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1131/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1131/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1131/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1131/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": [\"Strengths\", \"Hallucinations are a problem for seq2seq models, especially when trained on small datasets\", \"Weaknesses\", \"Hallucinations are known to exist; the analyses / observations are not very novel\", \"The considered space of hallucination sources (i.e. added noise) is fairly limited; it is not clear that these are the most natural sources of hallucination, and not clear if the methods defined to combat these types would generalize to other types. E.g., I'd rather see hallucinations appearing when running NMT on some natural (albeit noisy) corpus, rather than defining the noise model manually.\", \"The proposed approach is not particularly interesting, and may not be general. Alternative techniques (e.g., modeling coverage) have been proposed in the past.\", \"A wider variety of language pairs, amounts of data, etc. is needed to validate the methods. This is an empirical paper; I would expect higher quality of evaluation.\", \"Two reviewers argued that the baseline system is somewhat weak and the method is not very exciting.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"sufficiently solid but not particularly exciting\"}",
"{\"title\": \"Thank you for reading! We're happy to expand:\", \"comment\": \"Thank you for taking an interest in our work! We're happy to answer your questions.\\n\\n1. Before we embarked on this project, we had no idea how easy it was to generate so many hallucinations. This is an easy test for practitioners to perform on their models to compare the ease of generating hallucinations with different models. We expect practitioners to modify the algorithm to suit their needs. For example, they could change the criteria for what is a hallucination and/or perturb with every token to discover if there are tokens that more reliably cause hallucinations.\\n\\n2. We discovered that you don\\u2019t always need the reference translation! We found that the attention matrix and initial state of the decoder when producing a hallucination are significantly different from those of a normal translation. You can monitor your attention matrix and the norm of the initial state of the decoder to help identify hallucinations. However, we didn\\u2019t know, before doing this experiment, that these markers existed. We keep using the reference translations in algorithm 1 because it requires as much access to the model's training pipeline (for most models) as the attention matrix or initial state of the decoder and is simple to reason about. In models with other architectures, perhaps these markers will also change. \\n\\n3. Yes, please see figure 5 and the description on page 8 of the process we used to identify warped attention matrices.\\n\\n4. We see no reason why not (though the work is yet to be done). Using a word based MT system would be similar in practice. The words would be indexed and a sentence converted to a sequence of indices. Then, perturbing with tokens would mean adding an index to that sequence and performing the translation.\"}",
"{\"title\": \"Thanks for your quick response\", \"comment\": \"Thanks for your quick response.\\n\\nI'm surprised that the paper is still comparing with a two-years-old model and calls it SOTA, given that the NMT area has achieved remarkable progress in the last two years. \\n\\nActually, it is not difficult to get 38+ BLEU using Transformer_big, without ensemble/back-translation.\\n\\nAs an analysis paper, it is better to investigate SOTA models.\"}",
"{\"title\": \"Yes, we will.\", \"comment\": \"The transformer_big models are too large to feasibly run many experiments on. However, at your request, we are currently finishing a batch of transformer_base models (each takes about a week). We will include the results of these models in our manuscript.\"}",
"{\"title\": \"Transformer\", \"comment\": \"I suggest to use Transformer_big configuration, which should be much much better than the tiny config.\"}",
"{\"title\": \"the goal and the results\", \"comment\": \"1. I think a better goal is to quantify the phenomenon of hallucinations of SOTA NMT models or systems, instead of a very simple (and maybe out-of-date) model. Even for the updated number, 25.6, it is still far less than the best results (40.2) two years ago.\\n\\n2. What are the configurations and the BLEU numbers of the Transformer model?\"}",
"{\"title\": \"pt 2.\", \"comment\": \"- Why is your baseline so weak?\\n\\nWe have chosen to use small models for the sake of large scale analysis - note each data point comes from 10 separately trained NMT models. Additionally, we are interested in the recurrence of machine translation models and use a dynamical systems approach in one section of the paper. Jacobian computations are notoriously computationally expensive, and it would be technically infeasible to compute Jacobians on larger models (we would run out of memory). Even still, a single Jacobian computation takes around 20 minutes on CPU with Autograd. To compute the Jacobian for a larger model for 2999 sentences, even after parallelizing, would make scientific exploration infeasible. By keeping our models small, we can test over a thousand models and explore more hypotheses. That being said, for the size of our models, our baseline BLEU score (25.6 with beam search) is reasonable, as models above 30 BLEU very quickly become complicated due to the use of hard-to-analyze techniques such as model ensembling, etc. \\n\\nWe aren\\u2019t sure what you mean by \\\"present results on the clean dataset with your baseline.\\\" Our baselines are using the original clean WMT En-De train, dev and test sets. \\n\\nDifferent NMT systems have different values. In production, one might value stability of translations and knowing when the model is hallucinating and balance that tradeoff with accuracy. We study both how to stabilize models and detect, via the attention matrix, when the model is hallucinating. \\n\\n- Your algorithm is very brute-forcey.\", \"yes_it_is_simple\": \"-). Our perturbation pipeline is favorable because of its simplicity and ease of adaptability. Yes, our perturbation pipeline could alter the meaning of sentences. However, one would not expect a completely different translation by adding a word, such as \\u2018and\\u2019, and this is precisely what we document. 
Previous work shows realistic perturbations (like typos; e.g., https://arxiv.org/abs/1711.02173) that do not semantically alter the meaning of sentences, but the example perturbed translations given in those texts are not as drastically different as we show. Further, we have provided examples of perturbations that do not alter semantic meaning, but still result in hallucinations (for example: all punctuation added to the beginning or end of a sentence, or adding \\u201cund\\u201d (German for \\u201cand\\u201d) as shown above). We aim to provide a framework and categorize a phenomenon that can help improve robustness of translation systems through identifying and understanding where the model slips up.\\n\\nAs you may have different values for the model or system you\\u2019re building, we welcome you to try other types of perturbations (as you seem to have) and even modify the criteria for what is a hallucination based on your own understanding of your models (shift the threshold, include perplexity, change the weights of the adjusted BLEU score). We provide a simple setup, and make a case for the community to explore stability, hallucinations, attention, and the decoder\\u2019s dynamics, as you have already begun to do.\"}",
"{\"title\": \"Thank you for your feedback. pt 1.\", \"comment\": \"Thank you for your interest and feedback. With this paper, we introduce and document a novel phenomenon. We hope the community will do as you have done and explore stability, hallucinations, and the internals of their models.\\n\\nRespectfully, we strongly reject the implication that our methodology is flawed and are confident in our findings. There may be any number of reasons why your results differ from ours, from minor bugs to simple hyperparameter differences. We find the phenomenon of hallucinations to be robust over all our experiments, which include over a thousand models with many hyperparameter settings, random seeds, and architectural variants.\\n\\nBelow, we answer each question:\\n\\n- Why did you use BPE tokens?\\n\\nWe trained all our models with BPE/WPM because it is the common standard in NMT research and production [Google NMT (Wu et al. 2016) uses a word-piece model; Transformer (Vaswani et al. 2017) and its derivatives for WMT-17, 18 use byte-pair encoding (Bojar et al. 2017-18)]. We chose to also use BPE for perturbations to stay consistent with the model (we don\\u2019t re-tokenize the sentence after perturbing it), which also allows us to test a mixture of word and character perturbations. That being said, our methodology is much closer to perturbing models with full words, as the majority of tokens we perturb with are either full words, punctuation or single characters. Of the tokens we chose as perturbing tokens, 75% of all common tokens, and 37% of all rare tokens are full words. The vast majority of rare tokens (~80%) are single Chinese, Korean, or Arabic characters. In our text, we give examples of realistic perturbations with both full words, for instance inserting \\u201cund,\\u201d the German word for \\u201cand\\u201d (taken from figure 5, the attention matrix), and punctuation. 
Here are two examples:\", \"original_input\": \"Gauselmann w\\u00fcnscht sich , dass die Mitgliedschaft im Schachclub und auch freundschaftliche Kontakt zum Tennisclub \\\" Rot-Wei\\u00df \\\" als Ausdruck seiner Verbundenheit mit der Kurstadt gesehen wird .\", \"original_translation\": \"Gauselmann wants to see that membership in the chess club and also friendly contact with the tennis club \\\" Rot-Wei\\u00df \\\" is seen as an expression of his commitment to the city city .\", \"reference\": \"Gauselmann wants his membership of the chess club as well as his friendly contact with the \\\" Red-white \\\" tennis club to be seen as an expression of his ties with the spa town .\", \"perturbed_input\": \". Gauselmann w\\u00fcnscht sich , dass die Mitgliedschaft im Schachclub und auch freundschaftliche Kontakt zum Tennisclub \\\" Rot-Wei\\u00df \\\" als Ausdruck seiner Verbundenheit mit der Kurstadt gesehen wird .\", \"translated_perturbed\": \"The Memory of the Science of the Science of the Science of the Science of the Science of the Town Square , the Cathedral , is a new and most popular place .\\n\\nWe give further examples in section 8.3 of the appendix. \\n\\n\\n- Using BPE tokens.\\n\\nWe append, prepend, replace, etc. as tokens appear in the vocabulary. The vocabulary includes words, subword tokens, and characters. \\\"und\\\", for example, appears as both \\\"und\\\" (word) and \\\"und@@\\\" (subword) in the vocabulary. We're sorry this was misleading. \\nFurther, since we do not re-tokenize after adding perturbations, the NMT model will always see \\u201cund@@ Guten morgen\\u201d and never \\u201c<UNK> morgen.\\u201d In this case, whether a token is a subword or a full word token should not be more informative of how likely a sentence is to hallucinate than the stability of that particular token.\"}",
"{\"title\": \"Thank you for your feedback.\", \"comment\": \"Thank you for your feedback! We're glad you find this exploration interesting.\\n\\nWe've given some thought to how hallucinations compare to adversarial examples. Like adversarial examples, hallucinations illustrate a form of instability in models which can be useful to understand why the model behaves a particular way and help propose ideas for improving stability and generalizability of models. One difference is that we aren't looking for worst case perturbations (or to perturb an input to a particular result), nor do we use gradient based methods. We show it is simple to find a perturbation that causes such a divergent hallucination. So we have similar motivations, but go about it in different ways.\"}",
"{\"title\": \"pt 2\", \"comment\": \"It is difficult to perform algorithm 1 exactly on translation systems like Google Translate. Our analysis requires knowing the vocabulary the model used during training, but production systems are typically trained on datasets that aren't publicly available. We invite researchers who train and serve production systems to test their systems with our methodology.\\n\\nTo explain further why we study a smaller neural network module in isolation, we first agree that production systems output better translations than research systems. Competition submissions also output better translations than research systems. However, our goal is to analyze and quantify a phenomenon we observed. Successful translation products deploy additional safeguards to reduce malformed outputs that are sometimes part of the model (as described above) and sometimes software, including overwriting and fixing outputs that have bad publicity or are generally malicious. For Google Translate in particular, many examples have been logged/blogged (e.g. https://motherboard.vice.com/en_us/article/j5npeg/why-is-google-translate-spitting-out-sinister-religious-prophecies and https://twitter.com/hashtag/neuralempty?src=hash; we cite the former in our paper). Competition submissions also employ auxiliary techniques on top of the base model which complicate training and decoding in ways that are not well understood. For example, why does back-translation help so much? What does it change in the trained model? Thus, to hope to study the phenomenon, we remove these additional techniques from the bare NMT model. Our paper quantifies this study and documents our attempts to reduce and understand it. We believe that today's NMT systems, at the core of translation products and competition submissions, are prone to hallucinations.\\n\\nStudying smaller models allowed us to tractably study many variants of our canonical model. 
For the size we investigated, the models used in our experiments achieve a competitive, average BLEU score of 25.6 (we previously reported the greedy BLEU score and not beam search, which the community typically reports). An RNN based NMT model (with 4x larger vocabulary and 4x bigger dimensionality compared to our models) is expected to reach the 28 BLEU ball-park as indicated here (https://github.com/tensorflow/nmt#wmt-german-english, on which our implementations are based) on the particular test set (newstest16) we\u2019ve used. Since the WMT challenge does not require a single model, all systems with a BLEU score of 30+ incorporate additional techniques on top of the bare NMT architecture. For instance, the 2016 WMT German-English winning system, with a 38.2 BLEU score, uses back-translation (a data augmentation technique for MT), model ensembles, and rescores with massive Language Models on top of large Neural Networks. These additional techniques would have made studying hallucinations overly complex and mask attempts to tease out root causes of this phenomenon.\\n\\nSmaller models also allowed us to explore a dynamical systems perspective. We were unable to compute the Jacobian of the hidden states of the decoder on a larger model because it simply could not fit in memory. At this size, we can feasibly compute the Jacobian, dh(t)/dh(s), but it still takes around 20 minutes per Jacobian we wish to compute. Even after parallelization, computing Jacobians for at least 2999 sentences pushes us to the edge of reasonable scientific exploration. With a model larger in any dimension, we would not have been able to do this analysis.\\n\\nYour suggestion to study coverage is interesting. While the coverage method proposed by Tu et al. ACL'16 is nontrivial to incorporate into our systems and to test, we did a separate coverage test by providing a coverage penalty during beam search decoding (as used in Google NMT) and found that they hallucinated on average 49.4%. 
For reference, the canonical model decoded with beam search hallucinates on average 48.2% and greedy decoding hallucinates on average 73.3%. It appears that adding coverage did not decrease hallucinations. However, adding coverage to beam search did impact the BLEU score, lowering it from 25.6 to 22.3.\\n\\nThank you for giving us your feedback. We hope you will consider our goals and motivations while evaluating our work.\"}",
"{\"title\": \"Thank you for your feedback. We've run additional experiments to address your questions. pt 1.\", \"comment\": [\"Thank you for your feedback. As per your suggestions, we added the following additional models and experiments, resulting in the following changes.\", \"The BLEU scores we reported in the paper are with greedy decoding. Since the NMT community frequently reports BLEU with beam search, we have updated our paper to reflect this. Our canonical model achieves a competitive BLEU score of 25.6 on newstest16 (https://github.com/tensorflow/nmt#wmt-german-english). We now report this in the paper.\", \"We added a Transformer model to our results.\", \"We perturbed the Transformer model to hallucinate and found that it hallucinates on average (over ten random seeds) 16.6% of the time (there exists a token such that 16.6% of source sentences can be made to hallucinate). We expand on and give a discussion of the Transformer model we used in the paper.\", \"You are right that coverage would be an interesting model variant to look at. We ran a coverage study and found that coverage with beam search hallucinates on average 49.4% of the time, whereas beam search hallucinates 48.2% of the time and greedy decoding hallucinates 73.3% of the time. We expand more on why we chose this version of coverage below.\", \"Finally, we correlated BLEU score to perturbation percentages and did not see a decrease in perturbation percentage as BLEU score increased. We have added an additional figure to show this. The data shows that there is a correlation coefficient of 0.33 between BLEU score and hallucination percentage. This correlation should be interpreted cautiously because we haven\\u2019t exhaustively explored the full space of models.\", \"Our paper is an analysis paper. Our goals are to quantify the phenomenon of hallucinations and explore what this tells us about training and using NMT models. 
To make these goals technically feasible, we extract the core NMT neural network model from the layers of techniques and fail-safes in production systems and SoTA-level competition entries. To make these goals technically tractable, we scale down the bare model, which allows us to study many hyperparameters, architectural variants, and random seeds over the thousand+ models we studied. Results we find on small models are not irrelevant. Our canonical models train to an average BLEU score of 25.6, competitive for their size, and we show that an increase in BLEU score does not correlate to a decrease in the percentage of hallucinations. In the next comment, we'll expand further on our decisions.\"]}",
"{\"title\": \"Thank you for your feedback! We've made some clarifications.\", \"comment\": \"Thank you! At your request, we have updated figure 4 to make it more clear what each part represents. We've also added to the caption to explain what the differences between the two attention matrices are. Below, we've expanded more on the questions you've raised.\\n\\n1. We chose subword tokens (segmented with byte pair encoding) from our source language (German) vocabulary so we never have a noisy word that\\u2019s unseen in the training set. We\\u2019ve described how we develop our source, subword vocabulary in section 3 at the bottom of page 3. We chose these tokens as representative of the distribution of tokens: The specific tokens we\\u2019ve chosen are based on one of four types of subword tokens: common, rare, mid-frequency, and punctuation tokens. We first sorted our vocabulary of subword tokens by frequency, then formed the following groups:\\n a. Common tokens: the 100 most common tokens\\n b. Rare tokens: the 100 least common tokens\\n c. Mid-frequency tokens: After removing common and rare tokens from our sorted vocabulary of subword tokens, we sample 100 random tokens.\\n d. Punctuation tokens: All punctuation marks that exist in the vocabulary.\\n (This selection process is described in the first paragraph of section 4.)\\n Since we use BPE encoding, which segments words into sub-word units depending on their frequencies (character level co-occurrences to be precise), unseen words are never treated as UNK tokens. If a word does not appear in the training set, the BPE algorithm will segment it into the sub-words or characters that appear in our final vocabulary instead of using the UNK token.\\n\\n2. 
Here is a further explanation of the difference in the upper right of figure 4 compared to the upper left.\\nThe attention matrix shows the attention weight applied to each input token in the source sentence (x-axis) as the model decodes and outputs the translated sentence (y-axis). On the upper left, we show the attention matrix of an unperturbed translation. We see weight is applied to most of the input source tokens. On the upper right, we show the attention matrix of the same source sentence, but with a perturbation at the beginning (\\u2018und\\u2019) that causes the translation to hallucinate. We observe that weight is applied to very few input source tokens throughout translation, which is highly atypical and indicative of a broken translation.\"}",
"{\"comment\": \"Hi there,\\n\\nWhen attempting to reproduce your work, we identified a couple of issues that raise some important concerns. I've enumerated our concerns below:\\n\\n- Why did you use BPE tokens?\\n\\nThere are two issues with doing this. The first is that this does not make sense from a real world perspective: if you had used a character model and inserted characters, it resembles human typos, or a word model and inserted words, it resembles errors typically seen in spoken translation systems. However, the use of BPE is somewhere in between -- without any real world basis for this task.\\n\\nThe second, and much more concerning issue, is the way in which you used BPE tokens. It appears, from your examples, that you do not append a BPE token as a separate word (e.g., 'und' and not 'und@@') which would lead the model to treat the added word as part of the next/surrounding words. This is a big issue and significantly weakens the implications of your analysis. We've tried to reproduce your work with a word level model and have failed (we think it's most likely because of the second issue with how you inserted BPE tokens). \\n\\n- Why is your baseline so weak?\\n\\nYou need a stronger baseline. Furthermore, you need to present results on the clean dataset with your baseline and with your improved models. It's meaningless to have a model which has fewer hallucinations, if it means significantly degraded performance on the original dataset.\\n\\n- Your algorithm is very brute-forcey. \\n\\nThe fact that you define your hallucination accuracy to be whether one of 100 words (in various positions) caused a hallucination overstates the significance of the hallucination issue. It could very well be that whichever perturbation ends up causing a mis-translation has, in fact, significantly altered the meaning of the sentence.\", \"title\": \"Flaws in your approach?\"}"
"{\"title\": \"interesting analysis\", \"review\": [\"I think this paper conducts several interesting analyses about MT hallucinations and also proposes several different ways of reducing this effect. My questions are as follows:\", \"I am very curious about how you decide the chosen noisy words. I am also wondering what the difference is if you choose different noisy words. Another thing: if the noisy words are unseen in the training set, will they be treated as \\\"UNK\\\"?\", \"Can you highlight what is changed in the upper right side of fig.4? It would be great if you include gloss in the figure as well.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
"{\"title\": \"about the models\", \"review\": \"My major concern about the work is that the studied model is quite weak.\\n\\t\\\"All models we present are well trained with a BLEU score of at least 20.0 on the test set, a reasonable score for 2-layer models with 256 hidden units.\\\" \\n\\t\\\"We then used the WMT De-En 2016 test set (2,999 examples) to compute the hallucination percentage for each model.\\\"\\n\\nI checked the WMT official website http://matrix.statmt.org/matrix. It shows that the best result was a BLEU score of 40.2, which was obtained in 2016. The models used in this work are about 20.0, which are much less than the WMT results reported two years ago. Note that neural machine translation has made remarkable progress in the recent two years, not to mention that production systems like Google translator perform much better than research systems. Therefore, the discoveries reported in this work are questionable. I strongly suggest the authors conduct the studies based on the latest NMT architecture, i.e., Transformer.\\n\\t\\n\\tFurthermore, I checked the examples given in the introduction in Google translator and found no hallucination. So I'm not sure whether such hallucinations are really critical to today's NMT systems. I'd like to see a study on some production translation systems, e.g., applying Algo 1 to Google translator and checking its outputs, which can better motivate this work.\\n\\t\\n\\tFor the analysis in Section 6.1, if attention is the root cause of hallucinations, some existing methods should have already addressed this issue. Can you check whether the model trained by the following work still suffers from hallucinations?\\nModeling Coverage for Neural Machine Translation, ACL 16.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Adversarial examples in NMT\", \"review\": \"The authors introduce hallucinations in NMT and propose some algorithms to avoid them.\\nThe paper is clear (except section 6.2, which could have been more clearly described) and the work is original. \\nThe paper points out hallucination problems in NMT which look like adversarial examples in the paper \\\"Explaining and Harnessing Adversarial Examples\\\". So, the authors might want to compare the perturbed sources to the adversarial examples.\\nIf analysis is provided for each hallucination pattern, that would be better.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Ske1-209Y7 | Probabilistic Model-Based Dynamic Architecture Search | [
"Nozomu Yoshinari",
"Kento Uchida",
"Shota Saito",
"Shinichi Shirakawa",
"Youhei Akimoto"
] | The architecture search methods for convolutional neural networks (CNNs) have shown promising results. These methods require significant computational resources, as they repeat the neural network training many times to evaluate and search the architectures. Developing the computationally efficient architecture search method is an important research topic. In this paper, we assume that the structure parameters of CNNs are categorical variables, such as types and connectivities of layers, and they are regarded as the learnable parameters. Introducing the multivariate categorical distribution as the underlying distribution for the structure parameters, we formulate a differentiable loss for the training task, where the training of the weights and the optimization of the parameters of the distribution for the structure parameters are coupled. They are trained using the stochastic gradient descent, leading to the optimization of the structure parameters within a single training. We apply the proposed method to search the architecture for two computer vision tasks: image classification and inpainting. The experimental results show that the proposed architecture search method is fast and can achieve comparable performance to the existing methods. | [
"architecture search",
"stochastic natural gradient",
"convolutional neural networks"
] | https://openreview.net/pdf?id=Ske1-209Y7 | https://openreview.net/forum?id=Ske1-209Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJetruXgeV",
"ryeWhXFa0m",
"Bkl7FniF0m",
"rJxkP2jFCm",
"HkxSJ3oKC7",
"rylZqklgTX",
"ByxqzZ6jnX",
"rJlLArz9hX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544726593174,
1543504809404,
1543253114850,
1543253079514,
1543252957225,
1541566344849,
1541292305998,
1541182925534
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1130/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1130/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1130/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1130/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1130/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1130/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1130/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1130/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents an architecture search method which jointly optimises the architecture and its weights. As noted by reviewers, the method is very close to Shirakawa et al., with the main innovation being the use of categorical distributions to model the architecture. This is a minor innovation, and while the results are promising, they are not strong enough to justify acceptance based on the results alone.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"insufficient novelty\"}",
"{\"title\": \"Good work but it could be improved\", \"comment\": \"I'd like to first thank the authors for their reply. They have tried conscientiously to improve the paper.\\n\\nHowever, in its current form, I believe the paper still has two shortcomings, namely the similarity to the work of Shirakawa et al (2018) and its comparison to ENAS. I think with a bit of thinking, you may find that there are tuning problems better fitted to PDAS than ENAS (eg entire architecture instead of a recurrent module). I strongly encourage you to pursue this. It is good work, but if improved, it will be more convincing and have far more impact. I strongly encourage the authors to continue working on it and resubmit soon.\"}",
"{\"title\": \"Reply to reviewer 3\", \"comment\": \"We thank reviewers for reviewing this paper and appreciate their pointing out important aspects. We reply to the reviewer\\u2019s comments.\\n\\nOur method is an extension of Shirakawa et al. (2018), and the resulting simple algorithm has fewer hyper-parameters than other architecture search methods but can reach state-of-the-art performance with low computational cost. We derived the algorithm for categorical distribution and made it possible to apply the framework to architecture search spaces represented by categorical variables. To the best of our knowledge, the natural gradient of categorical distribution has not been introduced in the context of stochastic natural gradient methods.\\n\\nWe believe that the simplicity of architecture search methods is an important aspect. It is helpful to reduce the effort of the hyper-parameter tuning of the architecture search method itself. The experimental result implies that the architecture search method does not have to be complicated. \\n\\nOur experiments consist of CIFAR-10 classification and inpainting. The CIFAR-10 classification is a well-studied and relatively simple task, but inpainting is considered a complicated task. As the reviewer pointed out, applying PDAS to the ImageNet dataset or translation task is an important direction. Regarding the ImageNet dataset, as the work of Liu et al. (2018) showed, the architecture obtained in the CIFAR-10 can be transferred to the ImageNet dataset and works well. Since our architecture search space is almost the same as in Liu et al. (2018), the discovered architecture by PDAS can be transferred to ImageNet.\\n\\nHanxiao Liu, Karen Simonyan, and Yiming Yang, \\u201cDARTS: Differentiable Architecture Search,\\u201d arXiv preprint:1806.09055 (2018)\\n\\n\\nAlso, we did our best to fix the English typos in the revised paper to improve the readability.\"}",
"{\"title\": \"Reply to reviewer 2\", \"comment\": \"We thank reviewers for reviewing this paper and appreciate their pointing out important aspects. We reply to the reviewer\\u2019s comments.\\n\\n\\n(1) Trade-off between the number of distribution parameters and performance\\nIn general, we need a large number of iterations when we optimize a large number of distribution parameters. The previous study (Shirakawa et al. 2018) shows that the method could solve the problem of 3,000 bits in 1M iterations. We note that the number of bits corresponds to the total dimension of the one-hot vectors in our categorical distribution case. Considering the modern deep learning setting, the number of iterations for network training is up to about 1M. In summary, we can propose the guideline that PDAS should be applied to the problem with up to 3,000 dimensions.\\n\\n\\n(2) Comparison with E-CAE in inpainting task\\nAs the reviewer pointed out, the performance of E-CAE is superior to our method, while the running time (GPU time) of E-CAE is about 14 times slower than PDAS. We note that PDAS can be parallelized for the sample size and mini-batch size. It means that PDAS users can try another setting and tune several hyper-parameters by using this spare time. One of the important points for the final performance is the search space design, e.g., the design of modules and the encoding scheme of the architecture. Since we propose the architecture search method in this paper, we applied the proposed method to the same search space used in the previous studies. However, practical users can spend the spare time to try other search spaces.\\n\\nWe have added the explanation about this point to Section 4.\\n\\n\\n(3) Sample size\\nWe used just two samples to estimate the gradients. From the viewpoint of stochastic approximation theory, the small learning rate has a similar effect to the large sample size (Nishida et al. 2018). 
Therefore, even if the sample size equals two, we can optimize the distribution parameters properly with the appropriate small learning rate. In fact, we have observed that the learning rate decreases by the learning rate adaptation mechanism. In the CIFAR-10 case, the learning rate starts from 0.0845 and decreases to about 0.003. In the work of Nishida et al. (2018), the sample size adaptation mechanism for a fixed learning rate is also introduced. One possible future direction is to investigate the effect of the sample size adaptation instead of the learning rate adaptation.\\n\\nWe have added the explanation about this point to Section 2.\"}",
"{\"title\": \"Reply to reviewer 1\", \"comment\": \"We thank reviewers for reviewing this paper and appreciate their pointing out important aspects. We reply to the reviewer\\u2019s comments.\\n\\nFirst of all, we believe that it is worthwhile that the simple proposed method can achieve competitive performance with complicated architecture search methods. This implies that the previous methods are overcomplicated and simple probabilistic modeling is sufficient for architecture search. This aspect helps us to reduce the effort in tuning the hyper-parameters of the architecture search method itself. Also, the proposed method is the fastest among the existing architecture methods.\\n\\n\\n(1) Novelty and contribution\\nAs the reviewer pointed out, this paper is an extension of Shirakawa et al. (2018). The previous work only derived the algorithm for Bernoulli distribution and only applied it to simple tasks, e.g., layer selection and connection pruning. The contribution of this paper is as follows:\\n\\n(i) We derived the algorithm for categorical distribution and made it possible to apply the framework to architecture search spaces represented by categorical variables. To the best of our knowledge, the natural gradient of categorical distribution has not been introduced in the context of stochastic natural gradient methods.\\n\\n(ii) We showed PDAS, which has fewer hyper-parameters than ENAS, is fast and can reach state-of-the-art performance. The intrinsic hyper-parameters of PDAS are the sample size and the learning rate, but the learning rate can be adaptive.\\n\\nWe have added the explanation about the novelty and contribution to Section 1.\\n\\n\\n(2) Theoretical justifications\\nWe split the dataset into two datasets with the same number of items for updating the weight and distribution parameters. 
As each dataset is sampled from the original one, the losses of mini-batch samples from both datasets approximate the original loss of all of the data if the dataset size is sufficiently large. Therefore, even if we use split datasets, we can view that the losses in the equations of (2) and (3) approximate the original loss. Of course, we can formulate the update rules with different datasets by starting from different original objectives for the weight and distribution parameters as done in ENAS. We have added the sentence about this point to Section 2.\\n\\n\\n(3) Relation to ENAS\\nAs the reviewer pointed out, the main difference between PDAS and ENAS is the probabilistic model of architectures, i.e., PDAS uses the categorical distribution, and ENAS uses the LSTM network. We make this point clear by adding the sentence to Section 3.\\n\\nAs the search space of CIFAR-10 is the same in ENAS and PDAS, we assume that both methods can find quasi-optimum architectures. We would like to emphasize that PDAS can provide competitive performance with lower computational cost compared to ENAS though it is simple. The simple modeling of architecture makes it possible to derive the natural gradient in PDAS. As ENAS uses the LSTM network as the controller, it cannot derive the analytical natural gradient. The natural gradient method can update the model parameters to the steepest direction with respect to KL-divergence and has a big advantage in the optimization of a probabilistic model.\\n\\nThe intrinsic hyper-parameters of PDAS are the sample size and the learning rate. In addition, the learning rate can be adaptive by the method in Nishida et al. (2018). Meanwhile, ENAS has more hyper-parameters, e.g., the sample size, the learning rate, the regularization coefficient, the number of units in LSTM, and the architecture design of the controller. 
Moreover, ENAS tuned such hyper-parameters depending on the tasks, e.g., they used the regularization coefficients of 0.0001 for PTB and 0.1 for CIFAR-10. However, our method did not change the hyper-parameters for the image classification and inpainting tasks. Our method has such attractive properties because it is designed based on the stochastic natural gradient method, which is theoretically well studied, for example, in Akimoto and Ollivier (2013). We would like to emphasize that our method is handier than ENAS in practice because PDAS showed the decent performance without problem-specific hyper-parameter tuning as ENAS does.\\n\\nY. Akimoto and Y. Ollivier, \\u201cObjective Improvement in Information-Geometric Optimization,\\u201d Foundations of Genetic Algorithms XII (FOGA XII) (2013).\\n\\nWe have added the sentence about this point to Section 3.\\n\\n\\n(4) Minor issues\\nAs the reviewer pointed out, it is meaningless to report the \\\"best\\\" test error. We have removed it from Table 1.\\n\\nAlso, the proposed method is not really \\u201cparameterless.\\u201d Specifically, our method is parameter-adaptive or pseudo parameterless. We would like to say that our method can achieve decent performance with low computational cost without special hyper-parameter tuning. We have modified the explanation about this point in Section 1 and Section 5.\"}",
"{\"title\": \"Simple and effective method with limited novelty\", \"review\": \"The authors propose to formulate the neural network architecture as a collection of multivariate categorical distributions. They further derive sample-based gradient estimators for both the stochastic architecture and the deterministic parameters, which leads to a simple alternating algorithm for architecture search.\", \"pros\": [\"Intuitions and formulations are easy to comprehend.\", \"Simpler to implement than most prior methods.\", \"Appealing results (on CIFAR-10) as compared to the state-of-the-art.\"], \"cons\": [\"Limited technical novelty. The approach is a straightforward extension of Shirakawa et al. 2018. The main algorithm is essentially the same except minor differences in gradient derivations.\", \"Lack of theoretical justifications. It seems all the derivations at the beginning of Section 2 assume the architecture is optimized wrt the training set. However, the authors ended up splitting the dataset into two parts in the experiments and optimize the architecture wrt a separate validation set instead. This would invalidate all the previous derivations.\", \"The method is a degenerated version of ENAS. A closer look at eq (2) and (3) suggests the resulting iterative algorithm is almost the same as that in ENAS, where the weights are optimized using GD wrt the training set and the architecture is optimized using the log-derivative trick wrt the validation set. The only distinction are (i) using a degenerated controller/policy formulated as categorical distributions (ii) using the validation loss instead of the validation accuracy as the reward (according to eq. (3)). This is also empirically reflected in Table 1, which shows the proposed PDAS is similar to ENAS both in terms of efficiency and performance. 
The mathematical resemblance with ENAS is not necessarily bad, but the authors need to make it more explicit in the paper.\"], \"minor_issues\": [\"I'm not sure whether it's a good practice to report the \\\"best\\\" test error among multiple runs in Table 1.\", \"The method is not really \\\"parameterless\\\" as claimed in the introduction. For example, a suitable learning rate adaptation rule can be task-specific thus requires manual tuning/design. The method also consists of some additional hyperparameters like the \\\\lambda in the utility transform.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Proposes an architecture search technique, easy to read. Not confident about the baselines and how this is compared to the literature.\", \"review\": \"This paper proposes an architecture search technique in which the hyperparameters are modeled as a categorical distribution and learned jointly with the NN. The paper is written well. I am not an expert in the literature in this domain so will not be able to judge the paper regarding where it is located in the related work field.\", \"pros\": \"-This is a very important line of research direction that aims to make DNNs practical, easy to deploy and cost-effective for production pipelines. \\n-The categorical distribution for hyperparameters makes sense, and the derivation of the joint training seems an original idea. I liked the fact that you need to train the NN just twice (the second one only to fine tune with optimized parameters) \\n-Two very different problems (inpainting/encoding-decoding + CNN/classification) have been demonstrated.\\n-Existing experiments have been explained with enough detail except for minor points.\", \"cons\": \"-I speculate that there is a trade-off between the number of different parameters and whether one training is good enough to learn the architecture distribution. i.e., when you have huge networks and many parameters, how well does this method work? I think the authors could provide some experimental study suggesting to their users what a good use case of this algorithm is compared to other techniques in the literature. For what type of network and complexity does this search method work better than others?\\n-E-CAE for in-painting seems to be working significantly better than the proposed technique. Regarding results, I was expecting more insights into why this is the case. As above, for what type of problem should one pick which algorithm? If the 7-hour vs. 3-day GPU difference is negligible for a client, should one pick E-CAE? 
\\n-In theory, lambda samples are used (equations 2 and 3). However, the algorithm seems to be using just 2? If I didn't miss it, this is not discussed thoroughly. I speculate that this parameter is essential as the categorical distribution gets a bigger search space. Also, how do the reliability of the model and the final performance change with respect to this parameter?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lacking novelty, but cool results\", \"review\": \"This paper presents a joint optimization approach for the continuous weights and categorical structures of neural networks. The idea is the standard stochastic relaxation of introducing a parametrised distribution over the categorical parameters and marginalising it. The method then follows by alternating gradient descent on the weights and the parameters of the categorical distribution.\\n\\nThis exact approach was proposed in https://arxiv.org/abs/1801.07650 by Shirakawa et al. The only innovation in this work is that it uses categorical distributions with more than two values. This is a minor innovation.\\n\\nThe experiments are however interesting as the paper compares to the latest hyper-parameter optimization strategies for neural nets on simple tasks (eg CIFAR10) and gets comparable results. However, given that this is the biggest contribution of the paper, it would have been nice to see results in more complex tasks, eg imagenet or translation.\\n\\nI very much enjoyed the simplicity of the approach, but the question of innovation is making me wonder whether this paper makes the ICLR bar of acceptance. The paper is also hard to read because of many English typos.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJeRg205Fm | Neural Network Regression with Beta, Dirichlet, and Dirichlet-Multinomial Outputs | [
"Peter Sadowski",
"Pierre Baldi"
] | We propose a method for quantifying uncertainty in neural network regression models when the targets are real values on a $d$-dimensional simplex, such as probabilities. We show that each target can be modeled as a sample from a Dirichlet distribution, where the parameters of the Dirichlet are provided by the output of a neural network, and that the combined model can be trained using the gradient of the data likelihood. This approach provides interpretable predictions in the form of multidimensional distributions, rather than point estimates, from which one can obtain confidence intervals or quantify risk in decision making. Furthermore, we show that the same approach can be used to model targets in the form of empirical counts as samples from the Dirichlet-multinomial compound distribution. In experiments, we verify that our approach provides these benefits without harming the performance of the point estimate predictions on two diverse applications: (1) distilling deep convolutional networks trained on CIFAR-100, and (2) predicting the location of particle collisions in the XENON1T Dark Matter detector. | [
"regression",
"uncertainty",
"deep learning"
] | https://openreview.net/pdf?id=BJeRg205Fm | https://openreview.net/forum?id=BJeRg205Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skl_Aig2kN",
"H1leTlOyAQ",
"B1lBOWKbpX",
"rJgovF952X",
"SJx6p1W53m",
"SyehQhujcm"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544453071601,
1542582456138,
1541669228918,
1541216610867,
1541177285111,
1539177508173
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1129/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1129/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1129/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1129/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1129/AnonReviewer2"
],
[
"~Andrey_Malinin1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes to quantify the uncertainty of neural network models with Beta, Dirichlet and Dirichlet-Multinomial likelihoods. This paper is clearly written with a sound main idea. However, it is a common practice to model different types of data with different likelihoods, although the proposed distributions are not usually used for network outputs. All the reviewers therefore considered this paper to be of limited novelty. Reviewer 2 also had a concern about the mixed experimental results of the proposed method.\\n\\nReviewer 3 raised the concern that this paper did not model the uncertainty of prediction from the uncertainty of the model parameters. It is a common consideration in a Bayesian approach and I encourage the authors to discuss different sources of uncertainty in future revisions.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited novelty\"}",
"{\"title\": \"Relationship to variational autoencoder models\", \"comment\": \"Thank you for taking the time to review our paper.\\n\\nFor each input, the proposed model provides a distribution over the possible target values, not just a point estimate. A variational autoencoder is able to model more complex output distributions by replacing a fixed output distribution with a neural network, but it is fundamentally doing the same thing --- it is just another parameterized model trained to maximize the conditional likelihood of the targets. The models described in this paper are simpler and have practical advantages over variational autoencoder models: 1) training can be performed using the true gradients rather than approximations, 2) the form of the output (posterior) distribution is easy to interpret, and 3) it is easy to integrate the output distribution over the target space.\"}",
"{\"title\": \"Unoriginal and unfortunately unfocused contributions\", \"review\": \"The authors use neural networks to parameterize conditional probability distributions. This is well-known and has been applied in the literature since extensions to generalized linear models beyond their canonical link function in the 70s. Their transformation from real-valued network output to, say, strictly positive concentration parameters in a Dirichlet is worth studying; but they don't analyze this in any detail.\\n\\nIn addition, while lacking novelty may be fine in and of itself, the application of these ideas lacks a focused purpose. For example, the authors argue in the abstract this quantifies uncertainty. That's only true if you care about data noise, but the end-result is still point estimation for the parameters with uncalibrated probabilities. In the rest of the paper, they write primarily about simplex-valued outputs (i.e., soft one-hot labels).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"No novelty, conceptually problematic, and exceeding the page limit\", \"review\": \"The paper shows how to model the outputs of neural networks via likelihoods other than commonly used ones. The likelihoods discussed include Beta, Dirichlet and Dirichlet-Multinomial. The paper introduces the gradient computation of these likelihoods and tests them on several datasets.\\n\\nThis paper lacks novelty and has conceptual mistakes. It is a common practice, in Bayesian learning, to model different types of data with different likelihoods. The examples discussed in this paper are very basic and the gradient computation is standard. I do not see anything new. And the authors misunderstand that if you involve some likelihood in training, you can quantify the uncertainty. It is wrong. Uncertainty should be estimated in the posterior inference framework --- you need to integrate the posterior distribution of the (latent) random variables into the test likelihood to obtain the predictive distribution, from which you can identify the confidence levels. That\\u2019s why the auto-encoding variational Bayes framework is useful and popular. \\nWhat the paper is doing is still the point estimation. \\n\\nBesides, the paper exceeds the 8-page limit for the content.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Reasonable proposal and well-written paper, but no new insights and inconclusive empirical results\", \"review\": \"This paper considers parameterizing Dirichlet, Dirichlet-multinomial, and Beta distributions with the outputs of a neural network. They present the distributions and gradients, discuss appropriate activation functions for the output layer, and evaluate this approach on synthetic and real datasets with mixed results. Overall, I found the writing very clear, the main idea sound, and paper generally well executed, but I have serious concerns about the significance of the contributions that lead me to recommend rejection. It would be very useful to me if the authors would provide a concise list of what they consider the main contributions to be and why they are significant. As I see it, the paper does three main things:\\n\\n1. In section 2, the authors consider parameterizing Dirichlet, Dirichlet-multinomial, and Beta distributions with the outputs of a neural network (Section 2). As the authors note, parameterizing an exponential family distribution with the outputs of a neural network is not a novel contribution (e.g. Rudolph et al. (2016) and David Belanger's PhD thesis (2017)) and though I have never personally seen the Dirichlet, Dirichlet-multinomial, and Beta distributions used, the conceptual leap required is small. Most of section 2 is dedicated to writing down, simplifying, and deriving gradient equations for these three distributions. The simplifications and gradient derivations are well known and appear in many places (e.g. http://jonathan-huang.org/research/dirichlet/dirichlet.pdf, https://arxiv.org/pdf/1405.0099.pdf) and should not be considered contributions in the age of automatic differentiation (see Justin Domke's blog post on autodiff).\\n\\n2. In section 3, the authors consider the unique challenges of using the proposed networks. They propose targeted activation functions that will improve the stability of learning. 
I found this to be the most interesting portion of the paper and the most significant contribution. Unfortunately, it is short on details and empirical results are referenced that do not appear in the paper (i.e. the second to last paragraph on page 5). If I were to rewrite this paper, I would focus on answering the question \\\"What are the unique challenges of parameterizing Dirichlet, Dirichlet-multinomial, and Beta distributions with the outputs of a neural network and how can we address them?\\\", replacing section 2 with an expanded section 3.\\n\\n3. In section 4, the authors evaluate the proposed networks on a collection of synthetic and real tasks. In the end, the results are mixed, with the Dirichlet network performing best on the XENON1T task and the standard softmax network performing best on the CIFAR-100 task. In general, I don't mind mixed results and I appreciate that the authors included both sets of experiments; however, it is important that there is a convincing argument for why one would prefer the proposed solution even when accuracy is the same (e.g. it is faster, it is interpretable, etc.). The authors briefly argue that the proposed methods are superior because they provide uncertainty estimates for the output distributions. This may be true, but they only perform evaluations on tasks where the primary goal is accuracy. If the main benefit of the proposed networks is proper uncertainty quantification, then the evaluations (even if they are qualitative) should reflect that.\\n\\nIn summary, I do not think the models proposed in section 2 are sufficiently novel to justify publication alone which means that the authors need to either: (1) evaluate novel methods that are critical for use of these models or (2) present a convincing evaluation that strongly motivates the proposed model's use or that provides some novel insight into the model's behavior. I think that the authors are on their way to achieving (1), but do not achieve (2). 
I would suggest finding an application that requires uncertainty estimates for the distribution and centering the paper around that application.\", \"minor_comments\": [\"Figure 2 (right) should include a y-axis label (e.g. \\\"parameter value\\\").\", \"In Figure 3 (right), it is not obvious what the \\\"Sigmoid\\\" line corresponds to.\", \"It is not clear what the authors are trying to show in section 4.1. The EL activation function is smooth and monotone and the likelihood is convex, so there should be no question that the distribution will concentrate around y.\", \"Section 4.4 was interesting, but would have been more convincing if paired with an evaluation on real data.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Hello!\\n\\nI find your investigation of the construction and training of models which parameterise the Dirichlet family of distributions to be relevant to our work, especially your investigation into improving the trainability and stability of such models. \\n\\nIn our paper (due to appear at NIPS 2018 - https://arxiv.org/pdf/1802.10501.pdf ) we parameterise a Dirichlet distribution using a DNN in order to derive measures of uncertainty from 'distributions over distributions' for detecting misclassifications and out-of-distribution inputs.\\n\\nI'm excited by your work and looking forward to any follow-up :) .\\n\\nBest Regards,\\nAndrey Malinin\", \"title\": \"Related work\"}"
]
} |
|
rkx0g3R5tX | Partially Mutual Exclusive Softmax for Positive and Unlabeled data | [
"Ugo Tanielian",
"Flavian vasile",
"Mike Gartrell"
] | In recent years, softmax together with its fast approximations has become the de-facto loss function for deep neural networks with multiclass predictions. However, softmax is used in many problems that do not fully fit the multiclass framework and where the softmax assumption of mutually exclusive outcomes can lead to biased results. This is often the case for applications such as language modeling, next event prediction and matrix factorization, where many of the potential outcomes are not mutually exclusive, but are more likely to be independent conditionally on the state. To this end, for the set of problems with positive and unlabeled data, we propose a relaxation of the original softmax formulation, where, given the observed state, each of the outcomes is conditionally independent but shares a common set of negatives. Since we operate in a regime where explicit negatives are missing, we create an adversarially-trained model of negatives and derive a new negative sampling and weighting scheme, which we denote as Cooperative Importance Sampling (CIS). We show empirically the advantages of our newly introduced negative sampling scheme by plugging it into the Word2Vec algorithm and benchmarking it extensively against other negative sampling schemes on both language modeling and matrix factorization tasks, and show large lifts in performance. | [
"Negative Sampling",
"Sampled Softmax",
"Word embeddings",
"Adversarial Networks"
] | https://openreview.net/pdf?id=rkx0g3R5tX | https://openreview.net/forum?id=rkx0g3R5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJl0SgASgN",
"Byg02IJ_a7",
"rygqyL8mT7",
"r1llurUmpX",
"r1ldt8dJpm",
"H1xEBUXd2m"
],
"note_type": [
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545097286237,
1542088373812,
1541789153628,
1541789031693,
1541535359969,
1541056060223
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1127/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1127/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1127/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1127/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1127/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1127/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers agree that the paper is not quite ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"not above threshold\"}",
"{\"title\": \"Interesting idea, need more clarification and detail, not sure if language modeling is good application\", \"review\": \"The mutually exclusive assumption of traditional softmax can be biased in case negative samples are not explicitly defined. This paper presents Cooperative Importance Sampling towards resolving this problem. The authors experimentally verify the effectiveness of the proposed approach using different tasks, including matrix factorization in recommender systems, language modeling tasks and a task on synthetic data.\\n\\nI like this interesting idea, and I agree with the authors that softmax does have certain problems, especially when negative samples are not well defined. I appreciate the motivation of this work from the PU learning setting. It would be interesting to show more results in the PU learning setting using some synthetic data. I am interested to see the benefit of this extension of softmax with respect to different amounts of labeled positive samples.\\n\\nHowever, I am not completely convinced that the proposed method would be a necessary choice for language modeling tasks.\\n--To me, the proposed method has a close connection to a 2-gram language model. \\n--But for language tasks, and other sequential input, we typically make predictions based on representations of very large contexts. Let\\u2019s say we would like to make a prediction for time step t given the context of word_{1:t} based on some recurrent model; do you think the proposed softmax can generally bring sizable improvement with respect to traditional choices? And how?\\n\\nBy the way, I think the proposed method would also be applicable in the soft-label setting.\\n\\nFor the experiments part, maybe put more details and discussions in the supplementary material.\\nA few concrete questions.\\n-- In some tables and settings, you only look at prec@1, why? 
"I expect the proposed approach would work better in prec@K.\\n-- Can you provide more concrete analysis for Table 6? Why does the proposed method not work well for syntactic tasks? \\n-- Describe in a bit more detail the MF techniques and hyper-parameters you used.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Explanations on Experiments and Problem formulation\", \"comment\": \"Dear reviewer,\\n\\nFirst of all, thank you for the precise feedback. We'll try to answer all the different points made above.\", \"on_experiments\": \"\", \"q1\": \"The multivariate-Bernoulli formulation has been removed. We prefer to say that we model the data with an n-dimensional random vector of Bernoulli random variables.\", \"q2\": \"The support set Si is defined as all the potential targets j that can possibly co-occur with i. Si is therefore defined as a subset of J. The definition has been clarified in the paper.\", \"q3\": \"We did not compare with the NS formulation of Mikolov's paper, Distributed Representations of Words and Phrases and their Compositionality, as we are not using the same loss. However, NS as defined by Mikolov does not try to fit a generative model and therefore does not fall within the scope of our PME Softmax. Further experiments could apply our CIS sampling scheme to Mikolov's NS loss to see if it improves the performance.\\n\\nHope these details improved the understanding of our work,\\nBest regards.\", \"q4\": \"For all our experiments, we hold out 20% of the dataset for test time. Indeed, we ran experiments on an implicit task with positive-only data. The MPR refers to the standard Mean Percentile Rank, and Prec@k refers to the fraction of instances in the test set for which the target falls within the top-k predictions (we added definitions for both metrics).\\nRegarding the movie datasets creation, we only kept movies ranked over 4 and 4.5 stars, respectively. From these, we create positive-only co-occurrence datasets from all the possible pair combinations of items.\\nIn terms of performance, no experiments have been done to compare sampling-based methods and plain implicit-matrix factorization baselines on this dataset. 
However, many papers in recent years have underlined the fact that sampling scheme methods can be interpreted as implicitly factorizing a word-context matrix (Neural Word Embedding as Implicit Matrix Factorization, Levy et al). All these details have been made clearer in the current version of the paper.\", \"on_problem_formulation\": \"\"}",
"{\"title\": \"Clarification of the drawbacks of the softmax formulation and the advantages of PMES\", \"comment\": \"Dear reviewer,\\n\\nThank you for your detailed feedback, please find our answers below:\\n\\nTo begin, as a general answer to your feedback, we would like to say that indeed, one can see our PMES as a new negative sampling scheme. It enables us to sample true negatives, close to the decision boundary, that will be informative in terms of gradients. Therefore, instead of choosing random and easy negatives as with Uniform Sampling or just all the potential targets as with Softmax, we now have a better strategy for sampling negatives.\\n\\nHowever, the difference with previous negative sampling approaches is that we are not trying to approximate the full softmax, which is the case for all prior work since Sampled Softmax was introduced by Bengio et al in \\u201cQuick Training of Probabilistic Neural Nets by Importance Sampling,\\u201d where the estimator has been seen as a biased approximation of the full softmax.\\n\\nIn our case, we argue that sampled softmax is ideal because it relaxes the mutual exclusivity constraint and, with good sampling, can outperform the full softmax. \\nIn the case of multi-class and single-label tasks it is natural to use the softmax formulation. However, when it comes to language modelling and sequence prediction, most of the tasks fall into the multi-label setting. For a given context, one does not observe one target item j, but rather a subset of target items {j1,...,jk}. For example, in a word2vec setting, the subset of targets is defined by the sliding window parameter. \\nInspired by textual analysis, Blei et al. (2003) (Latent Dirichlet Allocation) suggested that words in sequences can be regarded as a mixture of distributions related to each of the different categories of the vocabulary, such as \\\"sports\\\" and \\\"music\\\". 
Building upon this example, we effectively search over an enlarged class of models to better represent the multiplicity of the data. We now train a product of independent distributions to learn this set generative process.\\nTo clarify our point, we added a new paragraph in our paper.\\n\\nNow, going through the different points raised:\", \"q1\": \"The multivariate-Bernoulli formulation has been removed. We prefer now to say that we model the data with an n-dimensional random vector of Bernoulli random variables.\", \"q2\": \"Thanks for your comment on this section, the \\\"intuition\\\" paragraph has been edited for clarity.\", \"q3\": \"Notations have been clarified in the paper. In the PME Softmax model, one tries to fit the parameters of Bernoulli distributions. In section 3.3, i and j indeed refer to binary Bernoulli random variables with parameter P(j|i).\", \"q4\": \"A short description of each baseline has now been added to the paper. For the popularity sampling, we used a log-uniform distribution as used in the TensorFlow implementation.\\n\\nTo be noted, the relation to GAN has been reduced, at least in the Related Work section. As mentioned earlier, both the generator and the discriminator work in a cooperative setting rather than an adversarial one.\\n\\nHope these details improved the understanding of our work,\\nMany regards\"}",
"{\"title\": \"missing critical details in formulation and evaluation\", \"review\": \"This paper proposed PMES to relax the exclusive outcome assumption in softmax loss. The proposed method is motivated by PU settings. The paper demonstrates its empirical merit in improving word2vec-type embedding models.\\n\\n- on experiment: \\n-- in word2vec the window size = 1, but typically a longer window is used for NS. this might not reflect the correct baseline performance. is the window defined after removing rare words? what's the number of NS used? how are stop words taken care of? \\n-- would be good to elaborate on how CIS was better than full softmax in the word similarity task. Not sure what's the difference compared with the standard negative sampling objective. Can you provide some quantitative measure? \\n-- what is the evaluation dataset for the analogy task? \\n\\n-- MF task: the results/metrics suggest this is an implicit [not explicit (rating based)] task, but this is not clearly defined. Better to provide: embedding dimensions, datasets' positive/negative definition and overall statistics (# users, movies, sparsity, etc), how precision@K is calculated, how to get a positive label from rating-based datasets (movielens and netflix), and how this compares to the plain PU/implicit-matrix factorization baseline. How are train/test sets created in this task?\\n\\n\\n- on problem formulation:\\nin general, it is difficult to parse the technical contribution clearly from the current paper. \\n-- in 3.3., the prob. distribution is not the standard def of the multivariate Bernoulli distribution.\\n-- (6) first defines the support set, but the exact definition is not clear. 
what is the underlying distribution, and what does the support for a singleton mean?\\n-- it is better to contrast against the NS approximation in the word2vec paper and clarify the difference in mathematical terms.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, writing needs a lot of improvement\", \"review\": \"This paper presents Partially Mutual Exclusive Softmax (PMES), a relaxation of the full softmax that is commonly used for multi-class data. PMES is designed for positive-unlabeled learning, e.g., language modeling, recommender systems (implicit feedback), where we only get to observe positive examples. The basic idea behind PMES is that rather than considering all the non-positive examples as negative in a regular full softmax, it instead only considers a \\\"relevant\\\" subset of negatives. Since we actually don't know which of the negatives are more relevant, the authors propose to incorporate a discriminator which attempts to rate each negative by how hard it is to distinguish it from positives, and weight them by the predicted score from the discriminator when computing the normalizing constant for the multinomial probability. The motivation is that the negatives with higher weights are the ones that are closer to the decision boundary, hence will provide more informative gradients compared to the negatives that are further away from the decision boundary. On both real-world and synthetic data, the authors demonstrate that PMES improves over some other negative sampling strategies used in the literature.\\n\\nOverall the idea of PMES is interesting and the solution makes intuitive sense. However, the writing of the paper at the current stage is rather subpar, to the extent that it makes me decide to vote for rejection. In detail:\\n \\n1. The motivation of PMES from the perspective of mutual exclusivity is quite confusing. First of all, it is not clear to me what exactly the authors mean by claiming the categorical distribution assumes mutual exclusivity -- does it mean given a context word, only one word can be generated from it? Some further explanation will definitely help. 
Furthermore, no matter what mutually exclusive means in this context, I can hardly see PMES being fundamentally different given it's still a categorical distribution (albeit over a subset).\\n\\nThe way I see PMES from a positive-unlabeled perspective seems much more straightforward -- in PU learning, how to interpret negatives is the most crucial part. Naively doing full softmax or uniform negative sampling carries the assumption that all the negatives are equal, which is clearly not the right assumption for language modeling and recommender systems. Hence we want to weight negatives differently (see Liang et al., Modeling user exposure in recommendation, 2016 for a similar treatment in the RecSys setting). From an optimization perspective, it is observed that for negative sampling, the gradient can easily saturate if the negative examples are not \\\"hard\\\" enough. Hence it is important to sample negatives more selectively -- which is equivalent to weighting them differently based on their relevance. A similar approach has also been explored in the RecSys setting (Rendle, Improving pairwise learning for item recommendation from implicit feedback, 2014). Both of these perspectives seem to offer clearer motivation than the mutual exclusivity argument currently presented in the paper.\\n\\nThat being said, I like the idea of incorporating a discriminator, which is something not explored in previous work. \\n\\n2. The rigor in the writing can be improved. In detail:\\n\\n* Section 3.3, \\\"Multivariate Bernoulli\\\" -> what is presented here is clearly not multivariate Bernoulli\\n\\n* Section 3.3, the conditional independence argument in the \\\"Intuition\\\" section seems no different from what word2vec (or similar models) assumes. The entire \\\"Intuition\\\" section is quite hand-wavy.\\n\\n* Section 3.3, Equation 4, 5, it seems that i and j are referred to both as binary Bernoulli random variables and categorical random variables. 
The notation here about i and j can be made clearer. Overall, there are ambiguously defined notations throughout the paper. \\n\\n* Section 4, the details about the baselines are quite lacking. It is worth including a short description for each one of them. For example, is PopS based on popularity or some attenuated version of it? As demonstrated in word2vec, an attenuated version of the unigram (raised to a certain power < 1) works better than both uniform random and plain unigram. Hence, it is important to make the description clear. In addition, the details about the matrix factorization experiments are also rather lacking. \\n\\n3. On a related note, the connection to GAN seems forced. As mentioned in the paper, the discriminator here is more on the \\\"cooperative\\\" rather than the \\\"adversarial\\\" side.\", \"minor\": \"1. There are some minor grammatical errors throughout. \\n\\n2. Below equation 3, \\\"\\\\sigma is the sigmoid function\\\" seems out of context.\\n\\n3. Matt Mohaney -> Matt Mahoney\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1x0enCcK7 | Automatic generation of object shapes with desired functionalities | [
"Mihai Andries",
"Atabak Dehban",
"Jose Santos-Victor"
] | 3D objects (artefacts) are made to fulfill functions. Designing an object often starts with defining a list of functionalities that it should provide, also known as functional requirements. Today, the design of 3D object models is still a slow and largely artisanal activity, with few Computer-Aided Design (CAD) tools existing to aid the exploration of the design solution space. The purpose of the study is to explore the possibility of shape generation conditioned on desired functionalities. To accelerate the design process, we introduce an algorithm for generating object shapes with desired functionalities. We follow the principle form follows function, and assume that the form of a structure is correlated to its function. First, we use an artificial neural network to learn a function-to-form mapping by analysing a dataset of objects labeled with their functionalities. Then, we combine forms providing one or more desired functions, generating an object shape that is expected to provide all of them. Finally, we verify in simulation whether the generated object possesses the desired functionalities, by defining and executing functionality tests on it. | [
"automated design",
"affordance learning"
] | https://openreview.net/pdf?id=B1x0enCcK7 | https://openreview.net/forum?id=B1x0enCcK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlL2Dz-gV",
"SklwdVBR0X",
"S1eyNLnt0Q",
"BJeNbLnYAQ",
"r1xYErhtRQ",
"SyeGKeoanX",
"rJxXwY25nX",
"H1eIA9Eqhm"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544787885807,
1543554158748,
1543255591432,
1543255548040,
1543255344785,
1541415034048,
1541224795258,
1541192398402
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1126/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1126/AnonReviewer2"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1126/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1126/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1126/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a novel problem formulation, that of generating 3D object shapes based on their functionality. The authors use a dataset of 3D shapes annotated with functionalities to learn a voxel generative network that conditions on the desired functionality to generate a voxel occupancy grid. However, the fact that the results are not very convincing (the resulting 3D shapes are very coarse) raises questions regarding the usefulness of the proposed problem formulation.\\nThus, the novelty of the problem formulation alone is not enough for acceptance. A motivating application demonstrating the usefulness of the problem formulation and results would make this paper a much stronger submission. Furthermore, the authors have greatly improved the writing of the manuscript during the discussion phase.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"questions regarding the usefulness of the problem formulation given unimpressive empirical outcomes\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your reply and revision. I keep my original rating.\"}",
"{\"comment\": \"Thank you for the time you took to review our paper.\\nWe appreciate the reviewer's insight in summarising our paper and addressing the main points.\\n\\nFollowing your remarks, we have reorganised the article, extruding and placing the part about the neural network architecture into an appendix.\\n\\nRegarding the first concern on the structure of the paper, the initial purpose of describing the details of the architecture was to make the paper self-sufficient. However, according with the feedback we received from the community, we agree with the reviewer that some details are verbose. The part on the neural network architecture was removed from the main body of the paper, and left for an optional appendix.\\n\\nRegarding the second concern on the employed terminology, we re-read the paper by Lipton and Steinhardt, and renamed the term \\\"functional essence\\\" into \\\"functional form\\\" of an object.\\nWe preferred this term, as it denotes the purpose of this operation.\\nWe also did not name it \\\"averaging of latent vectors\\\" because there may be multiple methods for extracting this \\\"functional form\\\" (one is presented by us, another one from [2] is cited).\\nWe thus preferred to use the term \\\"functional form extraction\\\" for a family of algorithms performing this task.\\n\\nThe term \\\"functional arithmetic\\\" makes use of the analogy with the term \\\"shape arithmetic\\\" [1]. Following this parallel, we manipulate latent vectors corresponding to _functionalities_ (as opposed to _shapes_ in the cited reference). Thus, we argue to maintain this term.\\n\\nRegarding the computations being motivated by the idea \\\"form follows function\\\":\\nWe follow the principle form follows function, and assume that the form of an object is correlated to its function. 
Moreover, since we extract shape features from a dataset of objects designed by humans for humans, it is reasonable to assume that the employed shapes are close to optimal for performing their intended function. We included this mention in the paper.\", \"regarding_the_third_concern_on_the_results\": \"We would like to point out that (to the best of our knowledge) there are no alternative methods for shape generation conditioned on desired functionalities.\\nHence, it would be misleading to state that they are not at the level of the state-of-the-art.\\n\\nAs this research is based on exploring new concepts, detailed quality of the reconstructions is not the major contribution of our work. Rather, we try to formulate a new problem for generating shapes based on functionalities. For instance, the generated bathtub-workdesk does provide the desired functionalities, but its aesthetics should improve with further research.\\n\\nRegarding the evaluation of the importance weighting described in Section 3.3.2, we added images of the combination of toilet and bathtub functional forms, to show the interpolation spectrum (see Fig. 11 in the Appendix).\\nWe admit that a rigorous evaluation would have required an evaluation of all the shapes generated using different combination parameters, in order to choose the parameters that consistently provide best results. This will become meaningful once we will have a bigger set of affordance/functionality tests.\\nFor the moment, we decided to view this multitude of solutions as design proposals, leaving the final choice for the human designer.\\nHopefully, these new additions brought the paper closer to the desired rigour standard.\", \"regarding_the_remaining_concerns\": \"Indeed, the employed architecture is not conceptually novel (it is still a 3D autoencoder). However, the same paper by Lipton and Steinhardt [3] mentioned above also states that \\\"empirical advances often come about [...] 
through clever problem formulations [...] or by applying existing methods to interesting new tasks.\\\" We consider the formulation of the problem of shape design conditioned on desired functionalities/affordances valuable in itself.\\n\\nTo summarise, we used the advice from the reviewers and revised accordingly the paper (changes are highlighted in yellow). We hope that the quality of the writing has improved.\\n\\nThank you again for the time invested into this review.\", \"references\": \"[1] Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling.\\nIn D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 82\\u201390. Curran Associates, Inc., 2016.\\n\\n[2] Larsen, Anders Boesen Lindbo, et al. \\\"Autoencoding beyond pixels using a learned similarity metric.\\\" arXiv preprint arXiv:1512.09300 (2015).\\n\\n[3] Lipton, Zachary C., and Jacob Steinhardt. \\\"Troubling trends in machine learning scholarship.\\\" arXiv preprint arXiv:1807.03341 (2018).\", \"title\": \"Thank you for the review\"}",
"{\"comment\": \"Thank you for the time you took to review our paper.\\nThe summary represents well the contributions of our paper.\\n\\nRegarding the statement about the incremental contribution of the paper, we would like to emphasise once more that the main contribution of the paper is the formulation of the shape design problem conditioned on desired functionalities/affordances.\\nTo the best of our knowledge, this problem formulation is novel, and no other method exists which could compete with our method to solve the problem of generating 3D shapes based on functionality requirements.\\nIn this respect, we believe the problem formulation is novel and cannot be considered an \\\"incremental contribution\\\".\", \"regarding_the_fact_that_the_paper_does_not_propose_a_new_model\": \"Indeed, the employed architecture is not conceptually novel (it is still a 3D autoencoder). However, Lipton and Steinhardt [3] state that \\\"empirical advances often come about [...] through clever problem formulations [...] or by applying existing methods to interesting new tasks.\\\" We consider the formulation of the problem of shape design conditioned on desired functionalities/affordances valuable in itself.\\n\\nRegarding the \\\"ad-hoc\\\" definition of new concepts:\\nWe renamed the term \\\"functional essence\\\" into \\\"functional form\\\" of an object.\\nWe preferred to keep this term, as it denotes the purpose of this operation.\\n\\nWe believe multiple methods would be capable to extract the \\\"functional form\\\" of an object category, of which the \\\"averaging of the latent vector\\\" is only one method, which has served reasonably well in the particular case of our study. 
To prove this point, we cite another method with a similar purpose, employed in a context of extracting facial features [2].\\nWe thus preferred to use the term \\\"functional form extraction\\\" for the task that can be performed by potentially an entire family of algorithms.\\nTo prevent future readers from falling into the same pitfall, we have explained our reasoning behind the introduction of this vocabulary in the paper.\\n\\nRegarding the \\\"ad-hoc\\\" aspect of the importance vector:\\nThe problem of deciding which features are important in an object description is raised when two object descriptions have to be combined into a single new one. This motivates the use of the term \\\"importance vector\\\".\\n\\nRegarding the evaluation of the importance weighting described in Section 3.3.2, we added images of the combination of toilet and bathtub functional forms, to show the interpolation spectrum (see Fig. 11 in the Appendix).\\n\\nThe motivation behind using these two KL divergences for ranking the variables is to identify (1) which variables make the shape different from a void volume, to capture the filled voxels of the model, and (2) which variables distinguish the shape description from that of a Gaussian prior. This mention was added to the manuscript.\", \"regarding_results\": \"We bring to the attention of the reviewer that the \\\"normal\\\" aspect (e.g. smooth shapes) of objects was not the purpose of this study, and that the generated objects fulfill their functions as verified by our tests, even though they look rugged. 
Their shapes are sub-optimal for the selected function (some object parts are useless), but this paper does not claim that they are optimal.\\nIn this respect, we find that the evaluation of the quality of the generated objects is subjective, and not based on their ability to perform the desired function.\\nThis being said, the control over the style of object surfaces (smooth vs rugged, plain vs decorated) is an ongoing work, but is not in the scope of this paper.\\nIt would also be incorrect to state that the generated objects are below the state-of-the-art, as (to our knowledge) no other methods exist for object generation conditioned on desired functionalities.\", \"regarding_the_literature_review\": \"We will certainly look into the latest literature on geometry modeling in the computer graphics and vision community, as these are research areas that considerably overlap with our work. At the same time, if the reviewer is aware of any related literature, we would appreciate it if it could be shared with us.\\n\\nAs suggested by the reviewer, we emphasised the main idea at the beginning of the Methodology section. We also added an image of an automatically generated object on the first page of the paper.\\n\\nRegarding the quality of writing, (as stated above) we followed the reviewers' advice and revised the paper accordingly (changes are highlighted in yellow). As suggested, we moved the detailed description of the network architecture into the (optional) appendix. We hope that the quality of the writing has improved.\\n\\nThank you again for the time invested into this review.\\n\\n[2] Larsen, Anders Boesen Lindbo, et al. \\\"Autoencoding beyond pixels using a learned similarity metric.\\\" arXiv preprint arXiv:1512.09300 (2015).\\n\\n[3] Lipton, Zachary C., and Jacob Steinhardt. \\\"Troubling trends in machine learning scholarship.\\\" arXiv preprint arXiv:1807.03341 (2018).\", \"title\": \"Thank you for the review\"}",
"{\"comment\": \"Thank you for the time you took to review our paper.\\nThe summary represents well the contributions of our paper.\", \"regarding_the_presentation\": \"We added an image of a generated object to the front page of the paper, to show the reader from the beginning what type of models we generate. We also emphasised the main idea at the beginning of the Methodology section.\\n\\nWe agree with the comment regarding the consistent choice of lexicon, and we have purified the employed vocabulary to avoid ambiguities in the meanings of words.\\n\\\"Affordance\\\" and \\\"functionality\\\" do not mean the same thing, as affordances also include the actor performing the action, being represented as a tuple (actor, action, object, effect). However, we make the connection between the two concepts, as the literature on the relationship between object features and functionalities can be found using the keywords \\\"affordance learning and recognition\\\".\\n\\nThe paper clearly states that \\\"functionality\\\" and \\\"affordance\\\" are used interchangeably. For all the other ambiguous words, we have improved the manuscript by consistently choosing the same words for the same concepts.\\nFor better comprehension, we have replaced all instances of the word \\\"class\\\" with \\\"category\\\".\\n\\nRegarding the \\\"feature\\\" vs \\\"shape\\\" comment:\", \"different_types_of_features_exist\": \"shape, colour, edges, interest points, etc.\\nWe focus on shape features, since the network only decides which voxels should be filled/empty. 
We emphasise this by always mentioning that we work with \\\"shape features\\\".\\n\\nThe purpose of the study is to explore the possibility of shape generation conditioned on the desired functionalities.\", \"the_main_working_hypotheses_are\": \"- objects providing the same functionality have common form/shape features\\n- averaging over multiple shapes that provide the same functionality will extract a form providing that functionality, that we call \\\"functional form\\\".\\nFeatures that are frequently observed inside an object category will pass the selection threshold to be included in this \\\"functional form\\\". Features that are rarely observed are considered non-relevant for performing the function, and are left out.\\n- parametric interpolation between samples can generate novel shapes providing the combined functionalities of those samples.\\nThis last assumption is contentious, as we cannot yet predict the behaviour of functionalities when combining their underlying shapes.\\nFor this reason, we verify the presence of these functionalities in simulation.\\nWe added this mention in the beginning of the \\\"Methodology\\\" section.\", \"regarding_the_choice_of_3d_representation\": \"We do not address the question of which 3D representation is better for representing shapes (mesh representations, voxel grids, point clouds, superquadrics, shape primitives, etc.). 
We chose a voxelgrid representation because it allowed us to have fixed-size 3D models as inputs for an autoencoder.\\nDuring functionality testing, the voxelgrid models were converted into mesh models only because the employed simulator (Gazebo) required mesh models.\\nThis mention was added in the third paragraph of the Methodology section.\", \"regarding_the_choice_between_ontology_methods_vs_probabilistic_methods\": \"We do not argue against any particular method in this paper.\\nWe attempt to automate ontology methods (for example [4]), which involved decision making by a human designer on how to combine multiple shapes where each provides some functionality. This shape combination process is automated using a neural network.\", \"regarding_figures_5a_and_7a\": \"Figure 5a illustrates the average latent vector over the seven displayed \\\"table\\\" samples.\\nFigure 7a illustrates the average latent vector over all the samples inside the \\\"table\\\" category (~400 samples), augmented with their rotations by 90 degrees.\\n\\nThank you again for the time invested into this review.\\n\\n[4] Kurtoglu, Tolga, and Matthew I. Campbell. \\\"Automated synthesis of electromechanical design configurations from empirical analysis of function to form mapping.\\\" Journal of Engineering Design 20.1 (2009): 83-104.\", \"title\": \"Thank you for the review\"}",
"{\"title\": \"Interesting work that would deserve a better focus of experiment design\", \"review\": \"This paper is addressing several research challenges, such as a method to generate objects with desired functionalities, a method to extract form-to-function mapping, and a method to operationally support a functionality arithmetic. The work illustrated in this paper is really interesting and is addressing relevant and open problems in the domain of product design.\\n\\nNevertheless the manuscript has a couple of weaknesses, one concerned with the presentation and another related to the design of the study. \\n\\nThe lack of a consistent choice for the lexicon is sometimes misleading. It is not always clear whether the use of different terms is addressing synonyms or discriminating between two distinct concepts. For example, let us consider the following pairs: functionality versus affordance, function versus functional, class versus category, feature versus shape. \\n\\nThe study addresses several questions. It is not always clear what the purpose is, or better, what research questions are driving the design of the experiments. While in the manuscript there are many repetitions of the objectives of the study, less attention is devoted to explaining the working hypotheses underlying the proposed methods. For example, one of the objectives is a method to generate objects with desired functionalities. Only in the final Section is there a brief mention of the dichotomy between mesh-based versus voxel-based. As reported in Section 2, there are other works in the literature, but there is no claim on what the specific purpose of the present study is. The contrast of voxel versus mesh looks like a motivation, but it is only a speculation. A similar comment might address the dichotomy deterministic (ontology) versus probabilistic (autoencoder). In this case the experiment design should provide some empirical evidence about this contrast.\\n\\nA minor comment. 
Figure 7a is illustrating the functional essence of table. According to the caption Figure 5a is illustrating the same functional essence for the same category/class table. Should the pictures look the same?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An ad-hoc method for shape generation\", \"review\": \"This paper proposed a 3D shape generation model. The model is essentially an auto-encoder. The authors explored a new way of interpolation among encoded latent vectors, and drew connections to object functionality.\\n\\nThe paper is, unfortunately, clearly below the bar of ICLR in many ways. It\\u2019s technically incremental: the paper doesn\\u2019t propose a new model; it instead suggests a new way of interpolating the latent vectors for shape generation. The incremental technical innovation is not well-motivated or justified, either: the definitions of new concepts such as \\u2018functional essence\\u2019 and \\u2018importance vector\\u2019 are ad-hoc. The results are poor, much worse compared with the state-of-the-art shape synthesis methods. The writing and organization can also be improved. For example, the main idea should be emphasized first in the method section, and the detailed network architecture can be saved for a separate subsection or supplementary material. \\n\\nIt\\u2019s good that the authors are looking into the direction of modeling shape functionality. This is an important area that is currently less explored. I suggest the authors look into the rich literature of geometry modeling in the computer graphics and vision community, and improve the paper by drawing inspiration from the latest progress there.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Cool idea, but not well written, and not sufficiently evaluated\", \"review\": \"This paper presents a method for generating 3D objects. They train a VAE to generate voxel occupancy grids. Then, they allow a user to generate novel shapes using the learned model by combining latent codes from existing examples.\", \"pros\": [\"The idea of linking affordances to 3D object generation is interesting, and relevant to the machine learning and computer vision communities.\", \"They propose to evaluate the quality of the shape based on a physical simulation (Section 4.4.3), which is an interesting idea.\"], \"cons\": [\"This paper is not well written. The method is described in too much detail, and the extra length (10 pages) is unnecessary. Cross entropy, VAEs, and many of the CNN details can usually just be cited, instead of being described to the reader.\", \"The paper uses suggestive terminology, like \\\"functional essence\\\" and \\\"functional arithmetic\\\" for concepts that are fairly mundane (see Lipton and Steinhardt, 2018 for an extended discussion of this issue). For example, the \\\"functional essence\\\" of a class is essentially an average of the VAE latent vectors (Section 3.3.1). The paper claims, without sufficient explanation, that this computation is motivated by the idea that \\\"form follows function\\\".\", \"The results are not very impressive. There is no rigorous evaluation. They propose several nice metrics to use (e.g., affordance simulation), but the results they present for each metric are quite limited. 
The qualitative results are also not particularly compelling.\", \"The paper should more thoroughly evaluate the importance weighting that is described in Section 3.3.2.\", \"The technical approach (combining VAE vectors to make new shapes) is not particularly novel.\"], \"overall\": \"The paper should not be accepted in its current form, both due to the confusing writing, and the lack of careful evaluation.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyxCxhRcY7 | Deep Anomaly Detection with Outlier Exposure | [
"Dan Hendrycks",
"Mantas Mazeika",
"Thomas Dietterich"
] | It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance. | [
"confidence",
"uncertainty",
"anomaly",
"robustness"
] | https://openreview.net/pdf?id=HyxCxhRcY7 | https://openreview.net/forum?id=HyxCxhRcY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gmudJa7E",
"HJgG_An0J4",
"ByehoqpU1E",
"SygTG-v2Am",
"r1xNRARj0X",
"ryg5L0no0Q",
"HkeMqN7qCm",
"rJl4Tt7MC7",
"H1g7uFXf07",
"HklI1QZGA7",
"HJgkrz7_pX",
"Skl4qxWN67",
"Bye5XYYT37",
"H1lavG27nQ",
"B1xOlGHljQ",
"rkgIONKsq7",
"ByxTzKbkqX"
],
"note_type": [
"official_comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment"
],
"note_created": [
1548707947460,
1544633961533,
1544112804163,
1543430421143,
1543397067919,
1543388753796,
1543283850515,
1542760892167,
1542760811248,
1542750942187,
1542103606803,
1541832843939,
1541409058041,
1540764260989,
1539490288488,
1539179629821,
1538361621158
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1125/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1125/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1125/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1125/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1125/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
],
[
"~Andrey_Malinin1"
],
[
"ICLR.cc/2019/Conference/Paper1125/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Paper and Code Now Available\", \"comment\": \"We have put up a de-anonymized version of the paper. Unlike the draft from the reviewing cycle, this draft shows OE can also work on large-scale images (Places365). Code for most of the experiments, including the NLP experiments, has been made available: https://github.com/hendrycks/outlier-exposure\"}",
"{\"metareview\": \"The paper proposes a new fine-tuning method for improving the performance of existing anomaly detectors.\\n\\nThe reviewers and AC note the limitation of novelty beyond existing literature.\\n\\nThis is quite a borader line paper, but AC decided to recommend acceptance as comprehensive experimental results (still based on empirical observation though) are interesting.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Limited novelty, but interesting results\"}",
"{\"comment\": \"Thanks! Of course, we will be happy to cite your work on the first occasion.\", \"title\": \"Related work\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Thank you for your reply and good questions. Due to space limitations, in Appendix A we list the results for each D_out^test distribution, and we give the full descriptions of the D_out^test distributions. The test D_out^test distributions consist in Gaussian Noise, Rademacher Noise, Bernoulli Noise, Blobs, Icons-50 (emojis), Textures, Places365, LSUN, ImageNet (the 800 ImageNet-1K classes not in Tiny ImageNet and not in D_out^OE), CIFAR-10/100, and Chars74K anomalies. For NLP we use SNLI, IMDB, Multi30K, WMT16, Yelp, and various subsets of the English Web Treebank. We therefore test our models with approximately double the number of D_out^test image distributions compared to prior work; we also test in NLP, unlike nearly all other recent work in OOD detection.\\n\\nYour read is correct that 80 Million Tiny Images are used for SVHN, CIFAR-10, CIFAR-100; these images are too low-resolution (32x32x3) for Tiny ImageNet, so for that we use ImageNet-22K (minus ImageNet-1K). For NLP, we use WikiText-2, but in the discussion we note using the Project Gutenberg corpus also works, so the dataset choice has flexibility even in NLP. Thanks to your comment, we will add a link to Appendix A in the caption of Table 1 for the full results and make the interactions between D_in, D_out^OE, D_out^test clearer.\\n\\nAs for accuracy, the fixed coefficient of lambda = 0.5 for the vision experiments leads to slight degradation when tuning with OE, like other approaches. For example, a vanilla CIFAR-10 Wide ResNet has 5.16% classification error, while with OE tuning it has 5.27% error. This degradation can be further reduced by training from scratch (Appendix E). We will look into ``negative transfer.'' Thank you.\"}",
"{\"title\": \"Additional clarification\", \"comment\": \"Thanks for the clarification. Yes, it makes a big difference that the \\\"training\\\" outliers are from different datasets than the \\\"test\\\" outliers -- I'm happy I was mistaken in my previous understanding.\\n\\nI'll study the paper some more, but after quickly rereading some key sections, I don't understand exactly what combinations of D_in, D_out^OE, and D_out^test were used, e.g., in Table 1. From the row labels, I can figure out what D_in is. From Section 4.2.2., it sounds like you used 80 Million Tiny Images as D_out^OE for SVHN, CIFAR10, and CIFAR-100. Was ImageNet-22K used as D_out^OE for Tiny ImageNet? The text is ambiguous. And then, what was used for D_out^test?\\n\\nIn general, the effectiveness of these techniques will rely heavily on the nature of the datasets used. With some combinations, we should expect OE to reduce the accuracy of anomaly detection, much like the \\\"negative transfer\\\" phenomenon in transfer learning. I didn't see much discussion of this point, but perhaps I missed it.\"}",
"{\"title\": \"Interesting Task\", \"comment\": \"This is an interesting segmentation task, and we will be sure to try Outlier Exposure on this task in the future. We intend to include a citation to your work after submission deanonymization.\"}",
"{\"title\": \"On Not Accessing Test Data\", \"comment\": \"Reviewer 1, we have added more emphasis that the Outlier Exposure data and the test sets are disjoint in the revised draft.\"}",
"{\"title\": \"Uploaded Draft Adding the Suggestions\", \"comment\": \"Thank you for your careful analysis of our paper.\\n\\nWe have uploaded a new draft incorporating your suggestions.\\n\\nTo improve clarity, we have added two paragraphs to the preface of Section 4 summarizing our experiments and novel discoveries. We found it difficult to import several specific details from individual experiments to Section 3, so we opted to instead improve the clarity of several experimental sections as they appear, and to improve the clarity of the discussion section. We also restructured the calibration section.\\n\\nRegarding your second and third points, we added the reference for the original GAN paper, and we added definitions for BPP, BPC, and BPW to Section 4.4. Thank you for these suggestions.\\n\\nThe baseline numbers in Table 3 differ from those in Table 1 because in Table 3 we use the training regime from the publicly available implementation of DeVries et al. to create an accurate comparison. The difference is that they use a different learning schedule than the models from Table 1.\"}",
"{\"title\": \"Clarifying the Problem Setup\", \"comment\": \"Thank you for your thoughtful feedback and willingness to question the premises behind submitted works.\\n\\nWe believe there may be a misunderstanding of our experimental setup. In the setup you describe, out-of-distribution data is available during training, and data from that same distribution is encountered at test time. We agree that such a setup has issues, and we intentionally avoided that setup. We do not assume access to the test distribution, but this confusion is understandable as many recent OOD papers assume this. In particular, we took great care to keep datasets disjoint in our experiments, and the only out-of-distribution dataset examples we use at training time come from the realistic, diverse Outlier Exposure datasets described in Section 4.2.2. We ensured that these OE datasets were disjoint with the out-of-distribution data evaluated at test time. For instance, in the NLP experiments, we used WikiText-2 as the OE dataset, and none of the NLP OOD datasets evaluated on at test time were collected from Wikipedia.\\n\\nOne of our contributions is that training on the OE datasets which we identified leads to generalization to novel forms of anomalies. Concretely, with SVHN as the in-distribution, we found that OE improved OOD detection on the Icons-50 dataset of emojis, even though the OE dataset consisted in natural images and did not contain any emojis. Thus, training with OE does help with generalization to new anomalies, and it does not simply teach the detector a particular, narrow distribution of outliers.\"}",
"{\"title\": \"Comparison Given and ODIN Results Added\", \"comment\": \"Thank you for your detailed feedback.\\n\\n1.\\nLee et al. [2] propose training against GAN-generated out-of-distribution data, and they use a confidence loss for anomaly detection with multiclass classification as the original task. By contrast, we consider a broader range of original tasks, including density estimation and natural language settings, and we show how to incorporate Outlier Exposure for each scenario.\\n\\nAnother crucial difference between our work and [2] is that we demonstrate that realistic, diverse data is significantly more effective than GAN-generated examples, and is scalable to complex, high-resolution data that everyday GANs have difficulty generating. Likewise, GANs are currently not capable of generating high-quality text. Finally, Lee et al. [2] state in Appendix B, \\u201cFor each out-of-distribution dataset, we randomly select 1,000 images for tuning the penalty parameter \\u03b2, mini-batch size and learning rate.\\u201d Thus some of their hyperparameters are tuned on OOD test data, which is not the case in our work. Hence, our work is in a different setting from Lee et al. [2]. In our paper we show how to use real data to _consistently_ improve detection in a host of settings. In essence, some of our multiclass experiments build on the seminal work of Lee et al. [2] by using real and diverse data. 
We demonstrate this in a variety of settings, showing that this technique is general and consistently boosts performance.\\n\\nSecondary sources of novelty in our paper include the margin loss for OOD detection with density estimators, the cross entropy OOD score instead of MSP (Appendix G), posterior rescaling for confidence calibration in the presence of OOD data (Appendix C), and our observation that a cutting-edge CIFAR-10 density model unexpectedly assigns higher density to SVHN images than to CIFAR-10 images. The latter contribution forms the basis for a concurrent submission by different authors, which can be found here: https://openreview.net/forum?id=H1xwNhCcYm Since that work is concurrent, it does not detract from our paper\\u2019s novelty. We should note that we not only reveal that density estimates are unreasonable on out-of-distribution points, but we also ameliorate it with Outlier Exposure.\\n\\n2.\\nWe have added a section comparing to ODIN [3] (Appendix I). We will incorporate the results into the main paper if you think we should.\\n\\n3.\\nThank you for pointing out these related works. The works of [4] and [5] are ECCV 2018 and NIPS 2018 papers, both of which are for conferences occurring after the submission deadline of this paper. We have a working implementation of [4] and will incorporate it into the paper it once we are sure that it is a faithful reproduction. We think that our comparisons on multiclass OOD detection (including the baseline [1], Lee et al. [2], DeVries et al., Liang et al. [3]), density estimation OOD detection, and confidence calibration on vision and NLP datasets are sufficient to demonstrate our method.\", \"edit\": \"Thank you very much for taking the time to read this response and update your score.\"}",
"{\"comment\": \"Hi,\", \"we_have_a_complementary_out_of_distribution_detection_paper_currently_under_review\": \"\", \"https\": \"//openreview.net/forum?id=H1x1noAqKX\\nWe detect OOD samples on a pixel level. We also find that using outliers during training is effective for detecting OOD samples.\", \"title\": \"Related work\"}",
"{\"title\": \"Research topic is interesting, but the paper needs improvement.\", \"review\": \"I have read the authors' reply. In response to the authors' comprehensive reply and feedback, I upgrade my score to 6. As the authors mentioned, the extension to density estimators is an original novelty of this paper, but I still have some concern that the OE loss for classification is basically the same as [2]. I think it is better to clarify this in the draft.\\n\\nSummary===\\n\\nThis paper proposes a new fine-tuning method for improving the performance of existing anomaly detectors. The main idea is additionally optimizing the \\u201cOutlier Exposure (OE)\\u201d loss on an outlier dataset. Specifically, for a softmax classifier, the authors set the OE loss to the KL divergence loss between the posterior distribution and the uniform distribution. For a density estimator, they set the OE loss to a margin ranking loss. The proposed method improves the detection performance of baseline methods on various vision and NLP datasets. While the research topic of this paper is interesting, I recommend rejection because I have concerns about novelty and the experimental results.\\n\\nDetailed comments ===\\n\\n1. OE loss for softmax classifier\\n\\nFor a softmax classifier, the OE loss forces the posterior distribution to become the uniform distribution on the outlier dataset. I think this loss function is very similar to a confidence loss (equation 2) proposed in [2]: Lee et al., 2017 [2] also proposed the loss function minimizing the KL divergence between the posterior distribution and the uniform distribution on out-of-distribution data, and evaluated the effects of it on \\\"unseen\\\" out-of-distribution data (see Table 1 of [2]). Could the authors clarify the difference with the confidence loss in [2], and compare the performance with it? Without that, I feel that the novelty of this paper is not significant.\\n\\n2. 
More comparison with baselines\\n\\nThe authors said that they didn\\u2019t compare the performance with simple inference methods like ODIN [3] since ODIN tunes the hyper-parameters using data from the (tested) out-of-distribution. However, I think that the authors can compare the performance with ODIN by tuning its hyper-parameters on the outlier dataset which is used for training the OE loss. Could the authors provide more experimental results by comparing the performance with ODIN? \\n\\n3. Related work\\n\\nI would appreciate it if the authors could survey and compare more baselines such as [4] and [5]. \\n\\n[1] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. International Conference on Learning Representations, 2017. \\n[2] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. International Conference on Learning Representations, 2018. \\n[3] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. International Conference on Learning Representations, 2018. \\n[4] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. In NIPS, 2018.\\n[5] Apoorv Vyas, Nataraj Jammalamadaka, Xia Zhu, Dipankar Das, Bharat Kaul, and Theodore L. Willke. Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers, In ECCV, 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An outlier detection method that assumes access to the outlier distribution?\", \"review\": \"This paper describes how a deep neural network can be fine-tuned to perform outlier detection in addition to its primary objective. For classification, the fine-tuning objective encourages out-of-distribution samples to have a uniform distribution over all class labels. For density estimation, the objective encourages out-of-distribution samples to be ranked as less probability than in-distribution samples. On a variety of image and text datasets, this additional fine-tuning step results in a network that does much better at outlier detection than a naive baseline, sometimes approaching perfect AUROC.\\n\\nThe biggest weakness in this paper is the assumption that we have access to out-of-distribution data, and that we will encounter data from that same distribution in the future. For the typical anomaly detection setting, we expect that anomalies could look like almost anything. For example, in network intrusion detection (a common application of anomaly detection), future attacks are likely to have different characteristics than past attacks, but will still look unusual in some way. The challenge is to define \\\"normal\\\" behavior in a way that captures the full range of normal while excluding \\\"unusual\\\" examples. This topic has been studied for decades.\\n\\nThus, I would not classify this paper as an anomaly detection paper. Instead, it's defining a new task and evaluating performance on that task. The empirical results demonstrate that the optimization succeeds in optimizing the objective it was given. 
What's missing is the justification for this problem setting -- when is it the case that we need to detect outliers *and* have access to the distribution over outliers?\\n\\n--------\", \"update_after_response_period\": \"My initial read of this paper was incorrect -- the authors do indeed separate the outlier distribution used to train the detector from the outlier distribution used for evaluation. Much of these details are in Appendix A; I suggest that the authors move some of this earlier or more heavily reference Appendix A when describing the methods and introducing the results. I am not well-read in the other work in this area, but this looks like a nice advance.\\n\\nBased on my read of the related work section (again, having not studied the other papers), it looks like this work fills a slightly different niche from some previous work. In particular, OE is unlikely to be adversarially robust. So this might be a poor choice for finding anomalies that represent malicious behavior (e.g., network intrusion detection, adversarial examples, etc.), but good for finding natural examples from a different distribution (e.g., data entry errors).\\n\\nMy main remaining reservation is that this work is still at the stage of empirical observation -- I hope that future work (by these authors or others) can investigate the assumptions necessary for this method to work, and even characterize how well we should expect it to work. Without a framework for understanding generalization in this context, we may see a proliferation of heuristics that succeed on benchmarks without developing the underlying principles.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A comprehensive study of an intuitive idea for anomaly detection. Can benefit from a restructuring of the writing.\", \"review\": \"This paper proposes fine-tuning an out-of-distribution detector using an Outlier Exposure (OE) dataset. The novelty is in proposing a model-specific rather than dataset-specific fine-tuning. Their modifications are referred to as Outlier Exposure. OE includes the choice of an OE dataset for fine-tuning and a regularization term evaluated on the OE dataset. It is a comprehensive study that explores multiple datasets and improves dataset-specific baselines.\", \"suggestions_and_clarification_requests\": [\"The structure of the writing does not clearly present the novel aspects of the paper as opposed to the previous works. I suggest moving the details of model-specific OE regularization terms to section 3 and review the details of the baseline models. Then present the other set of novelties in proposing OE datasets in a new section before presenting the results. Clearly presenting two sets of novelties in this work and then the results. If constrained in space, I suggest squeezing the discussion, conclusion, and 4.1.\", \"In the related work section Radford et al., 2016 is references when mentioning GAN. Why not the original reference for GAN?\", \"Maybe define BPP, BPC, and BPW in the paragraphs on PixelCNN++ and language modeling or add a reference.\", \"Numbers in Table 3 column MSP should match the numbers in Table 1, right? Or am I missing something?\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for bringing your NIPS 2018 paper to our attention. We think decoupling uncertainty into \\\"data\\\" and \\\"OOD\\\" uncertainty is an interesting avenue, and we will cite your work accordingly.\"}",
"{\"comment\": \"Hello! :) Interesting work. You may find our work on predictive uncertainty estimation to be relevant relevant.\", \"https\": \"//arxiv.org/pdf/1802.10501.pdf\", \"title\": \"Related Work\"}",
"{\"title\": \"Parallel Work\", \"comment\": \"In Section 4.3 we observe that a cutting-edge CIFAR-10 density model unexpectedly assigns higher density to SVHN images than to CIFAR-10 images.\\nAs it happens, a concurrent submission is based on this observation. Their work can be found here: https://openreview.net/forum?id=H1xwNhCcYm\"}"
]
} |
|
HkxCenR5F7 | Variational recurrent models for representation learning | [
"Qingming Tang",
"Mingda Chen",
"Weiran Wang",
"Karen Livescu"
] | We study the problem of learning representations of sequence data. Recent work has built on variational autoencoders to develop variational recurrent models for generation. Our main goal is not generation but rather representation learning for downstream prediction tasks. Existing variational recurrent models typically use stochastic recurrent connections to model the dependence among neighboring latent variables, while generation assumes independence of generated data per time step given the latent sequence. In contrast, our models assume independence among all latent variables given non-stochastic hidden states, which speeds up inference, while assuming dependence of observations at each time step on all latent variables, which improves representation quality. In addition, we propose and study extensions for improving downstream performance, including hierarchical auxiliary latent variables and prior updating during training. Experiments show improved performance on several speech and language tasks with different levels of supervision, as well as in a multi-view learning setting. | [
"Representation learning",
"variational model"
] | https://openreview.net/pdf?id=HkxCenR5F7 | https://openreview.net/forum?id=HkxCenR5F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJl7auR214",
"r1xyhfI314",
"r1geNGL21V",
"S1e7BjGrAm",
"Bkx0K9MHC7",
"HkeyNwzH07",
"ByeZL0VCh7",
"H1gJO-4227",
"rkl07WUq3Q"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544509627512,
1544475302527,
1544475175547,
1542953787202,
1542953605940,
1542952742989,
1541455433284,
1541321062879,
1541198117608
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1124/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1124/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1124/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1124/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1124/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1124/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1124/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1124/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1124/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Agreed\", \"comment\": \"Thanks for the detailed and constructive review!\"}",
"{\"title\": \"Outstanding review\", \"comment\": \"As area chair I just wanted to comment that this is an outstandingly thorough, clear, and constructive review. Thank you.\"}",
"{\"metareview\": \"This paper heavily modifies standard time-series-VAE models to improve their representation learning abilities. However, the resulting model seems like an ad-hoc combination of tricks that lose most of the nice properties of VAEs. The resulting method does not appear to be useful enough to justify itself, and it's not clear that the same ends couldn't be pursued using simpler, more general, and computationally cheaper approaches.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Many modifications to VAEs with little justification\"}",
"{\"title\": \"On clarifying contribution and speed comparison\", \"comment\": \"Thank you for pointing out the missing speed comparison. RecRep is roughly twice faster in our implementation than StocCon when using batch size 4. We will include this in a revision. Regarding the degree of novelty, our main contribution is a practical approach to representation learning for sequences that improves performance on multiple downstream tasks. While prior work has largely focused on measuring the quality of recurrent models for generation, we focus on making them useful for representation learning.\"}",
"{\"title\": \"Clarification on stochastic generation and more explanations\", \"comment\": \"\", \"q1\": \"It feels as if the proposed method tries to be many things. First, it is used for finding unsupervised representations down stream. Then, it still tries to be a generative model \\\"of sorts\\\", which is the reason for the use of variational inference in the first place. Additionally, the approximate posterior necessary to evaluate the ELBO is simultaneously used as a feature extractor.\", \"ans\": \"We agree there is no special reason to favor variational or generative model for representation learning, other than that they work well and provide an intuitive way to reason about regularization. We note that generative models (e.g., HMMs) have been frequently used for non-generative tasks. From an optimization point of view, as mentioned above VAEs are \\u201cnoisy autoencoders'' with a KL regularization term, and have been more successful than other autoencoder-type models for non-sequence tasks. This motivates the application to sequence tasks.\", \"q2\": \"A \\u201cbad\\\" variational posterior is used because it is unclear how to get vectorial features otherwise.\\nThe \\u201cbadness'' of the posterior is perhaps in the eye of the beholder :) Our posterior has multiple advantages, both the ability to easily get features and avoidance of complex sampling procedures.\\n\\nQ3 on \\u201cstochastic generation\\u201d\", \"https\": \"//drive.google.com/file/d/1FomO05-wiLVFm4W5zNrpZW04_Gj0jwiM/view?usp=sharing\\nThere is a typo in Equation (6) in the current version, which should be revised to \\\\sum_{t=1}^T\\\\{ \\\\mathbb{E}_{q_{\\\\phi}(z_t|h_t)}\\\\big[ \\\\log \\\\sum_{k=1}^T\\\\alpha_{\\\\delta,T}^{t,k} p_{\\\\theta}(x_k|z_t) \\\\big] \\\\}. \\nc. In general, this is computable in closed form but in O(T^2) time for a length T sequence. Our approximation allows us to compute it in O(T)$ time for any distribution.\\nd. Yes, it is normalized. 
See our reply to bullet point (b) and the anonymized link.\\n\\nQ4 on ELBO change and prior updating\"}",
"{\"title\": \"Explanations on motivations, details and visualization\", \"comment\": \"Main concern on motivation\", \"ans\": \"Yes, the settings are the same. We will add the additional ablation studies mentioned above.\\nWe have visualized all of the variants in Table 4. Please see https://drive.google.com/file/d/1FomO05-wiLVFm4W5zNrpZW04_Gj0jwiM/view?usp=sharing\\nQualitatively, the StocCon visualization looks worse than StocCon with prior updating. Both RecRepVCCAP+H and RecRepVCCAP+P tend to form better clusters, and RecRepVCCAP+H+P is clearly better than either +H or +P alone.\"}",
"{\"title\": \"Needs stronger motivation, better analysis would improve the paper\", \"review\": \"This is largely an experimental paper, proposing and evaluating various modifications of variational recurrent models towards obtaining sequence data representations that are effective in downstream tasks. The highlighted contribution is a \\\"stochastic generation\\\" training procedure in which the training objective evaluates the reconstruction of output sequence elements from individual latent variables independently. The main claim is that the resulting model, augmented with prior updating and/or hierarchical latent variables, improves results w.r.t. the baselines.\\n\\nMy main concern is that the various choices are not motivated well, e.g. with examples or detailed descriptions of the issues addressed and that the resulting implications are not discussed in detail (see detailed comments below). This could perhaps be alleviated during the rebuttal discussion.\\n\\nEmpirically, when used in conjunction with prior updating and/or hierarchical latent variables, the proposed \\\"stochastic generation\\\" approach improves upon the baselines, but not when used in isolation. This is OK, but it weakens the contribution since it's more unclear what the exact advantage \\\"stochastic generation\\\" is, how it takes advantage of prior updating, and so on. Could you maybe discuss this in the rebuttal? The fact that not all model variants considered are evaluated on all settings also contributes to this problem (again, see below).\", \"general_questions\": [\"\\\"dependence of observations at each time step on all latent variables\\\": Unfortunately, this means that the complexity of evaluating the model during training is O(n^2), where n is the sequence size, rather than linear in the standard case. Is that correct? I think this is what is alluded to on the top on page 4. 
Could you discuss this trade-off?\", \"regarding section 2.1.: Multi-modal marginal probabilities are also used due their increased modeling power, and this again seems like a potential limitation of the proposed approach w.r.t. the baseline, and is not discussed.\", \"\\\"the mean of z_t may have very small probability and thus may not be a good choice\\\": I think this statement requires more context. The mean of z_t can have low probability in both cases (e.g. if the posterior has a high variance). Are you suggesting that the low probability issue is exacerbated by to the sampling of previous z_{t-1}? Or are you comparing to the case where the mean z_{t-1} is used instead of sampling as well?\"], \"stochastic_generation\": [\"While I understand where it's coming from, the term \\\"stochastic generation\\\" is somewhat misleading, since stochasticity is already present in the generation process for VAEs;\", \"Stochastic generation is introduced as a way to approximate the generation process. However, when it's introduced, it's not clear what the generation process that needs to be approximated is. Introducing the model in eq. (6-7), motivating its use and then showing how it is obtained through stochastic generation second would improve the clarity of the paper.\", \"Related to the point above, the implications of using the model in eq. (6-7) are not discussed. The graphical model in Figure 1 suggests that x_k depends jointly on all the (z_t)_{t=1 ... sequence_size}. Instead, in eq. (6-7), each x_k is generated independently from each z_t (for t = 1 ... T, and k sampled from a distribution which depends on t). In particular, if I understand this correctly, the distribution p(x_k | z) = p(x_k | z_1 \\\\dots z_T) factorizes as p(x_k | z_1) p(x_k | z_2) ... p(x_k | z_T). Could you motivate this choice and its expected effect? 
It seems to me that this encourages each z_t to capture all the information needed to reconstruct each x_k in the corresponding window.\"], \"experimental_results\": [\"Table 2: I think this table since it includes most models, but it still misses RecRep (without delta = 0) and StocCon. Could you confirm whether StocCon vs. RecRep have the same setting except the use of recurrent stochastic connections in StocCon vs. using eq. (4) in RecRep with window size 1?\", \"In Table 4, the difference between line 5 and line 6 is interesting and I wish it was discussed more, maybe used in the visualization experiment to show how/why \\\"stochastic generation\\\" with a larger window improves performance.\", \"Figure 3, could it be that the use of hierarchical latent variables (H) accounts for the visual difference? Is a difference still observed when comparing lines 3 and 7 in Table 4, whose settings seem more comparable?\", \"-\"], \"minor_issues\": [\"the lack of parenthesis around citations makes the text hard to follow at times (maybe use \\\\citep whenever the citation mixes with the text?);\", \"typo: \\\"for use in a downstream tasks\\\"\", \"typo: \\\"with graphical model as described\\\" => \\\"with the/a graphical model as described\\\"\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Method that tries to be a feature extractor and a generative model at the same time.\", \"review\": \"(best read in typora)\\n\\nThe authors claim to propose a family of methods and generative models that are suited better for downstream tasks than previously proposed approaches.\\n\\n## Major points\\n\\nIt feels as if the proposed method tries to be many things. First, it is used for finding unsupervised representations down stream. Then, it still tries to be a generative model \\\"of sorts\\\", which is the reason for the use of variational inference in the first place. Additionally, the approximate posterior necessary to evaluate the ELBO is simultaneously used as a feature extractor.\", \"the_resulting_issues_are\": [\"A \\\"bad\\\" variational posterior is used because it is unclear how to get vectorial features otherwise.\", \"An adhoc likelihood function is used, which is not sufficiently well explored theoretically in the paper. Specifically,\", \"Stochastic generation is claimed to be \\\"more complex than simple Gaussian\\\"; the burden of proof is on the authors, as Gaussian density is closed under multiplication.\", \"It appears to be a Monte Carlo approximation to sth that is computable in closed form.\", \"It is not clear if that MC approximation is normalised and if the normalisation is the same at each optimisation step. Does this bias optimisation? What happens to the KL penalty weight?\", \"The ELBO change (prior updating) seems to make the claim that we still have a generative model (as written in the intro) invalid. My intuition is that the KL penalty vanishes for small step rates of the optimiser, reducing the model to that of a noisy auto encoder.\", \"## Summary\", \"The authors want to evaluate variational sequence models for feature extraction for downstream tasks. But why? What is the use of a generative inspired algorithm, when necessary ingredients are discarded? 
Both goals appear to be at conflict and I am not convinced that the variational ingredient is necessary.\", \"I do not cover the experimental section since the method itself has issues so severe that I don't consider it relevant.\", \"## Minor points\", \"Notation $\\\\mu_{\\\\phi_t}$ gives the impression that $\\\\phi$ is time dependent.\", \"Equations (9) and (11) are formatted badly.\", \"The approximate posterior used was used first in (Bayer & Osendorfer, \\\"Learning stochastic recurrent networks\\\", 2014) not (Chen 2018).\", \"Diagrams follow GM notation only half-heartedly.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Incremental contribution of variational recurrent models; big volume of extensions of the proposed method and experiments\", \"review\": \"This paper proposes a new variational recurrent model for learning sequences. Comparing to existing work, instead of having latent variables that are dependent on the neighbors, this paper proposes to use independent latent variables with observations that are generated from multiple latent variables.\\nThe paper further combined the proposed method with multiple existing ideas, such as the shared/prviate representation from VAE-CCAE, adding the hierarchical structure, and prior updating.\", \"pros\": \"The proposed method seems technical correct and reasonable. \\nThere are many extensions which are potentially useful for many applications \\nThere are many experimental results showing promising performance.\", \"cons\": \"The framework is very incremental. It is novel but limited. \\nThe paper claim that the main point to use the simpler variations distribution is to speed up the inference. But no speed comparisons are shown in the experiments section. \\nThe evaluation shows that prior updating (one extension) seems contributes to the biggest performance gain, not the main proposed method.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1xpe2C5Km | Trace-back along capsules and its application on semantic segmentation | [
"Tao Sun",
"Zhewei Wang",
"C. D. Smith",
"Jundong Liu"
] | In this paper, we propose a capsule-based neural network model to solve the semantic segmentation problem. By taking advantage of the extractable part-whole dependencies available in capsule layers, we derive the probabilities of the class labels for individual capsules through a recursive, layer-by-layer procedure. We model this procedure as a traceback pipeline and take it as a central piece to build an end-to-end segmentation network. Under the proposed framework, image-level class labels and object boundaries are jointly sought in an explicit manner, which poses a significant advantage over the state-of-the-art fully convolutional network (FCN) solutions. Experiments conducted on modified MNIST and neuroimages demonstrate that our model considerably enhance the segmentation performance compared to the leading FCN variant.
| [
"capsule",
"capsule network",
"semantic segmentation",
"FCN"
] | https://openreview.net/pdf?id=H1xpe2C5Km | https://openreview.net/forum?id=H1xpe2C5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xnMwx3yE",
"rJxVMMmU1E",
"HJenfKtBkE",
"HJgBEQYHJV",
"r1xxviOHyV",
"rke5zidHkE",
"BJgRIcuryN",
"B1xuej1kyE",
"BJxadp8URQ",
"BJxpgTL80X",
"r1gAtqIURX",
"B1gGFFLUAX",
"H1enWuL8AX",
"r1lglVjsnX",
"B1et6sL9hX",
"Hker6v3V37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544451860304,
1544069644473,
1544030483896,
1544028972810,
1544026967733,
1544026897899,
1544026710459,
1543596783613,
1543036277084,
1543036149177,
1543035525570,
1543035257820,
1543034883780,
1541284839841,
1541200833420,
1540831164670
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1123/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1123/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1123/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1123/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a method for tracing activations in a capsule-based network in order to obtain semantic segmentation from classification predictions.\\n\\nReviewers 1 and 2 rate the paper as marginally above threshold, while Reviewer 3 rates it as marginally below. Reviewer 3 particularly points to experimental validation as a major weakness, stating: \\\"not sure if the method will generalize well beyond MNIST\\\", \\\"I\\u2019m concerned that the results are not transferable to other datasets and that the method shines promising just because of the simple datasets only.\\\"\\n\\nThe AC shares these concerns and does not believe the current experimental validation is sufficient. MNIST is a toy dataset, and may have been appropriate for introducing capsules as a new concept, but it is simply not difficult enough to serve as a quantitative benchmark to distinguish capsule performance from U-Net. U-Net and Tr-CapsNet appear to have similar performance on both MNIST and the hippocampus dataset; the relatively small advantage to Tr-CapsNet is not convincing.\\n\\nFurthermore, as Reviewer 1 suggests, it would seem appropriate to include experimental comparison to other capsule-based segmentation approaches (e.g. LaLonde and Bagci, Capsules for Object Segmentation, 2018). This related work is mentioned, but not used as an experimental baseline.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview: insufficient experimental validation\"}",
"{\"title\": \"Thanks for your inputs to improve our model\", \"comment\": \"Response: Thanks for your comment. We will certainly add the explanations into the final version of the paper.\", \"the_explanation_of_the_averaging_is_as_following\": \"The equation P(Ck|i) = SUM_n Pn(Ck|i)/N is proposed to calculate P for a convolutional capsule in layer L (i.e. any capsule in the overlapping area of the two frames in the left of Figure 3) that receives feedbacks from capsules at more than one locations of layer L+1. Each capsule that provides a feedback is called a potential parent in the paper. For a [k*k] convolution, a capsule in layer L might have [k*k*M] potential parents in layer L+1 (M is the number of capsule types in layer L+1). \\n\\nCalculation of P(Ck|i) is a 2-step process. In the first step, each Pn(Ck|i) is calculated by summing over M potential parents at the same location in layer L+1 (equation 2). Totally [k*k] such Pn(Ck|i)s would be calculated in this step. As c_ij is used to route capsules in Layer L to capsules in Layer L+1 during the forward inference, c_ij is also involved in the traceback step between the same pair. In the second step, Pn(Ck|i)s are averaged/normalized to evaluate the final P(Ck|i). Each Pn(Ck|i) is given an equal weight in this step because inference of capsules at one location is independent of capsules at other locations in the same layer. The c_ij has been included, in step 1 (Eqn. 2).\"}",
"{\"title\": \"Thanks for the updates\", \"comment\": \"The added details has helped tremendously in clarifying the method and experiments. Thanks for adding the references as well. I would still suggest explaining in the paper why Chen et al is not comparable to this study.\\n\\nThe only part that I'm still confused is the reason behind the averaging of convolutional probabilities: P(Ck|i) = SUM_n Pn(Ck|i)/N. A convolutional instance of a capsule routes to one location of one of the types (the parent normalization should be over [types,kernel,kernel] and not just types since as you quoted each instance receives different feedbacks for the [kernelxkernel] instances of each type in next layer. For convolutional capsules you should have P(Ck | i,h,w): what is the probability of belonging to C_k for type i at position h & w. Also the Capsule routing gives you the routing factor for both positions and types: c_{ihw}{jh'w'}. \\nThen P(Ck| i,h,w) = sum_j,k,k' c_{i,h,w}{j,h+k,w+k'} P(Ck | j, h+k, w+k'). Should I assume you are sharing the c_ij for all positions and that's why you are averaging with the same weight rather than multiplying to their routing factors?\"}",
"{\"title\": \"PDF/latex updates\", \"comment\": \"The new updates (Dec. 5) we made to the latex/pdf file include the following:\\n\\n1) we separate Table 3 in the original pdf into two tables, for model selection of Tr-CapsNets and U-Nets respectively. \\n\\n2) The best Tr-CapsNet model (4x20) actually outperforms the best U-Net (3x7) in every fold of the final cross-evaluation. We therefore added a table (Table 4) to include the numbers, and updated the \\\"Overall average performance\\\" paragraph in section 4.2, as follows:\\n\\n{\\\\bf Overall average performance} Through the one-split validation, we identified the potentially best setups for both Tr-CapsNet and modified U-Net, which are Tr-CapsNet- 4\\u00d720 and U-Net-3\\u00d77, respectively (refer to APPENDIX A for detailed layer configurations). We then carried out a nine- fold cross-validation on the entire 108 data points. Table 4 shows the segmentation accuracies of Tr-CapsNet and U-Net in all the nine folds, followed by the averages. As evident, Tr-CapsNet obtain a higher Dice ratio in each fold. On average, the Dice ratio for Tr-CapsNet- 4\\u00d720 is 87.25 with a standard deviation 5.05, while U-Net-3\\u00d77 obtains 86.23 with a standard deviation 2.19. 
In other words, our model outperforms the best U-Net model.\\n\\n\\\\begin{wrapfigure}{R}{0.45\\\\textwidth}\\n\\\\begin{tabu}{c c c }\\n \\\\hline\\n \\\\hline\\n Fold & Tr-CapsNet & U-Net \\\\\\\\ \\n \\\\hline\\n 1 & \\\\makecell{ \\\\textbf{88.86} $\\\\pm$ 1.628} & \\\\makecell{88.41 $\\\\pm$ 1.707 } \\\\\\\\\\n 2 & \\\\makecell{ \\\\textbf{87.84} $\\\\pm$ 5.674} & \\\\makecell{86.03 $\\\\pm$ 3.347 } \\\\\\\\\\n 3 & \\\\makecell{ \\\\textbf{86.53} $\\\\pm$4.678} & \\\\makecell{ 85.42 $\\\\pm$ 3.005 } \\\\\\\\\\n 4 & \\\\makecell{\\\\textbf{85.73}$\\\\pm$ 7.352} & \\\\makecell{85.21 $\\\\pm$ 2.301 } \\\\\\\\\\n 5 & \\\\makecell{\\\\textbf{87.09}$\\\\pm$ 4.190} & \\\\makecell{86.53 $\\\\pm$ 2.159 } \\\\\\\\\\n 6 & \\\\makecell{\\\\textbf{83.56} $\\\\pm$ 1.257} & \\\\makecell{82.34$\\\\pm$ 1.369} \\\\\\\\\\n 7 & \\\\makecell{\\\\textbf{88.57} $\\\\pm$ 1.783} & \\\\makecell{ 87.14 $\\\\pm$ 1.374 } \\\\\\\\\\n 8 & \\\\makecell{\\\\textbf{88.23}$\\\\pm$ 4.506} & \\\\makecell{86.96 $\\\\pm$ 2.071 } \\\\\\\\\\n 9 & \\\\makecell{\\\\textbf{88.82}$\\\\pm$ 3.005} & \\\\makecell{88.03 $\\\\pm$ 2.333 } \\\\\\\\\\n \\\\hline\\n \\\\textcolor{red}{Average} & \\\\makecell{\\\\textbf{87.25} $\\\\pm$ 3.786} & \\\\makecell{86.23 $\\\\pm$ 2.193 } \\\\\\\\\\n \\\\hline\\n\\\\end{tabu}\\n\\\\captionof{table}{\\\\textcolor{red}{Hippocampus segmentation accuracies on nine-fold\\n cross-validaton.}}\\n\\\\label{tab:hippo}\\n\\\\end{wrapfigure}\"}",
"{\"title\": \"Revisions & thoughts (part 2)\", \"comment\": \">> Finally, you trained your MNIST model with uniform noise in the ratio [1-5]. Implicitly, I assume that you apply this noise level on the real image space uint8. I thought that you applied the noise to make the segmentation more challenging. But with this small noise level, even the disturbed images are easily segmentable. Hence, I don't wanna use these results to make any judgement about your method. MNIST in this setting is to easy as segmentation task.\", \"response\": \"We agree with the review that MNIST itself is not difficult to segment. Again, the main purpose of this experience to explore the functionality of the recognition component, which we believe has been very well demonstrated with the occlusion cases.\"}",
"{\"title\": \"Revisions & thoughts\", \"comment\": \">> Comment: I'm noticing that you changed a lot and the current version of your paper addresses a lot of my concerns positively.\", \"response\": \"We believe our presentation in section 4.2 caused the confusion. In this Hippocampus experiment, we actually conducted the analysis in two stages. The first stage is basically a model selection step, where we compare the different setups for Tr-CapsNets and U-Nets, respectively, trying to select the \\u201cbest\\u201d models within their respective groups. This step is carried out with only one split/fold of the data, with no intention to compare Tr-CapsNets and U-Nets. Our original presentation, which put the results in one table, was confusing and misleading.\\n\\nWe have since clarified the experiment and separated the results into two tables (in the PDF) to avoid confusion. \\n\\nThe second stage is the head-to-head comparison of the best Tr-CapsNet (the 4x20 version) and U-Net (3x7). These comparisons were conducted with cross-validation, over 9 folds. In the original presentation, we only listed the average performance, where Tr-CapsNet-4x20 outperforms U-Net-3x7 (87.25 vs. 86.23). Actually, Tr-CapsNet did better than U-Net in every fold, which wasn\\u2019t mentioned in the original presentation. We have since added a new table (Table 4) to provide the details (as we cannot upload the PDF now, we include the latex of the table in the above \\\"PDF/latex updates\\\" comment). \\n \\nAs for the improvement made by Tr-CapsNet, 87.25 over 86.23 may not seem like a lot, but we believe it is quite significant and promising. Here is why. Firstly, it\\u2019s a 7.41% reduction of the error rate. Secondly, Hippocampi are difficult to segment and the ground-truth has a lot of noise. The Hippocampus is a small brain structure, with very indistinct boundaries with the surrounding areas. Being small makes the Dice ratio quite sensitive to any deviation from the ground-truth. 
The ground-truth masks were generated by the ADNI team in a semi-manual way. A number of 3D salient points (42 in total for each Hippocampus) were identified by human experts, followed by a 3D surface fitting, and the surfaces were then converted to binary masks. The noise in the ground-truth makes it impossible or even meaningless to shoot for 100% accuracy. In addition, for different human raters, or even the same rater conducting the procedure twice, there will be a considerable amount of disparity. Actually, there are studies comparing human performance, and ~87% Dice is about the disparity between experienced neurologists. Any number higher than this could be regarded as \\u201chuman level intelligence\\u201d. In other words, any improvement at this level would be quite difficult. A reduction of 7.41% in the error rate should not be taken lightly. \\n\\nWith all this said, the goal of this work is not to develop the most accurate Hippocampus segmentation solution. The contribution of our model lies more in the theoretical innovation, with a sincere purpose to dig out the potential of Capsules. While we certainly agree with the reviewers that the system needs to demonstrate practical usefulness (which we certainly did), we hope the significance of this work can be viewed in a broader spectrum. Our trace-back pipeline, demonstrated in this work for segmentation, is a brand-new idea and would have many applications, including detection, visualization and even detecting adversarials. \\n\\nThere are also very recent works (in NIPS\\u201918, PRCV\\u201918) demonstrating that capsule nets can be used for large images. With the new capsule models and ideas, our trace-back idea may lead to significant strides of development along this capsule direction.\"}",
"{\"title\": \"new updates\", \"comment\": \"We revised the manuscript (pdf) with the following additions. However, the PDF cannot be uploaded again, so we mention the items here, and include the latex in next comment.\\n\\n8) added an illustration of the segmentation results in Appendix B. The ground-truth masks (green color), results from Tr-CapsNet (left column, red) and U-Net (right column, magenta) of two slices are included.\\n\\n9) Updated section 4.2: results on Hippocampus dataset. A table (Table 4) showing the segmentation accuracies of the best Tr-CapsNet and U-Net in all the folds is added.\"}",
"{\"title\": \"Thanks for your comments and your current update\", \"comment\": \"Thanks for your work on improving the quality of your contribution. I'm noticing that you changed a lot and the current version of your paper addresses a lot of my concerns positively. But still, I keep my current vote for your contribution. Let me explain why:\\n\\nFirst of all, the additional segmentation results are not online, which makes it still hard for me to evaluate your contribution. \\n\\nSecond, your Table 2 confuses me. Since you are providing the results including the standard deviation, I assume that these are the results of a cross-validation. If this is true, then your Table doesn't fit your results in the text (see below the Table). Please correct me if I'm wrong. Moreover, a U-Net and your architecture are running at the same accuracy level, which makes me curious: what is now the benefit of your model? It could be clarified if you would present some images to compare the segmentation results of the U-Net and your network. Even if the performances are on the same level, maybe the segmentation results show differences.\\n\\nFinally, you trained your MNIST model with uniform noise in the range [1-5]. Implicitly, I assume that you apply this noise level on the real image space uint8. I thought that you applied the noise to make the segmentation more challenging. But with this small noise level, even the disturbed images are easily segmentable. Hence, I don't wanna use these results to make any judgement about your method. MNIST in this setting is too easy as a segmentation task.\"}",
"{\"title\": \"Thanks & our response (2/2)\", \"comment\": \">> 6.\\tWhat kind of noise is added to MNIST?\", \"response\": \"will certainly do. Thanks for your valuable comments.\"}",
"{\"title\": \"Thanks & our response (1/2)\", \"comment\": \">> The paper is well-written and well-explained. Nevertheless, I think it would be useful to have some illustrations about the network architecture. Some stuff which is explained in text could be easily visualized in a flow chart. For example, the baseline architecture and your Tr-CapsNet could be easily explained via a flow chart. With the text only, it is hard to follow.\", \"response\": \"In both capsule nets and our Tr-CapsNets, an iterative routing-by-agreement mechanism is used. In this mechanism, each capsule chooses its parent capsule in the higher layer through an iterative routing procedure. This is certainly a downside compared with CNNs. For one Hippocampal slice, the U-Net (feature map 4 x 20) and Tr-CapsNet (4 x 20, 1) take roughly 0.3 ms and 0.6 ms on average for inference, respectively.\"}",
"{\"title\": \"Thanks & our response\", \"comment\": \">> The writing could be tremendously improved if some background of the capsule networks is included.\", \"response\": \"We commented on LaLonde & Bagci\\u2019s work in the Discussion and Related Work section. Additional thoughts on their work and our models have been added to this revised manuscript (on page 10). We also added two paragraphs to comment on capsule-based solutions in general.\"}",
"{\"title\": \"Thanks & our response\", \"comment\": \">> The manuscript can benefit from a more clear description of the architecture used for each set of experiments. Specially how the upsampling is connected to the traceback layer.\", \"response\": \"We cite [1] and several more SOTA solutions in the revised manuscript. The main focus of this work is to explore the power of capsules in semantic segmentation, rather than to produce an extremely accurate hippocampus segmentation solution. The 9 views in [1] need to be combined through an ensemble net, which is outside the scope of this work.\"}",
"{\"title\": \"Thank all the reviewers & overall responses\", \"comment\": \"First of all, we\\u2019d like to thank all the reviewers for your time and expertise in identifying the areas of our manuscript that need to be corrected, clarified and improved. We have carefully read through all your comments and implemented additional components to thoroughly address your concerns, which will be explained in the following paragraphs.\\n\\nWe\\u2019ve integrated many updates into the revised manuscript. To help you navigate through them, we summarize the major changes as follows:\\t\\n\\n1) All the sections have been updated. The added & rewritten sentences are highlighted in red.\\t\\n2) We renamed the traceback layer to traceback pipeline, to better reflect the nature of the operations. \\n3) Background section: added paragraphs to provide more details on capsules and capsule nets. Comparisons are also made with CNNs. \\n4) Architecture section: \\n 4.1) rewrote the description of our proposed Tr-CapsNet model, with an additional figure to illustrate the overall network \\n structure, as well as the traceback pipeline. \\n 4.2) updated the text and caption of Figure 3.\\n5) Experimental results section: \\n 5.1) additional, more detailed description of the Hippocampus dataset.\\n 5.2) provided more implementation details.\\n 5.3) implemented and compared different traceback depths. \\n6) Discussion and related work section:\\n 6.1) conducted additional literature review on state-of-the-art solutions. \\n 6.2) discussion of CapsNets\\u2019 single-instance assumption and its influence on the capacity of our model. 
\\n 6.3) Proposed directions for future enhancements. \n7) Added an Appendix to provide the detailed network configurations.\"}",
"{\"title\": \"Original and interesting, requires further explanation of the architecture and experiment on multi-class segmentation\", \"review\": \"Authors present a trace-back mechanism to associate lowest level of Capsules with their respective classes. Their method effectively gets better segmentation results on the two (relatively small) datasets.\\n\\nAuthors explore an original idea with good quality of experiments (relatively strong baseline, proper experimental setup). They also back up their claim on advantage of classification with the horizontal redaction experiment. \\nThe manuscript can benefit from a more clear description of the architecture used for each set of experiments. Specially how the upsampling is connected to the traceback layer.\\nThis is an interesting idea that can probably generalize to CNNs with attention and tracing back the attention in a typical CNN as well.\", \"pros\": \"The idea behind tracing the part-whole assignments back to primary capsule layer is interesting and original. It increases the resolution significantly in compare to disregarding the connections in the encoder (up to class capsules). \\n\\nThe comparisons on MNIST & the Hippocampus dataset w.r.t the U-Net baseline are compelling and indicate a significant performance boost.\", \"cons\": \"Although the classification signal is counted as the advantage of this system, it is not clear how it will adopt to multi-class scenarios which is one of the major applications of segmentation (such as SUN dataset).\\n\\nThe assumption that convolutional capsules can have multiple parents is incorrect. In Hinton 2018, where they use convolutional Capsule layers, the normalization for each position of a capsule in layer below is done separately and each position of each capsule type has the one-parent assumption. 
However, since in this work only primary capsules and class capsules are used this does not concern the current experiment results in this paper.\\n\\nThe related work section should expand more on the SOTA segmentation techniques and the significance of this work including [2].\", \"question\": \"How is the traceback layer converted to image mask? After one gets p(c_k | i) for all primary capsules, are primary capsule pose parameters multiplied by their p(c_k |i ) and passed all to a deconv layer? Authors should specify in the manuscript the details of the upsampling layer (s) used in their architecture. It is only mentioned that deconv, dilated, bilinear interpolation are options. Which one is used in the end and how many is not clear.\", \"comments\": \"For the Hippocampus dataset, the ensemble U-Net approach used in [1] is close to your baseline and should be mentioned cited as the related work, SOTA on the dataset. Also since they use all 9 views have you considered accessing all the 9 views as well?\\n\\n\\n[1]: Hippocampus segmentation through multi-view ensemble ConvNets\\nYani Chen ; Bibo Shi ; Zhewei Wang ; Pin Zhang ; Charles D. Smith ; Jundong Liu\\n[2]: RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation\\nGuosheng Lin, Anton Milan, Chunhua Shen, Ian Reid\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A neat idea\", \"review\": \"This paper proposes a traceback layer for capsule networks to do semantic segmentation. Comparing to previous works that use capsule networks for semantic segmentation, this paper makes explicit use of part-whole relationship in the capsule layers. Experiments are done on modified MNIST and Hippocampus dataset. Results demonstrate encouraging improvements over U-Net. The writing could be tremendously improved if some background of the capsule networks is included.\\n\\nI have a question about the traceback layer. It seems to me that the traceback layer re-uses the learned weights c_{ij} between the primary capsules and the class capsules as guidance when \\u201cdistributing\\u201d class probabilities to a spatial class probabilistic heatmap. One piece of information I feel missing is the affine transformation that happens between the primary capsule and the class capsule. The traceback layer doesn\\u2019t seem to invert such a transformation. Should it do so? \\n\\nSince there have been works that use capsule networks for semantic segmentation, does it make sense to compare to them (e.g. LaLonde & Bagci, 2018) ?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good paper which seems technically correct. Not sure if the method will generalize well beyond MNIST.\", \"review\": \"Based on the CapsNet concept of Sabour, the authors propose a trace-back method to perform semantic segmentation in parallel with classification. The method is evaluated on MNIST and the Hippocampus dataset.\\n\\nThe paper is well-written and well-explained. Nevertheless, I think it would be useful to have some illustrations about the network architecture. Some material that is explained in the text could be easily visualized in a flow chart. For example, the baseline architecture and your Tr-CapsNet could be easily explained via a flow chart. With the text only, it is hard to follow. Please think about some plots in the final version or in the appendix. One related question: How many convolutional filters are used in the baseline model?\\n\\nAdditionally, consider adding pseudo-code for improved understandability. \\n\\nSome minor concerns/notes to the authors:\\n1.\\tOn page 5: You mentioned that the parameters lambda1 and lambda2 are important hyper-parameters to tune. But in the results you do not explain how the parameters were tuned. So my question is: How did you tune the parameters? In which range did you vary them?\\n2.\\tPage 6; baseline model: Why did you remove the pooling layers?\\n3.\\tI\\u2019m curious about the number of parameters in each model. To have a valid discussion about whether your model is better than the U-Net-6 architecture, I would take into account the number of parameters. In case your model is noticeably larger, it could be that your increased performance is just due to more parameters. As long as your discussion is without the number of parameters, I\\u2019m not convinced that your model is better. 
A comparison between models is only fair if the two models are architecturally similar.\\n4.\\tWhy is the magnitude of lambda1 so different between the two datasets that you used?\\n5.\\tCould you add the inference times to your tables and discuss them in addition?\\n6.\\tWhat kind of noise is added to MNIST?\\n7.\\tWhat is the state-of-the-art performance on the Hippocampus dataset?\\n8.\\tWhat would be the performance in your experiments with a MaskRCNN segmentation network?\\n9.\\tI\\u2019m not familiar with the Hippocampus dataset. I missed a reference where the data is available or some explanatory illustrations. \\n10.\\tFor both datasets, more illustrations of the segmentation performance would help to evaluate your method. At least in the appendix\\u2026\\n\\t\\nMy major concern is that both datasets do not deal with real background noise. I\\u2019m concerned that the results are not transferable to other datasets and that the method appears promising just because of the simple datasets. For example, due to the black background, MNIST digits are well separated (if we set aside that you added some kind of noise). So, from that point of view your results are not convincing and the discussion of your results appears sparse and incomplete.\\nTo make your results transparent, you could consider publishing the code somewhere.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1l6e3RcF7 | A Walk with SGD: How SGD Explores Regions of Deep Network Loss? | [
"Chen Xing",
"Devansh Arpit",
"Christos Tsirigotis",
"Yoshua Bengio"
] | The non-convex nature of the loss landscape of deep neural networks (DNN) lends them the intuition that over the course of training, stochastic optimization algorithms explore different regions of the loss surface by entering and escaping many local minima due to the noise induced by mini-batches. But is this really the case? This question couples the geometry of the DNN loss landscape with how stochastic optimization algorithms like SGD interact with it during training. Answering this question may help us qualitatively understand the dynamics of deep neural network optimization. We show evidence through qualitative and quantitative experiments that mini-batch SGD rarely crosses barriers during DNN optimization. As we show, the mini-batch induced noise helps SGD explore different regions of the loss surface using a seemingly different mechanism. To complement this finding, we also investigate the qualitative reason behind the slowing down of this exploration when using larger batch-sizes. We show this happens because gradients from larger batch-sizes align more with the top eigenvectors of the Hessian, which makes SGD oscillate in the proximity of the parameter initialization, thus preventing exploration. | [
"sgd",
"walk",
"regions",
"deep network loss",
"training",
"stochastic optimization algorithms",
"different regions",
"loss surface",
"question",
"exploration"
] | https://openreview.net/pdf?id=B1l6e3RcF7 | https://openreview.net/forum?id=B1l6e3RcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxmhd7kgE",
"ryeCmbgA2Q",
"B1eM35A_h7",
"BylzRFgP2Q",
"BygeI1GRiQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1544661163166,
1541435686209,
1541102249901,
1540979145967,
1540394824003
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1122/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1122/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1122/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1122/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1122/Authors"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers agree that the paper needs significantly more work to improve presentation and is not fully empirically and conceptually convincing.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}",
"{\"title\": \"Important line of research and novel ideas that lack precision and is limited by mere presentation of observations\", \"review\": [\"The subject of how a given algorithm explores the landscape is still a poorly understood area in training neural networks. There is a large body of recent work that attempts to shed light on this puzzle, and each one tries to claim their share in the furthering of the understanding of the relationship between the geometry of the landscape and the dynamics that one chooses in optimization. The present paper is a fine addition to the literature with interesting observations and novel questions, however, it falls short in many core areas: An apparent work in progress that has a great potential.\", \"It is safe to say that \\\"A walk with SGD\\\" has an important single focus in mind: Does the SGD cross over barriers in the weight space of the underlying neural network? This question, at its heart, is intimately linked with the many properties that are attributed to the modern algorithm of the choice and the way it navigates a given non-convex landscape. The paper claims to provide an almost negative answer to the question and thereby busting several myths that are attributed to the \\\"trick\\\" part of SGD algorithm. As good as it sounds, unfortunately, the paper falls short of providing a convincing evidence (be it theoretical or empirical), and the way it tries to frame itself unique and different in relation to related works only indicate a lack of deep understanding of the existing literature. Therefore, I think there are several ways the paper should be improved before it is ready.\", \"A major question (that I hope will easily be addressed) is on the definition of the barrier itself. 
According to the text, a barrier is defined judging by the minima of two 1-dimensional segments that connect weights connecting three consecutive steps: if the minimum of the line segment defined by the latter step is larger than the former, then it declared that a barrier is crossed. In a low dimensional world, this makes total sense, however, I fail to understand what kind of barrier it implies on the geometry of the landscape: Can the 1-dimensional lines be on the sides of a valley? Can one find *another* 1-dimensional projection for which the inequality is broken? How do such dependencies change the understanding of the problem? And if one is indeed only interested in the flat line segments (since SGD is making discrete steps), then one can, in principle, observe barrier crossing in a convex problem, as well? Is there an argument for otherwise? Or if it is a notion that applies equally well in a convex case then how should we really think about the barrier crossing? On the opposite point of view, can one not imagine a barrier crossing that doesn't appear in this triangular inequality above?\", \"The paper is full of empirical evidence that is guided by a simple observable that is very intuitive, however, it lacks a comprehensive discussion on the new quantity they propose that I consider a major flaw, but that I think (hope) that the authors can fix very easily. Some minor points that would improve the readability and clarity for the reader:\", \"The figures are not very reader-friendly, this can be improved by better using the whitespaces in the paper but it can also be improved by finding further observables that would summarize the observations instead of showing individual consecutive line interpolations.\", \"What are the values of the y-axis in Figure 5 and 6? 
Are they the top eigenvalues of the Hessian?\", \"In the models that are compared in Figure 7, what are their generalization properties (early stopping and otherwise)?\", \"The interpretation at the end of p. 6 may be a good motivation for the reader if it had been introduced earlier for that section.\", \"Finally, Section 5 reads very strangely, and I have a hard time understanding why certain phrases exist the way they are in this part. Here are further notes on Section 5:\", \"Why is the whole first paragraph focused on how the current paper is different from Goodfellow et al (2014) (it is obvious that it is for a different purpose), and why do we read the sentence \\\"Li et al (2017b) also visualize....\\\" In what way do they visualize, is it the only paper that does visualization, and what's the relation with the current paper and barrier crossing?\", \"For the second paragraph, I can suggest another paper, https://arxiv.org/abs/1803.06969, that was at ICML, which also looks at the diffusion process through the parameter distance at different times, which is similar to Hoffer et al., and which also claims no barrier crossing similar to the present paper.\", \"However, my main issue is the exact connection between diffusion and no barrier crossing and its connection to SGD preferring wide local minima instead of narrow ones. The second paragraph of the conclusion touches upon this subject. But it is not entirely clear how they are linked (except for the brittle SDE approximation of Li et al (see https://arxiv.org/abs/1810.00004)). 
Overall, the paper would benefit a lot from the discussion on why it is preferable to have SGD choose one basin over another in the beginning, as it is, it looks like the paper has another agenda behind the scenes.\", \"In the fourth paragraph of the conclusion, the paper refers to three papers that link DNN to spin glasses, in two of the (older) references the networks are far from what we have today, and the third one is far from \\\"showing\\\" anything between DNN and spin glass. In any case, what's the link between the aspects studied there with the present paper?\", \"Finally, the paper claims at the last few sentences that the works referred a little bit earlier look at the loss surface \\\"in isolation from the optimization dynamics\\\", however, many of those works cited have their empirical observations much like the current paper, and clearly they all \\\"study the DNN loss surface along the trajectory of SGD\\\" necessarily as it is the way to find local minima, saddle points, paths, curvature etc... The present paper is already very interesting and full of novel insight, I fail to see the value of struggling to stand out like this.\", \"Overall, I think the paper is a very interesting step forward in understanding SGD dynamics on the DNN landscape. And, even though it has many shortcomings as it currently stands, I think it has a lot of room to improve.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Two interesting ideas but somewhat disconnected\", \"review\": \"This paper explores the idea that mini-batch SGD rarely *crosses* barriers during DNN optimization, but rather uses a 'seemingly' or 'alternate' mechanism, as the authors somewhat mysteriously call it on the first page. In the second part of the paper, they also investigate why the loss surface is explored more slowly when the batch size increases.\\n\\nI found both parts of the paper reasonably interesting but not too surprising. My main concern is that both parts are, in themselves, not strong enough to warrant publication at ICLR, and the connection between them is rather weak. The authors write 'to complement this finding' to connect the first to the second investigation, but that's not connecting them very closely, is it?\\nI think it would be better to work out both insights in more detail and publish them in separate papers. \\nEspecially the second insight should be explored more thoroughly. For example, the authors write 'in convex optimization theory, when gradients point along the sharp directions of the loss surface, optimization exhibits under-damped convergence'. This is repeated later in different wording. But no reference to this result (I presume it's a mathematical theorem?) is given, neither here nor later when it is said again. The link from the convex to the nonconvex DNN case could also be established more convincingly. Everything became quite (too) heuristic at some point...\\n\\nA few small remarks (which did not influence my judgement):\\n- while in general (with the exception of the too-fast move from convex to nonconvex that I just explained) the paper is written quite clearly, the prose could be made significantly tighter. For example, the definition of what 'crossing a barrier' means is given three times (!) in the paper (two times in a figure, once in section 2). BTW, isn't it better to say 'moving *around* barriers' rather than 'over' barriers? 
You now use 'over' but it still sounds very similar to just 'crossing'. \\n- plural nouns are often combined with singular verbs ('measurements that ensures'). This happens not just once but all the time...\", \"pros\": [\"two nice little ideas; esp. the first one is well-explained\", \"easy to read\"], \"cons\": [\"ideas are not very surprising; and just tested on a few data sets; things could be more robust.\", \"second idea not fully convincingly explained\", \"(most important): the two ideas are not closely connected, making this a somewhat strange paper.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Qualitative analysis lacking key insights and not comprehensive enough?\", \"review\": \"I like the idea of trying to qualitatively illustrate the behavior of SGD when optimizing parameters of complex models, such as Deep and Conv Nets, but I think that the contribution is not very substantial. The connection between SGD and diffusion has been pointed out in previous papers, as acknowledged by the Authors. The study of the effect of batch size is interesting, but again somewhat derived from previous works.\\n\\nIt would helpful to illustrate the difference between \\\"crossing\\\" and \\\"moving over\\\" a barrier with a simple figure. \\n\\nThe experimental validation is interesting, although I think it is limited and perhaps the conclusions that can be drawn from it are not so surprising. I believe it would have been interesting to study other important factors that affect the behavior of SGD, such as learning rate and type of momentum. For example, a larger learning rate might allow for more crossing of barriers. Also, different SGD algorithms (ADAGRAD, ADAM, etc...) would behave considerably differently I expect. At the moment these important factors are overlooked. \\n\\nIt is not clear to me why we would want to avoid larger batch sizes. A larger batch size allows for a lower variance of stochastic gradients, and therefore faster convergence. I think this point requires elaboration, because this forms the motivation behind theoretically grounded and successful SGD works, such as SAGA and the like. I agree that a smaller batch-size is preferable at the beginning of the optimization, but again this is a well known fact (again, see SAGA) and it is for computational reasons mostly (being far away from the (local) mode, a noisy gradient is enough to move in the right direction - no need to spend computations to use an accurate gradient). 
There is no guarantee that the local optimum close to initialization is a bad local optimum in general, so I don't think that using a large batch size at the beginning is a bad idea for this reason - again it is just computational. \\n\\nAnother thing missing I think is the discussion around why it is potentially a good thing to cross the barrier, either at the beginning of the exploration or towards convergence to a local optimum. At the moment, the paper seems to report the behavior of SGD without key insights on the importance of crossing or avoiding crossing barriers.\\n\\nAs a concluding remark - there has been a lot of work on the connections between diffusions and MCMC algorithms (see e.g., the Metropolis Adjusted Langevin Algorithm - MALA) and a lot of the considerations made in the paper are somewhat known. That is, random walk/diffusion type MCMC (and even gradient-based MCMC like Hybrid Monte Carlo) struggle a lot in non-convex problems and they hardly move across modes of a posterior distribution (equivalent to crossing barriers of potential). So I'm not at all surprised that SGD does not cross barriers during optimization and I would challenge the statement in the introduction saying \\\"Intuitively, when performing random walk on a potential, one would expect barriers being crossed quite often during the process.\\\"\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Re: What does \\\"moving over a barrier\\\" mean?\", \"comment\": \"Apologies for the late reply; OpenReview did not notify us by email. Let us first try to describe \\\"moving over a barrier\\\" in completely non-mathematical and purely layman terms. To visualize what we mean by it, consider flying in an airplane and looking out the window below the airplane. While the height of the airplane above sea level (denoting training loss) may be fixed, the landscape below can have ups and downs (with respect to sea level). These ups and downs in the landscape below are what we refer to as \\\"moving over the barriers\\\" instead of crossing them. Here sea level acts as a reference and denotes 0 training loss.\\n\\nIn the last paragraph of page 2 (section 3), we described the above phenomenon using a geometrical construct where the model parameters go from state A to B to C due to 2 consecutive SGD update steps. When interpolating the loss between parameters A and B along the straight line connecting them, the interpolation looks like a convex quadratic with a minimum in between. The same thing happens between points B and C. Hence no barriers are crossed during the SGD update from A to B, or the update from B to C. However, since the minimum between A and B is lower than the minimum between B and C in the construct, there must exist a barrier along any path from the minimum between A and B to the minimum between B and C. This is what we referred to as \\\"moving over a barrier\\\" when SGD goes from A to B to C in this construct.\\n\\nThis behavior is in contrast with the traditional intuition that noise in SGD helps in exploring different regions of the non-convex loss surface by entering and then escaping local minima by jumping out of them, which would require SGD to cross barriers and is a slow process.\"}"
]
} |
|
S1lTg3RcFm | Perception-Aware Point-Based Value Iteration for Partially Observable Markov Decision Processes | [
"Mahsa Ghasemi",
"Ufuk Topcu"
] | Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcomes. In conventional POMDP models, the observations that the agent receives originate from a fixed known distribution. However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive. Due to the combinatorial nature of such a selection process, it is computationally intractable to integrate the perception decision with the planning decision. To prevent such expansion of the action space, we propose a greedy strategy for observation selection that aims to minimize the uncertainty in state.
We develop a novel point-based value iteration algorithm that incorporates the greedy strategy to achieve near-optimal uncertainty reduction for sampled belief points. This in turn enables the solver to efficiently approximate the reachable subspace of belief simplex by essentially separating computations related to perception from planning.
Lastly, we implement the proposed solver and demonstrate its performance and computational advantage in a range of robotic scenarios where the robot simultaneously performs active perception and planning. | [
"partially observable Markov decision processes",
"active perception",
"submodular optimization",
"point-based value iteration",
"reinforcement learning"
] | https://openreview.net/pdf?id=S1lTg3RcFm | https://openreview.net/forum?id=S1lTg3RcFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1l-Sf3ExE",
"BklEFRIqAQ",
"HkeUQ089RQ",
"ryxh9685Am",
"BylVdUNZR7",
"r1enwzJYn7",
"SyeBQi3d3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545024057092,
1543298684138,
1543298590162,
1543298452073,
1542698603695,
1541104228352,
1541094172823
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1121/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1121/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1121/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1121/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1121/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1121/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1121/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This was a borderline paper and a very difficult decision to make.\\n\\nThe paper addresses a potentially interesting problem in approximate POMDP planning, based on simplifying assumptions that perception can be decoupled from action and that a set of sensors exhibits certain conditional independence structure. As a result, a simple approach can be devised that incorporates a simple greedy perception method within a point-based value iteration scheme.\\n\\nUnfortunately, the assumptions the paper makes are so strong and seemingly artificial to the extent that they appear reverse engineered to the use of a simple perception heuristic. In principle, such a simplification might not be a problem if the resulting formulation captured practically important scenarios, but that was not convincingly achieved in this paper---indeed, another major limitation of the paper is its weak motivation. In more detail, the proposed approach relies on decoupling of perception and action, which is a restrictive assumption that bypasses the core issue of exploration versus exploitation in POMDPS. As model of active perception, the proposal is simplistic and somewhat artificial; the motivation for the particular cost model (cardinality of the sensor set) is particularly weak---a point that was not convincingly defended in the discussion. Perhaps the biggest underlying weakness is the experimental evaluation, which is inadequate to support a claim that the proposed methods show meaningful advantages over state-of-the-art approaches in important scenarios. A reviewer also raised legitimate questions about the strength of the theoretical analysis.\\n\\nIn the end, the reviewers did not disagree on any substantive technical matter, but nevertheless did disagree in their assessments of the significance of the contribution. 
This is clearly a borderline paper, which on the positive side, was competently executed, but on the negative side, is pursuing an artificial scenario that enables a particularly simple algorithmic approach.\\n\\nDespite the lack of consensus, a difficult decision has to be made nonetheless. In the end, my judgement is that the paper is not yet strong enough for publication. I would recommend the authors significantly strengthen the experimental evaluation to cover off at least two of the major shortcomings of the current paper: (1) The true utility of the proposed method needs to be better established against stronger baselines in more realistic scenarios. (2) The relevance of the restrictive assumptions needs to be more convincingly established by providing concrete, realistic and more challenging case studies where the proposed techniques are still applicable. The paper would also be improved if the theoretical analysis could be strengthened to better address the criticisms of Reviewer 4.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper: strong assumptions enable simplified approximate planning for restricted POMDPs\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the thoughtful and constructive feedback. Please find the authors\\u2019 response below.\\n\\n-----Explanation of theoretical parts\\n\\n*revision* The authors added more explanation to the theoretical proofs.\\n\\n-----Dependency of perception and planning actions\\n\\nThe point mentioned by the reviewer is a very nice extension, for instance, considering scenarios where the availability of information sources depends on the position of the agent. Adding such a dependency makes the problem much harder, since the belief points are almost surely a combination of all states.\\n\\nHowever, this is one of the directions that the authors are currently following. One idea is to account for possible future uncertainty reduction with respect to each belief point and, using a branch-and-bound type algorithm, prune the planning actions that cannot achieve the desired uncertainty given the current level of uncertainty. This would lead to a heuristics-based method that limits the planning actions considered at each belief point.\\n\\n-----Caption of Figure 5\\n\\n*revision* We revised the caption and added the color spectrum.\\n\\n-----Effect of cardinality constraint\\n\\n\\u2018k\\u2019 is a given integer, coming from the physical constraint in the problem, not a parameter of the algorithm. However, increasing \\u2018k\\u2019 leads to less uncertainty as more measurements are obtained, due to the monotonicity of entropy. Furthermore, this reduction has a diminishing-returns property, due to the submodularity of entropy. 
\\n\\n*revision* In a new appendix, the authors illustrate the effect of \\u2018k\\u2019 on the value function for each sampled belief point.\\n\\n-----Application over real-world robots\\n\\nAs part of the future work, the authors plan to extend the simulation to a swarm of UAVs, following individual tracking tasks while communicating their sensory data with a few selected UAVs in their communication range.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the thoughtful and constructive feedback. Please find the authors\\u2019 response below.\\n\\n-----Clarifying motivation\\n\\nThe costs of measurement acquisition and processing in terms of power, communication, and processing computations define the physical constraints on the agent\\u2019s perception. This leads to the definition of problem 1, where we capture these constraints, as in the majority of sensor selection problems, by a cardinality constraint. As the reviewer correctly points out, some problems may call for different constraints. For instance, a knapsack constraint can define a non-uniform cost over sensors, leading to a 0.5 approximation instead of 0.667 when using greedy algorithms. For homogeneous information sources, as in the simulations, a uniform cost is reasonable. For heterogeneous information sources, a knapsack constraint is a better choice.\\n\\nHaving defined the problem from physical constraints and demands, the computational complexity now arises from combining the selection actions with planning actions. This complexity has motivated the proposed solver, and the sentence \\u201cone must establish a trade-off between optimality and tractability\\u201d refers to the defined problem (with the cardinality constraint). Otherwise, if there is no limitation on the number of selected sensors, as the reviewer mentioned, one can use all the measurements, leading to less uncertainty and without the complexity of selection.\\n\\n*revision* The reviewer's assessment is completely valid regarding shortcomings in explaining the motivation. To resolve that, we edited the introduction of the paper to better convey the motivation and remove the ambiguity.\\n\\n-----Stronger empirical results\\n\\nThe backup step in almost all point-based solvers is the same, while the sampling and pruning steps rely on efficient heuristics. 
Therefore, comparing the proposed solver, which has a different backup step, with conventional sampling and pruning would not result in the desired comparison. We intentionally avoided using a specific sampling method in order to keep the solver general enough that it can be combined with any sophisticated sampling (and/or pruning) method.\\n\\nThe benchmarks in Kurniawati et al. cannot be used without significant modification since they lack perception actions. The designed experimental scenarios are based on Satsangi et al. (2018) and Spaan & Lima (2009), where POMDPs are used for active perception. The authors absolutely agree that more complex and more realistic simulations will better represent the importance of the proposed solver and plan to perform more empirical analysis as part of future work on a swarm of UAVs with tracking tasks and limited communication.\\n\\n*revision* The authors included the numerical values for 2-D navigation in a new appendix.\\n\\n-----Reason behind assumption 1\\n\\nDue to assumption 1, the sensors\\u2019 measurements only depend on the state and action, therefore eliminating the sensors\\u2019 effect on each other\\u2019s measurements, e.g., through noise from magnetic fields. This assumption is realistic in many practical settings, especially when the sensors are not at small scales, e.g., micro size.\\n\\n-----Tightness of bounds\\n\\nThe theoretical analysis for finding the bound follows a procedure similar to that of the classical analysis of point-based methods. The only difference is that the distance appearing between belief points comes from the difference between the greedy and optimal approaches, not from the density of points. As part of future work, the authors aim to refine the bound for special classes of measurement models, e.g., Gaussian measurements with bounded variance. 
\\n\\n-----Supporting computational advantage\\n\\n*revision* To support the statement \\u201cthis added complexity is significantly lower than concatenating the combinatorial perception actions with the planning actions\\u201d, the authors added an appendix detailing the complexity and its comparison with standard methods.\\n\\n-----Minor comments\\n\\n*revision* The manuscript is revised according to the minor comments.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for the thoughtful and constructive feedback. Please find the authors\\u2019 response below.\\n\\n-----Clarifying \\u201cnear-optimality\\u201d \\n\\nThe reviewer's point is indeed valid in that the near-optimality is with respect to entropy objective. As stated by the reviewer, the entropy, while widely used as an uncertainty metric, may not be the best measure in some settings. However, the intractability of combining perception and planning actions calls for introducing a separate uncertainty measure. As part of the future work, the authors plan to design task-oriented perception schemes that take into account the given task/rewards to shape a non-uniform (with respect to reward) measure of uncertainty.\\n\\n*revision* We revised the manuscript by emphasizing on the choice of entropy as one possible measure and clarifying what the near-optimality refers to.\\n\\n-----Restrictive assumptions\\n\\nAs the first step toward joint perception and planning for POMDPs, we focus on a special case of perception where the perception action is defined as selecting a limited number of available information sources. While restrictive, this encompasses an important type of perception that has been extensively studied in many applications in control systems and signal processing, in wireless sensor networks, as well as machine learning (Krause & Guestrin, 2007).\\n\\n*revision* We revised the introduction to better explain the applications.\\n\\n-----Simplifying belief notation\\n\\n*revision* We simplified the notation used for representing belief to improve the readability.\\n\\n-----Is argmax the same under both directions of KL divergence?\\n\\nWe thank the reviewer for mentioning this important point. The argmax would be the same under both directions if the distributions are restricted to have non-zero elements. 
This would be the case if the belief points are not on the boundary of the probability simplex.\\n\\n*revision* We added the required clarification regarding the distributions in the proof.\"}",
"{\"title\": \"Algorithmic/theoretical development is sound, but assumptions are questionable.\", \"review\": \"This paper proposes a planning algorithm for a restricted class of POMDPs where the sensing decisions do not have any bearing on the hidden state evolution, or any material cost in terms of reward. A sensing decision consists of querying k out of n sensors which yield independent measurements of the hidden state. In this setting, the authors propose a 2-stage optimization strategy: the first stage tries to find the optimal \\\"planning\\\" action in a point-based fashion, whereas the second aims to find the sensor configuration that reduces the entropy of the post-update belief-state. The key observation is that the entropy minimization step is submodular and can be approximated greedily. This in turn translates to policy approximation bounds via information-geometric arguments.\", \"the_positive\": \"the paper is well written (save for some contained parts), the algorithm looks to be generally sound. Altogether the paper makes good points and is an interesting read.\", \"the_negative\": [\"first, I think there is some wide-spread misuse of the term \\\"nearly optimal\\\". When talking about near-optimality, this usually refers to finding a (controllably) bounded approximation to the optimal policy/value function. However, here this refers to the error relative to the approximate solution produced by the 2-stage procedure of minimizing entropy, then making a planning decision. It is not clear to me to begin with that this approach would produce bounded policies/value functions. As a counterexample, consider a state space consisting of two state variables S1, S2 which evolve independently with additive reward R(S1, a) + R(S2, a) with R(S1, a) != 0, while R(S2, a) = 0 for all actions a. 
Now, there could be a sensing configuration that collapses the uncertainty over S2 completely, but does nothing over S1, and a different one that gives some small reduction of uncertainty over S1 and nothing over S2. The former may outperform the latter to any degree in terms of belief state entropy, but it will not lead to an optimal policy, since that entropy reduction is not value directed. Unless I misunderstand something, in which case the authors should clarify.\", \"Second, the particular assumptions in this paper are quite restrictive. This paper generally reads like a solution that was fit to a problem. This really hurts the story of the paper. It would be a vast improvement if the authors could find at least one plausible problem where there's a compelling case for this particular configuration of assumptions and try to evaluate how well they do on that problem relative to some reasonable baseline.\"], \"remarks\": [\"The belief state notation used in this paper inflicts undue suffering upon the reader. It comes in the form of expressions with multi-level sub/superscripts and accents such as: \\\"b prime subscript b tilde superscript a superscript pr comma omega\\\". This is extremely hard to parse and possibly unnecessary, as b prime subscript b tilde and b tilde are the only configurations of accents and subscripts that appear. These could just be called alpha and beta and the rest is clear from the context.\", \"The claim in theorem 4, that the argmax is the same under both directions of the KL divergence, is not obvious. It is definitely not true for minimization, otherwise the I-projection and the M-projection would coincide. This should be argued. 
Alternatively, this point can be skipped altogether, since Pinsker's bound, which is the only place this is used, does not depend on the direction of KL.\", \"Overall, this paper raises some nice points, but with these problems it is not a clear accept.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Decomposition of observation acquisition and action planning in POMDPs: Insufficient motivation and results\", \"review\": \"This work addresses the problem of decomposing the observation acquisition from action planning in POMDPs. Unfortunately, the paper has two major weaknesses. First, it is hindered by a confusing motivation, and the lack of clarity on the real purpose of the work is a problem throughout. Second, the experiments are insufficient given current standards in the literature.\\n\\n1. Motivation: The introduction suggests that the main motivation is to reduce the computational cost of treating all observations within planning (\\u201cone must establish a trade-off between optimality and tractability\\u201d), though later, different reasons are offered (\\u201cpower, processing capability, and cost constraints\\u201d). It seems that each of these poses different constraints, and depending on which we are most concerned with, a different approximation scheme should be selected. For example, if the real concern is tractability of the POMDP solution, I don\\u2019t see why it\\u2019s not possible to just acquire all the sensor information, and afterwards decide how to approximate the tracking (this by the way is what most point-based POMDP methods effectively do). For the proposed AP^2-POMDP, possibly a more reasonable motivation is the high cost of observation acquisition; a clean argument would have to be made about the class of problems for which this constraint is crucial, and why the cardinality of the sensor set is the right way to articulate this constraint.\\n\\n2. Experimental results: The domains selected for the experiments are too simple, given current standards in the literature. Looking at the 1D and 2D domains is fine to illustrate specific properties of your methods. But it does not support the claim that the proposed model is more scalable than standard POMDPs. In such simple domains, why not include results for a point-based method? 
They should work in the 1D domain, and probably also in the 2D domain. Also, the setting for the 1-D domain, with a camera in every cell, seems very artificial. First, if sensors are expensive, why put a camera in every cell? And if they are not expensive, then why do we need to reason about which sensor to use at each step? And why just read from k cameras at every step? These questions point back to the concern regarding what is the real motivation for this work. For the 2D domain, there are not even quantitative results on cumulative reward. To be convincing, the results would need to be on substantially more complex domains; there are several POMDP benchmarks that could be considered, e.g. those in the work of Kurniawati et al.\", \"other_comments\": [\"Assumption 1 states that the observations from sensors are mutually independent given the state and action. Can you explain why this is reasonable? Or whether this is a strong assumption (unlikely to be met in practice)?\", \"Some of the bounds seem like they could be very loose in practice, even (in the worst case) worse than the default bound of (R_max-R_min)/(1-\\\\gamma). For example in Thm 3, in the case where the L1 distance between the 2 beliefs is 2, this is worse than the default bound. Did you check what is the bound for the domains in the experiments? Is it tighter than this?\", \"A key statement is on p.8: \\u201cthis added complexity is significantly lower than concatenating the combinatorial perception actions with the planning actions\\u201d. 
It is important to support this statement, ideally with both a precise complexity analysis, and with empirical results showing the lesser performance of standard point-based methods.\"], \"minor_comment\": [\"The referencing style is broken and should be fixed, in particular proper use of Author (year) in the text.\", \"The derivations in the top part of p.4 (Eqn 2-4 & surrounding text) are confusing, given that these apply to a standard POMDP, whereas on the previous page your present the AP^2-POMDP model. It might be better in Sec.2 to first (briefly) introduce POMDPs, with Eqn 2-4, then introduce AP^2-POMDP in Sec.3.\", \"P.5: \\u201cIt is worth noting that the objective function does not explicitly depend on perception actions\\u201d. This is a confusing statement; V depends on observations through b_t. The next sentence clarifies this, but it would be better to avoid the confusing statement.\", \"Alg.2: Add a reference beside the title (unless you claim it is new). Maybe Pineau et al. 2003.\", \"P.7: \\u201ccan be combined with any sampling and pruning method in other solvers\\u201d -> Add references for such sampling & pruning methods.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review for \\\"Perception-Aware Point-Based Value Iteration for Partially Observable Markov Decision Processes\\\"\", \"review\": \"Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcomes. In conventional POMDP models, the observations that the agent receives originate from a fixed known distribution. However, in a variety of real-world scenarios the agent has an active role in its perception by selecting which observations to receive. Due to the combinatorial nature of such a selection process, it is computationally intractable to integrate the perception decision with the planning decision.\\n\\nThe authors propose a new form of POMDPs called AP2-POMDP, which takes active perception into account. The AP2-POMDP problem restricts the maximum number of sensors that can be selected by an agent. The agent also faces the planning problem of selecting the sensors. To prevent such expansion of the action space, the authors propose a greedy strategy for observation selection and obtain a near-optimal bound based on submodular optimization.\\n\\nThe authors also propose a greedy-based scheme for the agent to find an almost optimal active perception action by minimizing the uncertainty of beliefs, and prove near-optimal guarantees for this greedy method. The authors further propose a novel perception-aware point-based value iteration to calculate the value function and obtain the policy, and conduct an interesting simulation experiment, which shows that the robot has less uncertainty when taking planning actions under the proposed solver.\\n\\nThe contribution is significant to the reinforcement learning community. The writing is in general clear. It can be improved with minor modifications, for example, explaining math equations better in English. 
\\n\\nMy main comment for the authors is whether they have considered the scenario where the perception and the planning actions are connected. I agree with the authors that the best strategy for perception is to reduce uncertainty (and indeed, the greedy approach yields a near-optimal performance), given the restricted situation that the perception and planning are two separated processes. Nonetheless, in most real-world applications, the two processes are coupled, and therefore, we face, immediately, the trade-off between exploration and exploitation. I wonder if the authors have considered how they can extend their approach to such scenarios.\", \"a_few_minor_comments\": \"(i) The authors should add a legend and perhaps, more explanation in the captions of Figure 5. The colors of the heat-map are confusing. If dark blue and dark red represent lowest and highest frequency, what about other colors? Are there obstacles placed in the grid? If so, are they placed as shown in Figure 3(b)?\\n\\t\\n(ii) What is the effect of k, the maximum number of sensors to be placed? Can the authors provide a figure showing the change of performance with varying k?\\n\\t\\n(iii) It will be more convincing if the author deploys this algorithm to real-world robots and demonstrate its effectiveness.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
r1lpx3A9K7 | Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference | [
"Ruying Bao",
"Sihang Liang",
"Qingcan Wang"
] | Deep neural networks have been demonstrated to be vulnerable to adversarial attacks, where small perturbations intentionally added to the original inputs can fool the classifier. In this paper, we propose a defense method, Featurized Bidirectional Generative Adversarial Networks (FBGAN), to extract the semantic features of the input and filter the non-semantic perturbation. FBGAN is pre-trained on the clean dataset in an unsupervised manner, adversarially learning a bidirectional mapping between a high-dimensional data space and a low-dimensional semantic space; also mutual information is applied to disentangle the semantically meaningful features. After the bidirectional mapping, the adversarial data can be reconstructed to denoised data, which could be fed into any pre-trained classifier. We empirically show the quality of reconstruction images and the effectiveness of defense. | [
"bidirectional gan",
"adversarial defense",
"classifier",
"defense",
"fbgan",
"bidirectional mapping",
"semantic inference",
"vulnerable",
"adversarial attacks"
] | https://openreview.net/pdf?id=r1lpx3A9K7 | https://openreview.net/forum?id=r1lpx3A9K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJgqyiqHl4",
"BJxwxKK9R7",
"SkgRopH90X",
"rJgINoH9CQ",
"BJgNu_B5AX",
"S1lODbPZp7",
"r1lOxLgv3X",
"HJgJ3w4LhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545083617747,
1543309551479,
1543294373941,
1543293741622,
1543293036143,
1541661023538,
1540978159692,
1540929447172
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1120/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1120/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1120/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1120/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1120/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1120/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1120/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1120/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers agree the paper is not ready for publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}",
"{\"title\": \"some points taken, main concerns are not addressed\", \"comment\": \"2. \\\"DefenseGAN is broken\\\"\\nYou are right about DefenseGAN not being broken.\\n\\n3. The evaluation methods used in our work are standard methods which are widely used in all other previous adversarial defense works.\\n\\nFGSM and PGD are indeed widely used, but many previously proposed defences used additional attacks (like transfer-based, score-based, decision-based). Please check https://arxiv.org/pdf/1802.05666.pdf for an in-depth discussion of this issue.\\n\\n4. \\\"Gradient masking\\\"\\nYou are right that the gradient masking effects visible in the gray-box attack don't necessarily indicate gradient masking in the white-box setting (but it still means that the hyperparameters of the attack have not been tuned properly).\\n\\nGiven the discussion I will increase my score by one point, but the lack of a reliable robustness evaluation and the reduced novelty compared to DefenseGAN still puts it below the acceptance threshold in my opinion.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your comments. We really appreciate your contribution and time. Here is our feedback to your concerns:\\n\\n1. \\\"Novelty: several defences are based on a similar principle and the contributions of this paper are unclear.\\\"\\n\\nAt the time we submitted our paper, the only relevant defense mechanism we had noticed was DefenseGAN. There were also some other defense mechanisms which leveraged generative models, but none of them attempted to extract semantic codes from the adversarial images, which is the main novelty of our model.\\n\\nOur contribution is as follows. FBGAN is the first model trying to understand the semantic meaning of an adversarial image and using this semantic meaning to reconstruct the original one. Our model can easily be applied after training on the original data. On the contrary, DefenseGAN, for example, which also leverages the generative capability of GAN, needs to search the generated sample space every time it meets a new adversarial sample. FBGAN is not only faster but also has better performance than other generative-model-based defense methods.\\n\\n2. \\\"DefenseGAN is broken: the most similar work, DefenseGAN, has already been broken by Athalye et al. 2018, which is not discussed. The attacks deployed in this paper do not break DefenseGAN.\\\"\\n\\nWe are very glad this reviewer mentioned the paper by Athalye et al. 2018. This work provided a very good method called BPDA, which can defeat all the seemingly strong methods related to so-called obfuscated gradients in last year\\u2019s ICLR. However, in that paper, they mentioned that DefenseGAN was NOT broken at the time they wrote the paper. In addition, BPDA is an attack method designed for obfuscated-gradient-masking defense methods, which has nothing to do with either DefenseGAN or our FBGAN. Nonetheless, we are still happy to provide our defense result against the BPDA method in that paper. 
Please see the second point in our reply to AnonReviewer1 for experiment details.\\n\\n3. \\\"Insufficient evidence: The evaluation is minimal (only FGSM and PGD, no decision-, transfer- or score-based attacks) and insufficient to support the claims.\\\"\\n\\nThe evaluation methods used in our work are standard methods which are widely used in all other previous adversarial defense works. We don\\u2019t think the methods this reviewer mentioned are popular or necessary to show the effectiveness of our work.\\n\\n4. \\\"Gradient masking: There is at least one clear sign of gradient masking in the results (FGSM performing better than PGD).\\\"\\n\\nThe reviewer believes that there exists gradient masking in Figure 5 (b). However, Figure 5 (b) shows the results of the gray-box attack, and the gray-box attack is calculated on the original non-robust classifier, so there is no gradient masking at all. Although gradient masking may result in the defense accuracy under PGD being better than that under FGSM, it is not true that gradient masking is the only possible cause of this phenomenon. Also, our new experiment shows that BPDA, an attack method that works well on defenses utilizing gradient masking, fails on our FBGAN (the detailed experiment results are shown in the feedback for AnonReviewer1).\\n\\nThanks again for your feedback.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your comments, and we really appreciate your contribution and time. Here is our feedback on your concerns:\\n\\n1. \\\"Only two attacks are considered (FGSM and PGD).\\\"\\n\\nFGSM is the simplest and fastest adversarial attack method, and it is widely used as a first-step robustness check in all state-of-the-art defense papers. As far as we know, most of the attack methods invented by researchers in this community are based on the FGSM prototype, but with different approaches to applying gradients iteratively. Among those methods, PGD has been shown to be the strongest representative. Thus, our defense results under FGSM and PGD are enough to show our model's robustness.\\n\\n2. \\\"DefenseGAN is similar in defense mechanism but the authors do not attempt to use the attacks of Athalye et al 2018 (ICML 2018) in their evaluation. \\\"\\n\\nAnonReviewer3 and his/her reference [1] claim that \\\"Defense-GAN is broken\\\" under the BPDA attack proposed by Athalye et al. (2018). However, Table 1 in Athalye et al. (2018) shows that Defense-GAN is one of the only two survivors under BPDA. We think that BPDA is an effective attack for gradient-masking defense methods, but not for generative-model-based methods, so we did not test BPDA in our original paper.\\n\\nAt the reviewers' request, we implemented the following BPDA experiment: Recall that our prediction is C(G(E(x))), where E, G and C are the encoder, generator and classifier, respectively. Following Sections 4.1 and 5.4 and Appendix B of the BPDA paper, we approximate the backward pass of G(E(x)) with the identity function to calculate the adversarial images. For MNIST, the defense accuracy under Carlini and Wagner\\u2019s attack is 94.8% where the l_2 perturbation is 4.42, and the defense accuracy under the PGD attack with l_\\\\infty perturbation 0.3 is 91.6%. This suggests that FBGAN is robust under attacks aimed at gradient masking.\\n\\n3. 
\\\"In Figure 5b, the attack FGSM performs better than PGD, but FGSM is the single step case of PGD. This indicates that the attacks were not tuned properly, as you should always have PGD as a stronger attacker than FGSM.\\\"\\n\\nFrankly speaking, we don't quite understand the meaning of \\\"the attacks were not tuned properly\\\". The attack methods we used were all from CleverHans, and we are confident that we used them properly. In addition, for methods like FGSM and PGD, the attack performance depends only on the bound of the perturbation. Furthermore, it is not actually necessary for PGD to always outperform FGSM in adversarial attacks. PGD is a multi-step gradient-based method and FGSM is a single-step method. Their relative performance depends on the landscape of the objective function, which is still an open question for the deep learning community.\\n\\n4. \\\"The method does not perform as well as adversarial training in standard defense tasks.\\\"\\n\\nAs mentioned in the main text, the defense performance of our FBGAN depends highly on the training of the GAN. Adversarial training is a method that only requires worst-case optimization during the training process and does not require training extra networks. Thus, it is unfair to compare these two different mechanisms directly. Compared with methods of the same category that also use a generative model as a reconstructor, for example DefenseGAN, our model outperforms it by 1.4% and 5.0% under FGSM attacks with perturbation 0.1 and 0.3, respectively. \\n\\nThanks again for pointing out those typos; we will correct all of them in our next revision.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your comments, and we really appreciate your contribution and time. Here is our feedback on some of your concerns.\\n1. Weaker performance of FBGAN than adversarial training. Our argument is meant to emphasize that, with FBGAN, we do not need to re-train the classifier when adversarial examples from different attacks appear; adversarial training, however, does need to re-train the classifier for different attacks. This is the limitation of adversarial training that we try to point out. Actually, the defense mechanisms of our FBGAN and adversarial training are quite different. Adversarial training improves the accuracy of the classifier by having access to as many adversarial examples with their corresponding correct labels as possible, while FBGAN only needs to train one classifier on the original clean data, and we may use this classifier to defend against different attacks without any re-training.\\n\\n2. Both PixelDefend and our FBGAN are based on generative models; however, the mechanisms by which they utilize generative models are quite different. PixelDefend reconstructs images from adversarial examples pixel by pixel, without regard to the overall structure or semantic meaning of the images; FBGAN first learns the semantic meaning of adversarial examples, and then uses it to reconstruct the images. \\n\\nThanks again for your valuable feedback.\"}",
"{\"title\": \"Review\", \"review\": \"Summary:\\nThis paper gives a novel adversarial defense that consists of denoising images before classification. The denoising procedure consists of passing an image through a bidirectional GAN, which the authors use to map inputs to the latent space and then back to the original input space.\", \"novelty\": \"The exact mechanism through which this paper operates is novel, but many similar defenses have been proposed before that involve a latent space mapping followed by a mapping back to the original space; examples include DefenseGAN and PixelDefend.\", \"concerns\": [\"The evaluation is not thorough enough: Only two attacks are considered (FGSM and PGD, with the former being strictly weaker than the latter)\", \"DefenseGAN is similar in defense mechanism but the authors do not attempt to use the attacks of Athalye et al 2018 (ICML 2018) in their evaluation. We thus do not have strong lower bounds on adversarial robustness.\", \"In Figure 5b, the attack FGSM performs better than PGD, but FGSM is the single step case of PGD. 
This indicates that the attacks were not tuned properly, as you should always have PGD as a stronger attacker than FGSM\", \"The method does not perform as well as adversarial training in standard defense tasks\", \"Several writing/clarity errors (detailed below)\"], \"smaller_edits\": \"\", \"page_2\": \"bullet 1: under our contribution: line 3: \\\"which are unchanged\\\" instead of \\\"which is unchanged\\\"\", \"page_3\": \"Section 2.2: paragraph 2: line 2: \\\"here are two most famous attacks\\\" missing \\\"the\\\" before \\\"two most famous\\\"\", \"page_4\": \"Section 3.2: first paragraph: line 4: \\\"the latent codes is decomposed\\\" should be \\\"are\\\" instead of \\\"is\\\"\", \"page_5\": \"Section 4: Paragraph 1: last line: \\\"are those have access \\\" should be \\\"are those which have access\\\" missing which/that\", \"page_6\": \"Last paragraph: Line 1: \\\"the attacker can only access to the classifier\\\" there is no need for \\\"to\\\"\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novelty and evidence is not yet sufficiently clarified\", \"review\": \"This work proposes to defend against adversarial examples by \\u201cdenoising\\u201d the input image through an autoencoder (a BiGAN trained similarly to InfoGAN) before classifying it with a standard CNN. The robustness of the model is evaluated on the L_infinity metric against FGSM and PGD.\", \"my_main_criticism_is_as_follows\": \"* Novelty: several defences are based on a similar principle and the contributions of this paper are unclear.\\n* Insufficient evidence: The evaluation is minimal (only FGSM and PGD, no decision-, transfer- or score-based attacks) and insufficient to support the claims.\\n* Gradient masking: There is at least one clear sign of gradient masking in the results (FGSM performing better than PGD).\\n\\n### Novelty\\nThe only prior work against which the paper compares is DefenseGAN. The only advantage over DefenseGAN being stated is performance (because no intermediate optimisation step is used). However, besides DefenseGAN there are several other defences that project the input onto the learned manifold of \\u201cnatural\\u201d inputs, including (see prior work section in [1] for an up-to-date list):\\n\\n* Adversarial Perturbation Elimination GAN\\n* Robust Manifold Defense\\n* PixelDefend (autoregressive probabilistic model)\\n* MagNets\\n\\n### Insufficient evidence\\nThe only attacks employed are two gradient-based techniques (FGSM and PGD). It is known that gradient-based techniques may suffer from gradient masking (see also next point) and that the effectiveness of different attacks varies greatly (which is why one should use many different attacks). Hence, a full evaluation of the model should include score-based and decision-based attacks.\\n\\n### Gradient masking\\nIn Figure 5 (b) the FGSM attack performs better than PGD for epsilon = 0.05 (66.4% vs 71.5%). 
PGD, however, should be strictly more powerful than FGSM if the gradients and the hyperparameters are ok.\\n\\nGradient masking is the primary reason for why 95% of all proposed defences turned out to be ineffective, and there are good reasons to believe that the same might affect this defence. The robustness evaluation has to be much more thorough and convincing before any substantiated claims about the bidirectional architecture proposed here can be derived. In addition, the difference to prior work has to be made much clearer.\\n\\n[1] Schott et al. \\u201cTowards the first adversarially robust neural network model on MNIST\\u201d\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Performs worse than adversarial training\", \"review\": \"This paper presents a new adversarial defense based on \\\"cleaning\\\" images using a round trip through a bidirectional gan. Specifically, an image is cleaned by mapping it to latent space and back to image space using a bidirectional gan. To encourage the bidirectional gan to focus on the semantic properties, and ignore the noise, the gan is trained to maximize the mutual information between z and x, similar to the info gan.\", \"pros\": \"1. The paper presents a novel (as far as I am aware) way to defend against adversarial attacks by cleaning images using a round trip in a bidirectional gan\", \"cons\": \"1. The method performs significantly worse than existing techniques, specifically adversarial training.\\n\\t\\ta. The authors argue \\\"Although better than FBGAN, adversarial training has its limitation: if the attack method is harder than the one used in training(PGD is harder than FGSM), or the perturbation is larger, then the defense may totally fail. FBGAN is effective and consistent for any given classifier, regardless of the attack method or perturbation.\\\"\\n\\t\\tb. I do not buy their argument, however, because one can simply apply the strongest defense (PGD 0.3 in their results) and this outperforms their method in *all* attack scenarios. And if someone comes out with a new stronger attack there's no guarantee their method will be strong defense against that method\\n\\t2. The paper is not written that well. 
Even though the technique itself is very simple, I was unable to understand it from the introduction, and didn't really understand what they were doing until I reached the 4th page of the paper.\", \"missing_citation\": \"\", \"pixeldefend\": \"Leveraging Generative Models to Understand and Defend against Adversarial Examples (ICLR 2018)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1lTg3RqYQ | Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency | [
"Liqian Ma",
"Xu Jia",
"Stamatios Georgoulis",
"Tinne Tuytelaars",
"Luc Van Gool"
] | Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT not only translates the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. | [
"image-to-image translation",
"image generation",
"domain adaptation"
] | https://openreview.net/pdf?id=S1lTg3RqYQ | https://openreview.net/forum?id=S1lTg3RqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hyg_T-vggV",
"SJguY0hSCX",
"S1e9uT3BAX",
"HJlpBp2rCX",
"S1lxxj3BA7",
"S1g7d5hHRm",
"Skg7ixp5nX",
"rJgsuogq2Q",
"HkxhpFFSn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544741311593,
1542995584199,
1542995314096,
1542995268666,
1542994663912,
1542994539224,
1541226650763,
1541176179205,
1540884932148
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1119/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1119/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1119/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1119/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1119/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1119/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1119/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1119/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1119/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes an image-to-image translation technique which decomposes into style and content transfer, using a semantic consistency loss to encourage corresponding semantics (using feature masks) before and after translation. Performance is evaluated on a set of MNIST variants as well as on simulated-to-real-world driving imagery.\\n\\nAll reviewers found this paper well written with a clear contribution compared to related work, by focusing on the problem where one-to-one mappings are not available across two domains which also have multimodal content or sub-style. \\n\\nThe main weakness, as discussed by the reviewers, relates to the experiments and whether the provided set effectively validates the proposed approach. The authors argue for their use of MNIST as a toy problem with full control to clearly validate their approach. Their semantic segmentation experiment shows modest performance improvement. Based on the experiments as is and the relative novelty of the proposed approach, the AC recommends poster and encourages the authors to extend their analysis of the current results in a final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"New technique for semantic consistency for transfer across heterogeneous domains with preliminary empirical evidence\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank Reviewer 3 for the constructive review and detailed comments.\\n\\n1. Ablation study \\nIn our paper, we present the ablation study on the MNIST-Single dataset because it is a more controlled setting where we can generate ground truth for comparisons. Furthermore, as mentioned in a previous answer, we believe that this simplified experiment offers a more direct and intuitive way to evaluate the expected translations compared to e.g. street-view translation. However, as Reviewer 3 suggested, it is interesting to show the ablation study results on more complex examples too. As such, we added these results in the supplementary material in Fig. 15. We observe that: 1) removing the feature mask leads to color mismatches or inaccuracies (e.g. Fig. 15(a) 1st row 3rd col); 2) removing AdaIN reduces the model to unimodality (e.g. all images are translated to a sunny day with blue sky, see Fig. 15(a) 4th col) since the output image's style is not guided by the exemplar image; 3) removing perceptual loss leads to incorrect style (e.g. Fig. 15(b) 5th col) and the color spreads even given the feature mask since there is no perceptual feedback during training (e.g. Fig. 15(a) 5th col).\"}",
"{\"title\": \"(1 | 2) Response to AnonReviewer2\", \"comment\": \"We thank Reviewer 2 for the constructive review and detailed comments. We apologize if the experimental section text due to space constraints feels short and leads to misunderstandings. We are going to clarify everything below. Please see the blue fonts in the newly uploaded draft to check how our paper is changed, to be in accordance with the following responses.\\n\\n\\n1. Style information propagation.\\nAs already mentioned, the style information in our method is propagated by using the Adaptive Instance Normalization (AdaIN) technique [b]. This is a well-known technique in the style transfer field which has proven to be very successful for arbitrary style transfers, and has been adopted by many follow-up works. The idea behind AdaIN in our case is to align the mean and variance of the content feature channels coming from domain A (after applying the feature mask) with those of the style feature channels coming from domain B. According to [c], these feature statistics have been found to carry the style information in an image. Noise can also be used to generate images with diverse style [d], as proposed by Reviewer 2. However, our goal in this work is to allow users more explicit control over the translation process, which is something that noise-based approaches do not allow. In particular, noise inputs do not easily translate to intuitive style guidance, in contrast to our exemplar-guided approach, where we propose to use a sub-network (F_B in Fig. 2) to explicitly extract the feature statistics from the exemplar image itself and - through AdaIN - adapt accordingly the source image. As a result, the user can match any desired style from the target domain just by picking the corresponding exemplar.\\n\\n2. Latent space.\\nWith respect to the previous question, Reviewer 2 asks for a visualization of how the latent space changes given different exemplar images. 
Although generally a valid request, it does not apply to our case. Let us explain why. Due to the specifics of our architecture, the latent space is only associated with the content representation of an image, not the style of the exemplar. The latter is only added after the encoder part, i.e. the latent space, and the feature mask sub-network through the AdaIN sub-network (see Fig. 2). As such, changing exemplars and visualizing the latent space would give the same point in the latent space if the source image (that provides the content) is the same. Alternatively, changing source images and visualizing the latent space is not informative about the translation procedure as the style is only added after the feature mask and AdaIN sub-networks. In summary, the latent space is only related to the source image since the style information is only combined in the decoder part through feature mask and AdaIN techniques. Instead, to mimic what Reviewer 2 asked for, we added the male->female face translation results matrix in Fig. 9. We observe that the output image's content is consistent with the source image and its style is consistent with the target image. Such observation can reflect how the latent space changes given different source images as well as exemplars.\"}",
"{\"title\": \"(2 | 2) Response to AnonReviewer2\", \"comment\": \"3. t-SNE embedding visualization and standard deviation of SSIM scores.\\nThe t-SNE embeddings are calculated from the translated images. We first use PCA to reduce the dimension to 50. Then, we use the t-SNE implementation in the sklearn package to compute the t-SNE embeddings. We have tried different t-SNE parameters and choose the good ones for visualization. That is to say, the t-SNE figures are generated using standard techniques w.r.t related works. Previously, we set 'init=pca' for discriminative dense point clouds. As suggested, we now set 'init=random' for a better visualization of the distributions. This results in nicer visualizations, although the difference between some of the methods becomes less outspoken. Reviewer 2 asked to modify some t-SNE parameters to avoid the \\\"projected on wall\\\" visualization. We updated the figures for better visualizations, but the tendency is the same. A larger size version of the single-digit translation t-SNE embedding visualization is shown in Fig. 11. We also changed the markers from '1','2' to '.','x' in the t-SNE figures for better visualization without color. The standard deviation of SSIM scores are added to Tab. 2 and Tab. 5 as requested.\\n\\n4. Segmentation improvement.\\nThe segmentation improvement is the natural outcome of trying to preserve semantic consistency during the translation process. Our translated GTA->BDD images are semantically consistent w.r.t. the original GTA images, i.e. the sky is still the sky and trees are still trees, which allows us to obtain improved segmentation performance as e.g. the semantic boundaries are better delineated in the generated images compared to techniques that do not account for semantic consistency. Another advantage is that our model can handle large within domain variations, such as day and night. 
As a result, when using the translated images with paired GTA semantic segmentation labels to train a segmentation model, the domain difference will be reduced, and as such the segmentation results will also improve.\\n\\n\\n[b] Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.\\n\\n[c] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.\\n\\n[d] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In NIPS, 2017.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank Reviewer 1 for the constructive review and detailed comments. Below, we respond to each comment in detail. Please see the blue fonts in the newly uploaded draft to check how the paper has changed, to be in accordance with the following responses.\\n\\n\\n1. Multi-digit translation.\\nWe added more explanation for the multi-digit translation experiment to the paper.\\n\\n(1) The MNIST-Multiple dataset is a controlled experiment, similar to MNIST-CD/CB [a], designed to verify our method's ability to disentangle content and style representations during I2I translation. In particular, the content is represented by the different digits (0-9) and background classes, whereas style is represented by the shape and color variation added in digits as well as the (black or white) background colors. Our goal is to encourage the network to understand the semantic information, i.e. the different digits and backgrounds, when translating an image from domain A to domain B. That is, a successfully translated image should have the content of domain A, i.e. the digit class, and the style of domain B, i.e. the digit and background colors respectively. \\n\\n(2) We can achieve this by using the proposed EGSC-IT model. First, the feature mask sub-network (F_A in Fig. 2) extracts the content information, i.e. the digit class, and provides it to the decoder. Second, the employed perceptual loss encourages the decoder to learn how to use this content information to do the translation while retaining semantic consistency. Finally, the AdaIN sub-network (F_B in Fig. 2) translates the style information of each semantic class, i.e. the digit and background colors respectively, by using AdaIN, which has proven very successful in arbitrary style transfer tasks (see later responses for a further comment on this). 
We refer to the theoretical part of the paper where we provide a very detailed description of the translation procedure.\\n\\n(3) We agree with Reviewer 1 that this experiment is quite challenging, but we observe that our model can still obtain good results without the need for ground-truth semantic labels or paired data. For example, in Figure 6 top row the digits 1,2,3,4,6 can be successfully translated given the criteria described above. In street view translation the scenes in an image are generally more complex - i.e. the within class variation in the BDD dataset is much larger than that of the synthetic MNIST-Multiple dataset. We designed this experiment as a controlled, yet challenging, task to evaluate the different image-to-image translation methods. Given the lack of ground-truth translated images to compare to in our setting - due to the unsupervised nature of our problem - we believe this simplified experiment offers a more direct and intuitive way to evaluate the expected translations compared to e.g. street-view translation.\\n\\n(4) The setting of different colors that overlap at random locations, as proposed by Reviewer 1, seems very interesting in theory. However, we believe that in practice it would be very difficult for any unsupervised translation method as there is too much ambiguity in the overlapped locations for a network to decide where to draw style cues from.\\n\\n2. Full name of mIoU.\\nWe added the full name 'mean Intersection over Union' (mIoU). Thank you for the useful note.\\n\\n3. Limitations.\\nSince our method does not use any semantic segmentation labels nor paired data, there are still some artifacts in the generated images for some hard cases. This seems natural given the difficulty of the task. For example: (a) in street view translation, day->night and night->day (e.g. Fig. 7 bottom row) are more challenging than day->day (e.g. Fig. 7 top row). 
As a result, it is sometimes hard for our model to understand the semantics in such cases. Even state-of-the-art fully-supervised semantic segmentation networks suffer in low light or adverse weather conditions. (b) in face gender translation, our model can successfully translate the gender attribute while keeping the semantics, e.g. skin, hair and background color, consistent with the exemplar image. However, since we do not provide any semantic segmentation labels this results in some artifacts. This discussion about limitations will be added in the paper. In the future it would be interesting to extend our method to the semi-supervised setting in order to benefit from the presence of some fully-labeled data.\\n\\n4. Typos.\\nWe fixed them. Thank you for finding them.\\n\\n\\n[a] A. Gonzalez-Garcia, J. van de Weijer, Y. Bengio. Image-to-image translation for cross-domain disentanglement. In NIPS, 2018.\"}",
"{\"title\": \"Summary of the first revision\", \"comment\": \"We thank all reviewers for their constructive reviews and detailed comments. We are committed to incorporate all the proposals to further improve the original draft. Below, we respond to each comment in detail. In the first revision, we provide additional experimental results in Appendix A and more explanations in both the main paper and Appendix A. Please see the blue fonts in the newly uploaded revision to check how the paper has changed according to your indications. Note that, any future suggestions or requests from the reviewers will also be incorporated similarly to the blue font changes.\\n\\nWe strongly believe that we made an important and novel step towards solving the unsupervised image-to-image translation problem.\\n\\nIf you have any further questions or suggestions, please do not hesitate to let us know.\\n\\nMany thanks again for all your sincere contributions on ICLR 2019,\\nThe authors.\"}",
"{\"title\": \"Interesting and well-written paper, but needs some clarification on the experiments\", \"review\": [\"The paper is well organized with a clear idea of the proposed method and good related work descriptions. Overall, the descriptions are clear and easy to follow, but the experimental results need clarifying.\", \"Regarding the multi-digit translation task, it is not straightforward to this reviewer how the proposed method could match the digits (semantic) with different colors (style) in different locations. The description in the paper is not enough to explain the results in Fig. 6. To this reviewer, this task is more complex than the street view translation one. In the same line, it is curious what the results would be if digits with different colors overlapped at random locations, rather than in the grid-like arrangement.\", \"For the potential readers who are not knowledgeable in semantic segmentation, please give the full name of mIoU for reference.\", \"For further research on this topic, it would be good to depict the limitations of the proposed method. For example, the translated images in the CelebA dataset are not photorealistic (Fig. 8) and there are odd red lights in the middle of the results in GTA5<-BDD (Fig. 12).\", \"typos: Fig. 2-caption: m_{a}->m_{A}\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Justifying style transfer via conditioning needs more analysis.\", \"review\": \"The introduction is written to perfection. The paper discusses a core failing of and need for I2I translation models. The one-to-one mapping assumption does not apply to most tasks. While the approach seems novel, the analysis of the results is insufficient to convince me that the method is really working. This should be a workshop paper.\\n\\nFor the motivation of the approach, I am not convinced how the conditioned style is being used. It would be nice to see some analysis of how the latent space changes given different input images. Why would style information be propagated through the network? Why wouldn't noise work just as well? Although an ablation study is performed, there is no standard deviation reported, so it is unclear if this number is fair. \\n\\nIn Figure 5 the t-sne doesn't look correct. The points all seem to be projected on walls, which could indicate some sort of overflow error. The text devotes only 3 lines to discussing this figure. It is not mentioned what part of the model the t-sne is computed from. To me, this experiment that studies the internal representation is critical to convincing a reader to use this method. \\n\\nThe segmentation results sound good. Where is the improvement coming from? The experimental section is cut short. The experiment section is really squeezed into the last two pages while the other sections are overly descriptive and could be reduced.\\n\\nThe figures should be changed to be visible without color (put a texture on each block).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
"{\"title\": \"interesting submission with good intuition and evaluation\", \"review\": \"I enjoyed reading this manuscript. The paper is based on a simple idea used by others as well (i.e., the image has two components, one that encode content which is shared across domains and another one characterizing the domain specific style). The other important idea is the use of feature masks that steer the translation process without requiring semantic labels. This is similar to attention models used by others but I think it is novel when applying to this specific application domain. I was a bit disappointed by the evaluation part. The authors decided to perform ablation and to show the importance of each component using only the MNIST-Single dataset. While this is good as a toy example I would have expected to see such analysis on a more complex example, e.g., street-view translation. This is also surprising considering that it is not even present in the supplementary material. Overall, this is a solid submission with interesting ideas and good implementation.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJg6e2CcK7 | Clean-Label Backdoor Attacks | [
"Alexander Turner",
"Dimitris Tsipras",
"Aleksander Madry"
] | Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks. Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be used during inference to fully control the model’s behavior. While the attack is very powerful, it crucially relies on the adversary being able to introduce arbitrary, often clearly mislabeled, inputs to the training set and can thus be detected even by fairly rudimentary data filtering. In this paper, we introduce a new approach to executing backdoor attacks, utilizing adversarial examples and GAN-generated data. The key feature is that the resulting poisoned inputs appear to be consistent with their label and thus seem benign even upon human inspection. | [
"data poisoning",
"backdoor attacks",
"clean labels",
"adversarial examples",
"generative adversarial networks"
] | https://openreview.net/pdf?id=HJg6e2CcK7 | https://openreview.net/forum?id=HJg6e2CcK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJg1wdCIxN",
"B1eHKRmEe4",
"BJedSbMEgE",
"BkeQgHN0RQ",
"Hke0YdA3CX",
"HJgnjDA2Rm",
"B1gjWlMoA7",
"HkgVGbfqCm",
"rkeRA_KuAX",
"HJlyuOh1RX",
"H1xefd2J0m",
"r1xZxO3J0X",
"Bke-NGHlTQ",
"rkgXH4Xwn7",
"SJgU__OUn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545164886983,
1544990333459,
1544982848390,
1543550187299,
1543460998164,
1543460772242,
1543344131280,
1543278860067,
1543178454483,
1542600807436,
1542600711985,
1542600681079,
1541587497179,
1540990011435,
1540946029771
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1118/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer1"
],
[
"~Mohammad_Mahmoody1"
],
[
"ICLR.cc/2019/Conference/Paper1118/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1118/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer1"
],
[
"~Mohammad_Mahmoody1"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1118/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1118/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1118/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1118/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The present work proposes to improve backdoor poisoning attacks by only using \\\"clean-label\\\" images (images whose label would be judged correct by a human), with the motivation that this would make them harder to detect. It considers two approaches to this, one based on GANs and one based on adversarial examples, and shows that the latter works better (and is in general quite effective). It also identifies an interesting phenomenon---that simply using existing back-door attacks with clean labels is substantially less effective than with incorrect labels, because the network does not need to modify itself to accommodate these additional correctly-labeled examples.\\n\\nThe strengths of this paper are that it has a detailed empirical evaluation with multiple interesting insights (described above). It also considers efficacy against some basic defense measures based on random pre-processing.\\n\\nA weakness of the paper is that the justification for clean-label attacks is somewhat heuristic, based on the claim that dirty-label attacks can be recognized by hand. There is additional justification that dirty labels tend to be correlated with low confidence, but this correlation (as shown in Figure 2) is actually quite weak. On the other hand, natural defense strategies against the adversarial examples based attack (such as detecting and removing points with large loss at intermediate stages of training) are not considered. This might be fine, as we often assume that the attacker can react to the defender, but it is unclear why we should reject dirty-label attacks on the basis that they can be recognized by one detection mechanism but not give the defender the benefit of other simple detection mechanisms for clean-label attacks.\\n\\nA separate concern was brought up that the attack is too similar to that of Guo et al., and that the method was not run on large-scale datasets. The Guo et al. 
paper does somewhat diminish the novelty of the present work, but not in a way that I consider problematic; there are definitely new results in this paper, especially the interesting empirical finding that the Guo et al. attack crucially relies on dirty labels. I do not agree with the criticism about large-scale datasets; in general, not all authors have the resources to test on ImageNet, and it is not clear why this should be required unless there is a specific hypothesis that running on ImageNet would test. It is true that the GAN-based method might work more poorly on ImageNet than on CIFAR, but the adversarial attack method (which is in any case the stronger method) seems unlikely to run into scaling issues.\\n\\nOverall, this paper is right on the borderline of acceptance. There are interesting results, and none of the weaknesses are critical. It was unfortunately the case that there wasn't room in the program this year, so the paper was ultimately rejected. However, I think this could be a strong piece of work (and a clear accept) with some additional development. Here are some ideas that might help:\\n\\n(1) Further investigate the phenomenon that adding data points that are too easy to fit do not succeed in data poisoning. This is a fairly interesting point but is not emphasized in the paper.\\n(2) Investigate natural defense mechanisms in the clean-label setting (such as filtering by loss or other such strategies). I do not think it is crucial that the clean-label attack bypasses every simple defense, but considering such defenses can provide more insight into how the attack works--e.g., does it in fact lead to substantially higher loss during training? And if so, at what stage does this occur? If not, how does it succeed in altering the model without inducing high loss?\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"interesting idea, good execution, but just below threshold\"}",
"{\"title\": \"Check the other reviews in addition to the authors' responses to your comments?\", \"comment\": \"Dear AnonReviewer2,\\n\\nI am writing to bring your attention to my comments below, per the Area Chair's request. In my opinion, this paper's overarching idea is very interesting and yet the implementation could be improved. In particular, I had three major concerns in my initial review. \\n\\n1. This paper does not propose any new attack algorithms. Instead, it investigates an existing adversarial attack method and the GAN based interpolation for the backdoor attack. \\n2. As experiments are conducted on small-scale datasets, it is unclear how effective the improved backdoor attack is. \\n3. Moreover, one of the main disadvantages of the proposed attack method is that simple data augmentation techniques, especially random cropping, can successfully defend against the attack. \\n\\nThe first two concerns were reinforced by the authors' responses (at least from my point of view). The answer to the third concern is not convincing. My background is largely from computer vision, and I can think of many data augmentations to overcome the four-corner backdoor patterns studied in this paper. \\n\\n\\nBest regards,\\nAnonReviewer1\"}",
"{\"title\": \"The responses are unnecessarily defensive\", \"comment\": \"It is unfortunate that the responses are unnecessarily defensive. Nonetheless, it seems that the authors have understood the basis of my concerns according to their responses. Hence, I will refrain from any further clarification.\"}",
"{\"comment\": \"Indeed, backdoor attacks (studied here) are different from targeted (and other types of) poisoning attacks (studied in papers I listed), though they both have a poisoning phase that might or might not use (only) correct/clean labels.\\n\\nMy comment was only meant to point out the earlier works that also use (only) clean labels in the poisoning phase of the attack and hope that you find them useful!\", \"title\": \"Yes! previous comment was about the type of labels used\"}",
"{\"title\": \"Thank you for the suggestions\", \"comment\": \"We thank Mohammad for bringing this line of work to our attention. This is a very interesting theoretical study of the phenomenon that we will make sure to cite and discuss in future versions of our manuscript.\\n\\nFor the sake of the discussion, we would like to note though that the attacks explored here are _targeted_ poisoning attacks and not _backdoor_ attacks such as those considered in our paper.\"}",
"{\"title\": \"Author response\", \"comment\": \"We are somewhat puzzled by the reviewer's criticism that our results are \\\"proof-of-concept only\\\".\\n\\nAfter all, our goal is to explore the space of possible attacks and understand their power. To this end, we have proposed a new approach to backdoor attacks (clean labels with hard samples) that addresses shortcomings of previous methods (implausible labels). We have evaluated our approach on a concrete, realistic learning problem with concrete attacks and showed that it can be effective. \\n\\nIn the light of this goal, scaling the approach to larger datasets and making our attack robust against all possible defenses falls beyond the scope. One could argue that, from such a perspective, most research is indeed \\\"proof-of-concept only\\\".\\n\\nIn regard to the reviewer's concerns that this approach would not scale to a larger scale dataset, we are not sure we understand the basis of that concern. \\n\\nYes, it is possible that, for large scale datasets, the latent space of current GANs is not as well behaved. However, this is not a limitation of our method per se. After all, as GANs continue to become better, our approach will also improve with them, and research progress on GANs has been impressive over the last few years. As an example, we would like to point the reviewer to Figure 8 of a concurrent ICLR'19 submission (https://openreview.net/pdf?id=B1xsqj09Fm). That figure presents visually striking cross-class interpolations, which is all our attack requires.\\n\\nAt the same time, our second approach, the perturbation-based attack, faces no such difficulties and can be applied as-is to large scale datasets (see, e.g., high-resolution images from Tsipras et al., 2018, https://arxiv.org/abs/1805.12152). \\n\\nFinally, the reviewer is concerned about the robustness of our approach to data augmentation. We want to emphasize that the networks of Appendix B were trained with random flips _and_ random crops. 
Moreover, we forgot to mention that those models also perform per image standardization (mean subtraction and standard deviation scaling). The results of Appendix B follow _exactly_ the input preprocessing pipeline used by state-of-the-art models without any modification on our part. (Random rotations are not part of any CIFAR-10 or ImageNet pipeline that we are aware of). Nevertheless, our attack remains successful in this regime. (More broadly, as data augmentation techniques tend to be standardized and well known, once the attacker knows what they are, they would be able to ensure the triggers 'survive' them.)\"}",
"{\"title\": \"Appreciate the efforts on improving the manuscript; Concerns remain\", \"comment\": \"Thank the authors for improving the manuscript and clarifying some details.\\n\\nUnfortunately, I still think the idea is interesting and yet the implementation of the idea is poor. This assessment is acknowledged by the authors by that the two attacks investigated are meant as proof-of-concept only. It is hard to extend the proof-of-concept methods to larger-scale datasets because the latent space of GAN does not naturally possess any disentangling properties, likely giving rise to unexpected output of the interpolation over the latent space. Besides, it is also hard to make the proof-of-concept methods robust against data augmentation --- Appendix B is unfortunately not convincing to me. Still, random cropping, mean subtraction, and rotation among other randomization techniques could increase a neural network's robustness to the changes at the image corners.\"}",
"{\"comment\": \"This interesting work considers backdoor attacks that involve poisoning using correctly labeled examples. This is indeed a very interesting direction. As the focus of the paper is to show the power of correctly-labeled poisoned data in attacks on classifiers, naturally the paper cites a recent work on targeted poisoning attacks using correctly labeled examples.\\n\\nHere we would like to mention some earlier works (some from 2017) showing the power of (targeted) poisoning attacks using *correctly-labeled* examples in the attack. We hope these references will be found useful.\\n\\nAll of the previous attacks listed below *provably* apply to *any* classification task and *any* classifier (so obviously they apply to neural networks as well). \\n \\n1. In a TCC 2017 paper titled \\u201cBlockwise p-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners\\u201d (online since Sept 2017)\\nhttps://arxiv.org/abs/1809.03063\\nit was shown that the adversary can do the same job by substituting way fewer examples (namely sqrt(m) ones, where m is the sample complexity) with other correctly labeled examples. \\n\\n4. The first 2 attacks above are polynomial-time, while the 3rd one is existential. In a more recent work https://arxiv.org/abs/1810.01407 (online since Oct 2)\\nit is shown how to get the best of the worlds above; namely in order to increase the targeted error from an arbitrary small constant to an arbitrary large constant, a polynomial-time adversary only needs to substitute O(sqrt (m)) of the examples (where m is the sample complexity) with other still correctly labeled ones. 
\\n\\nThe latter two papers (3,4) above, refer to attacks using correct labels as \\u201cplausible\\u201d attacks (which seems to be the term also used in this paper).\\n\\nAll the attacks above are stated in the targeted poisoning scenario with the goal of increasing the classification *error* of a target instance (e.g., the true label is 0, while the adversary wants to get a label other than 0). However the same proofs (as is) apply even if the attacker wants to make the target instance x get a specific label \\\\ell with a probability close to 1 assuming that originally (without any attacks) the probability of x being labeled \\\\ell (over the whole training and testing processes) is at least an arbitrary small constant.\", \"title\": \"Earlier attacks on classifiers using clean/correct labels\"}",
"{\"title\": \"Thank you for the clarifications!\", \"comment\": \"Those answer all the questions I had.\"}",
"{\"title\": \"Author response\", \"comment\": [\"We thank the reviewer for the thoughtful comments. We will address comments raised below.\", \"On the novelty of our attacks. We believe that the main conceptual contribution of our work is the formulation of the clean-label attack problem and showing how these attacks can be made successful by modifying samples to be \\\"harder\\\". The two attacks investigated are meant as proof-of-concept that this approach works with existing methods. We agree with the reviewer that designing specialized attacks for this task is a valuable research direction that could lead to even more successful attacks.\", \"On the scale of our datasets. We do not have the resources to run an equally comprehensive study on ImageNet-scale datasets. Hence, we decided to perform more experiments on a small dataset rather than fewer experiments on a larger dataset. Note that the plots in Figure 3 and 4 involved training 50 models each. Does the reviewer have concrete concerns about the applicability of our approach to large-scale datasets?\", \"On the resistance of our approach to data augmentation. We have demonstrated (Appendix B) that simply modifying the pattern to appear in all 4 corners is already sufficient to make the attack significantly more resistant to data augmentation. Thus, we don't consider data augmentation to be a fundamental obstacle to our attack. We believe that future work investigating different backdoor triggers can further increase the resistance of our attack to data augmentation.\", \"We thank the reviewer for concrete suggestions on improving our manuscript. We incorporated the following changes:\", \"We replaced Figure 1 with more illustrative examples.\", \"We modified the second paragraph of Section 2 to better explain the original Gu et al. 
(2017) attack.\", \"We changed the wording of \\u201creduced amplitude backdoor trigger\\u201d to \\\"less conspicuous backdoor trigger\\\" which should be clear without any further context.\", \"The goal of Sections 4.3 - 4.5 is to provide a reader with an overview of our results before going into the experimental details. We modified these Sections to be more self-contained.\", \"On concerns about Section 3. We do not argue that manual inspection will find all the poisoned examples (or enough to render the attack ineffective). We rather argue that if manual inspection of 300 images reveals 20 *clearly mislabelled* images, then the attack will very likely be detected, leading to additional investigation and filtering. This argument illustrates a broader point -- if poisoned inputs appear suspicious upon human inspection, the attack is not truly insidious and can always be detected by more advanced filtering. This is why we believe our proposed attack is powerful: even if the samples are identified as potential outliers, they will not appear suspicious upon human inspection. We modified the text to better explain our argument.\", \"We chose the parameter \\\\tau by manually inspecting different values of \\\\tau on 100 images.\"]}",
"{\"title\": \"Author response\", \"comment\": [\"We thank the reviewer for the kind comments and helpful suggestions. We will address points raised below:\", \"The attack success rate (ASR) is computed as the fraction of inputs that are _not_ labeled with the target class but are classified as the target class after the backdoor pattern is applied (Beginning of Section 5). We have edited the manuscript to make this definition appear more prominently earlier in the paper and edited the relevant captions.\", \"We use adversarially trained models trained with the publicly available code from https://github.com/MadryLab/cifar10_challenge (we train the non-wide variant both with L2 and Linf). The adversarial examples are generated once using this pre-trained network. Since our threat model only allows us to add examples to the training set, we cannot compute these adversarial perturbations on the fly. We have edited the manuscript to incorporate this discussion.\", \"We were also surprised initially but we believe that there is a fairly simple explanation (outlined in Section 4.4). On noisy images, the classifier learns to predict by relying on the backdoor *in the absence of strong image signal* (since the salient image features are fairly corrupted). However, when evaluated on the test set with a backdoor applied, the image itself will have a strong signal (since it will not be noisy) that can overcome the backdoor pattern. Therefore, it is necessary for the classifier to learn to predict the backdoor even when the salient image characteristics are present. As a result, random noise is not very effective at injecting backdoors. We have updated Section 4.4 to better reflect this argument.\", \"Since we do not have access to the training procedure, the pattern is applied before any data augmentation. 
This is the reason why this setting is challenging -- data augmentation might obscure the pattern.\", \"We have updated the manuscript to incorporate the other comments.\"]}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the kind comments. We have updated the manuscript to fix typos.\"}",
"{\"title\": \"Clean-Label Backdoor Attacks\", \"review\": \"This work explores backdoor attacks -- attacks that alter a fraction of training examples which can alter inference -- while ensuring that the poisoned inputs are consistent with their labels. These attacks are attained through either a GAN mechanism or using adversarial perturbations.\\n\\nThe ideas proposed (i.e. GAN mechanism and adversarial mechanism) are interesting additions to this literature. I found the observation of greater effectiveness of the adversarial mechanism particularly interesting.\\n\\nThe paper also does a good job of investigating the effectiveness of the attack under data augmentation and proposes a limited solution.\", \"main_criticism\": \"There are a number of typos that need fixing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"I think this paper adds an original and valuable angle to the existing literature on data poisoning attacks\", \"review\": [\"Overall I am positive about this manuscript:\", \"I find the motivation is clear and valid. As far as I know, this is a novel contribution (my confidence is not very high on that one though - I might be unaware of related work).\", \"The paper is well-written and organized.\", \"Experiments are conducted systematically, although certain parts could be better explained (see my questions below).\", \"I think this paper adds an original and valuable angle to the existing literature on data poisoning attacks. I don't see any major flaws, therefore I think it should be accepted.\"], \"a_few_points_which_might_need_clarification\": [\"How exactly is \\\"attack success\\\" being measured?\", \"Which model is used to generate the adversarial samples? Is this an (adversarially) pretrained model? (If that's the case, then what is the model architecture?) Or are adversarial samples generated on the fly using the currently trained/poisoned model?\", \"At the end of Section 4.4: if the images with larger noise rely more on the backdoor, why does this have an adverse effect? Shouldn't it increase the effectiveness of the attack?\", \"Was the data augmentation (flips, crops etc) performed before or after the poisoning pattern was applied?\"], \"minor_comments\": [\"definition of the encoding at the bottom of page 4: this should be argmax instead of max\", \"typo in Sec. 5.1: \\\"to evaluate the uat a wide variety\\\"\", \"repetitive sentence in Sec. 5.2: \\\"we find that images generated with $\\\\tau \\\\leq 0.2$ remain [fairly] plausible\\\"\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A nice idea which needs further in-depth exploitation\", \"review\": \"This paper investigates an interesting problem, backdoor attack against neural networks. The main idea is to add a watermark pattern to the corners of the training images, so that the classifier is guided to leverage the watermark as a discriminative cue as opposed to the real content of the image. At the test stage, one can hence manipulate the classifier\\u2019s predictions by adding the watermark to the test images.\\n\\nThis paper is heavily built upon Gu et al. (2017)\\u2019s work. It shows that Gu et al. (2017)\\u2019s method can be easily defended by a data sanitization algorithm. To improve Gu et al.\\u2019s work, the authors propose to add watermark patterns to the adversarial examples or examples interpolated in GAN\\u2019s latent space. The intuition is that these examples are adversarial and hard to learn, forcing the classifier to focus on the watermark pattern instead. \\n\\nIt is an interesting idea and an intuitive improvement over (Gu et al. 2017). However, the implementation of the idea could be improved. This paper does not propose any new attack algorithms. Instead, it investigates an existing adversarial attack method and the GAN based interpolation for the purpose of backdoor attack. As experiments are conducted on small-scale datasets, it is unclear how effective the improved backdoor attack is. Moreover, one of the main disadvantages of the proposed attack method is that simple data augmentation techniques, especially random cropping, can successfully defend against the attack. \\n\\nThe quality of the paper writing could be improved. I had to read the paper more than twice and check the references now and then in order to understand some claims of the paper. The paper\\u2019s lack of clarity was actually also raised by probably one of the coauthors of the paper; see the comment \\u201cDimitris: clarify this point\\u201d on Page 11. 
Please find some concrete suggestions below.\\n- Figure 1 is visually not appealing at all. Perhaps find better illustrative examples. \\n- It is worth considering adding a separate section/paragraph to describe the details of Gu et al. (2017)\\u2019s method, given that this paper is heavily built upon Gu et al. (2017)\\u2019s work.\\n- It was unclear what the \\u201creduced amplitude backdoor trigger\\u201d means until Section 4. If a context-dependent term has to be used in the introduction, explain it or refer the readers to the right place of the paper. \\n- Merge Sections 4.3\\u20144.5 with the experiment section (Section 5). The results of Section 4.3\\u20134.5 are out of context without any explanation about the experiment setups. \\n\\nI have some concerns about Section 3, which is the main motivation of this work. As the authors noted in Appendix A, Gu et al.\\u2019s method works well with as few as 75 poisoned examples, so the proposed sanitization algorithm would not be able to fail Gu et al.\\u2019s method by only identifying 20 out of 100 poisoned examples. \\n\\nHow to control the parameter $\\\\tau$ so that the perturbation appears plausible to humans?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJLhxnRqFQ | Adversarially Learned Mixture Model | [
"Andrew Jesson",
"Cécile Low-Kam",
"Tanya Nair",
"Florian Soudan",
"Florent Chandelier",
"Nicolas Chapados"
] | The Adversarially Learned Mixture Model (AMM) is a generative model for unsupervised or semi-supervised data clustering. The AMM is the first adversarially optimized method to model the conditional dependence between inferred continuous and categorical latent variables. Experiments on the MNIST and SVHN datasets show that the AMM allows for semantic separation of complex data when little or no labeled data is available. The AMM achieves unsupervised clustering error rates of 3.32% and 20.4% on the MNIST and SVHN datasets, respectively. A semi-supervised extension of the AMM achieves a classification error rate of 5.60% on the SVHN dataset. | [
"Unsupervised",
"Semi-supervised",
"Generative",
"Adversarial",
"Clustering"
] | https://openreview.net/pdf?id=SJLhxnRqFQ | https://openreview.net/forum?id=SJLhxnRqFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJg2ACxfxN",
"H1lPUJ5iyE",
"r1gBO2w_3m",
"rJgFb-iMhQ",
"S1lt0AiWnQ"
],
"note_type": [
"meta_review",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544847060215,
1544425295408,
1541074028791,
1540694273437,
1540632272897
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1117/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1117/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1117/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1117/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a method for unsupervised/semi-supervised clustering, combining adversarial learning and the Mixture of Gaussians model. The authors follow the methodology of ALI, extending the Q and P models with discrete variables, in such a way that the latent space in the P model comprises a mixture-of-Gaussians model.\\n\\nThe problems of generative modeling and semi-supervised learning are interesting topics for the ICLR community.\\n\\nThe reviewers think that the novelty of the method is unclear. The technique appears to be a mix of various pre-existing techniques, combined with a novel choice of model. The experimental results are somewhat promising, and it is encouraging to see that good generative model results are consistent with improved semi-supervised classification results. The paper seems to rely heavily on empirical results, but they are difficult to verify without published source code. The datasets chosen for experimental validation are also quite limited, making it difficult to assess the strengths of the proposed method.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}",
"{\"comment\": \"This is an interesting paper. We tried to implement it but cannot achieve the same performance as in the paper. The margin is large. Could the authors provide some implementation details or share the code?\", \"title\": \"Implementation details and code\"}",
"{\"title\": \"ADVERSARIALLY LEARNED MIXTURE MODEL\", \"review\": \"The paper uses Generative Adversarial Networks (GAN) for unsupervised and semi-supervised clustering. Neural network based generators are used for sampling using a mixture model. The parameters of the generators are optimised during training against a discriminator that tries to distinguish between generated distributions. Experimental results on MNIST and SVHN datasets are given to motivate their models.\", \"i_am_far_from_being_an_expert_on_gans_but_just_from_a_clustering_viewpoint_i_can_make_the_following_comments_about_the_paper\": [\"Comparison with other clustering techniques is not present. How does error and computational efficiency compare with other techniques?\", \"There doesn\\u2019t seem to be any deep theoretical insight. This paper is more about using GANs in a particular way (different than the previously attempted ways) to study and demonstrate results. Once the model is decided so are the algorithms.\", \"I am not sure what is the standard practice of describing the algorithms in the context of GANs. I found parsing Appendix A and B very difficult.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"the writing should be improved.\", \"review\": \"The paper presents a method for un-/semi- supervised clustering which combines adversarial learning and Mixture of Gaussian.\", \"cons\": \"It is interesting to incorporate the generative model into GAN.\", \"probs\": \"1.\\tThe author claims the method is the first one to generative model inferring both continuous and categorical latent variables. I think that such a conclusion is overclaimed, there are a lot of related works, e.g., Variational deep embedding: An unsupervised generative approach to Clustering, IJCAI17; Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks, ICML2017. Multi-Modal Generative Adversarial Networks for Diverse Datasets, ICLR19 submission. In fact, these methods are also very competitive approaches to clustering. \\n2.\\tIs adversarial learning do helpful for clustering with the generative model? Some theoretical should be given, at least, some specified experiments should be designed. \\n3.\\tThe novelty should be further summarized by highlighting the difference with most related works including but not limited the aforementioned ones. The current manuscript makes the work seem like a straightforward combination of many existing approaches. \\n4.\\tIn fact, the paper is hard to follow. I would recommend improving the logic/clarity.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A GAN variant for joint discrete-continous latent model\", \"review\": \"The paper presents a generative model that can be used for unsupervised and semi-supervised data clustering. unlike most of previous method the latent variable is composed of both continuous and discrete variables. Unlike previous methods like ALI the conditional probability p(y|x) of the labels given the object is represented by a neural network and not simply drown from the data. The authors show a clustering error rate on the MNIST data that is better than previously proposed methods.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
SyVhg20cK7 | Inducing Cooperation via Learning to reshape rewards in semi-cooperative multi-agent reinforcement learning | [
"David Earl Hostallero",
"Daewoo Kim",
"Kyunghwan Son",
"Yung Yi"
] | We propose a deep reinforcement learning algorithm for semi-cooperative multi-agent tasks, where agents are equipped with their separate reward functions, yet with willingness to cooperate. Under these semi-cooperative scenarios, popular methods of centralized training with decentralized execution for inducing cooperation and removing the non-stationarity problem do not work well due to the lack of a common shared reward as well as the lack of scalability in centralized training. Our algorithm, called Peer-Evaluation based Dual DQN (PED-DQN), proposes to give peer evaluation signals to observed agents, which quantify how they feel about a certain transition. This exchange of peer evaluation over time turns out to lead agents to gradually reshape their reward functions so that their action choices from the myopic best-response tend to result in the good joint action with high cooperation. This evaluation-based method also allows flexible and scalable training by not assuming knowledge of the number of other agents and their observation and action spaces. We provide the performance evaluation of PED-DQN for scenarios ranging from a simple two-person prisoner's dilemma to more complex semi-cooperative multi-agent tasks. In special cases where agents share a common reward function as in the centralized training methods, we show that inter-agent evaluation leads to better performance
| [
"multi-agent reinforcement learning",
"deep reinforcement learning",
"multi-agent systems"
] | https://openreview.net/pdf?id=SyVhg20cK7 | https://openreview.net/forum?id=SyVhg20cK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xFZ3BlxV",
"SkgTcbF-07",
"BklUQlK-A7",
"BJgggeFW07",
"rkeR41FW0m",
"BkgEMMW937",
"SygVYCxF27",
"r1eC3Qh_hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544735745425,
1542717844921,
1542717470500,
1542717416178,
1542717237613,
1541177867896,
1541111419970,
1541092278384
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1116/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1116/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1116/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1116/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1116/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1116/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1116/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1116/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This work introduces a reward-shaping scheme for multi-agent settings based on the TD-error of other agents.\\n\\nOverall, reviewers were positive about the direction and the presentation but had a variety of concerns and questions and felt more experiments were necessary to validate the claims of flexibility and scalability, with results more comparable to the scale of the contemporary multi-agent literature. One note in particular: a feed-forward Q network is used in a partially observable environment, which the authors seemed to dismiss in their rebuttal. I agree with the reviewer that this is an important consideration when comparing to baselines which were developed with recurrent networks in mind.\\n\\nA revised manuscript addressed concerns with the presentation but did not introduce new results or plots, and reviewers were not convinced to alter their evaluation. There is agreement that this is an interesting paper, so I recommend that the authors conduct a more thorough empirical evaluation and submit to another venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Intriguing work, not yet ready for publication.\"}",
"{\"title\": \"Revisions\", \"comment\": \"Dear reviewers,\\n\\nThank you for the valuable insights. Significant updates have been made in the methods section to make our framework clearer. We have also added some illustrations in the Prisoner's dilemma game so as to see the convergence of the Q-tables. Individual responses have been commented on your own posts.\"}",
"{\"title\": \"Convergence, ablation tests, and sharing the neural network\", \"comment\": \"Prisoner\\u2019s Dilemma\\n\\nWe made some modifications to the experiments to see the convergence. We extended the steps to 500k but the epsilon goes down from 1.0 to 0.05 only in the first 100k steps. \\n\\nSince this is a repeated game, the update rule of Q-table is Q[u] = Q[u] + 0.9*maxQ[u]. Given the rewards, it should naturally converge to values around 30. We replaced the tables with a better illustration for the evolution of the Q-values.\\n\\nConvergence of the method\\n\\nSince the Mission-DQN is just a regular DQN, the convergence property of that part is the same, which has no theoretical guarantees. If M-DQN converges, then the TD will be consistent with some granularity. When that happens, the hat{r} will also be consistent. We can then treat the A-DQN as a regular DQN with the same convergence property. The experience replay of the M-DQN has to be large enough to accommodate different policies of the other agents. Otherwise, the M-DQN will not be stable (because it is trying to converge to the recent policies) and thus less likely to converge.\\n\\nIntuition for (2)\\n\\nDue to the page limit, we could not fit in an example. First, |Z_a| is there for instances where the magnitude of the received evaluation is smaller than the given evaluation. For example, when an agent receives no evaluation at all, then \\\\hat{r} should just be the base reward. On the other hand, if the agent a and other observable agents a\\u2019 disagree on the value of transition (i.e. sgn(Z_a) != sgn(z_a)), the agent\\u2019s adjustment would be less drastic. However, if they agree on the value of transition (i.e. sgn(Z_a) == sgn(z_a)), \\\\hat{r} could be too large giving the action-Q a large update, only to be decreased again on the next time the transition is observed. This is because mission-Q is also learning. 
In many cases, the TD error is lower the next time they observe the transition. The formulation \\\\hat{r}_a = r_a + Z_a will also work.\\n\\nAblation Testing\\n\\nYes, the trio capture was selected randomly. We expect to observe the same trend for the other cases. \\n\\nRegarding Figure 4(a), we believe you are referring to reward, instead of random. In reward, we give the agents reward as a peer evaluation. This means that every time agent a relocates, other agents that can observe will also receive -0.1. The agents learned to avoid other agents, to avoid sharing this additional -0.1 from the other agents. \\n\\nSharing the neural network\\nOne way of sharing the neural network is to concatenate the agent ID to the observation. However, when we experimented on this, the NN learned to ignore the agent IDs. Although the location information is a different information, this may still lead to the agents having the same policy.\"}",
"{\"title\": \"Equation (2)'s effect and reward assumptions\", \"comment\": \"Reward shaping in (2)\\n\\nThe sgn(Z_a) in (2) is there to keep the original sign of the aggregate peer evaluation Z_a which may have changed in the minimization term due to the absolute value operation. We can also clip the peer evaluation values, but it may be difficult to select the correct clipping parameters. The term that actually reduces the magnitude is the minimization term. \\n\\nThe minimization term in (2) takes tries not to overestimate the value of the transition. For example, if agent a thinks that the transition is \\u2018good\\u2019 (z_a > 0) and then it is also incentivized by other agents (Z_a > 0), then agent a only needs a little push because the value update is already going to that direction. On the other hand, if Z_a < 0 and z_a > 0, then it tries to prevent a sudden change in the direction of the update. Of course the change of direction in extreme cases (e.g. Z_a << z_a, Z_a >> z_a) may be inevitable, and we may need to clip the rewards. Without the minimization term, the algorithm still works but the fluctuation of value functions are more apparent. \\n\\nReward assumption and observation claim\\n\\nThe statement was under the assumption that we are using a neural network and that each agent has a fixed input index in each other\\u2019s NN for their shared message (e.g. action, state, signal). In this case, if agent 1 observed state o_t, and the same observation o_{t+k}, but with different agents involved, the two experiences are treated as different. This can be observed in MADDPG\\u2019s critic networks. We have modified the TD as Peer evaluation part to make this clear..\\n\\nYou are correct. Having knowledge of the actions of the other agents may be better. However, even if the reward is a function of both states and actions, the agents can still give a good peer evaluation since the evaluation is a function of the reward. 
A lower/higher than expected base reward will result in an agent giving a penalty/incentive even if they don\\u2019t have explicit knowledge of the actions.\\n\\nOther Things\\n\\nThank you for pointing out the parameter sharing approaches. And yes, we are also claiming that this approach works on heterogeneous agents.\\n\\nAs for the table 1, we added a more comprehensive illustration and discussion of the evolution of the mission-q and action-q.\\n\\nWe also updated figure 4(a) to include independent DQN (zero evaluation). In contrast to what you said, random does not perform so much better than zero evaluation. If you were asking why random is much better than sharing the reward to observable agents (\\u2018reward\\u2019 in the plot), it is because agents learned to avoid other agents. Since agents are given a reward of -0.1 for relocating, the observers also get -0.1. We added this in the discussion.\"}",
"{\"title\": \"Reward Shaping, Prisoner's Dilemma, and other details\", \"comment\": \"Reward shaping in (2) and (3)\\n\\nThe initial design of the reward shaping is similar to your suggested simple formulation \\\\hat{r}_a = r_a + Z_a. The reason we used a subset K_a in (3) is so that the \\u201cchange\\u201d in the reward can be associated with the agents. We think that receiving peer evaluation from unobservable agents is just noise, although this may not be the case for agents with different observation ranges. \\n\\nThe minimization term in (2) takes tries not to overestimate the value of the transition. For example, if agent a thinks that the transition is \\u2018good\\u2019 (z_a > 0) and then it is also incentivized by other agents (Z_a > 0), then agent a only needs a little push because the value update is already going to that direction. On the other hand, if Z_a < 0 and z_a > 0, then it tries to prevent a sudden change in the direction of the update. Of course the change of direction in extreme cases (e.g. Z_a << z_a, Z_a >> z_a) may be inevitable, and we may need to clip the rewards. Without the minimization term, the algorithm still works but the fluctuation of value functions are more apparent. \\n\\nAs of the comment about VDN, in a way, our method looks like the reverse process of VDN but starting with individual rewards instead of a global one. Thus, the values are already decomposed. However, we don\\u2019t have any proof that this reduces to VDN.\\n\\nPrisoner\\u2019s Dilemma\\n\\nAlthough the last action cannot be observed, the base rewards can imply the action of the opponent. Also, we designed the PD so that the agents do not condition their Q-values on more than 1 state, and thus easier to analyze.\\n\\nThe goal is to reshape the rewards so that the perceived reward of the agents become as cooperative as they can. The willingness to cooperate, \\\\beta, governs this. 
In the prisoner\\u2019s dilemma, we had to use a beta of 1.4 so that they could read the global optimum (3,3). \\n\\n\\nCentral-V, QMIX, I-DQN\\n\\nThank you for pointing out the error in centralized neural networks. We have updated the introduction appropriately. As for using feed-forward networks, we did not focus on the neural network structure. Similarly, QMIX\\u2019s main contribution is the value function factorization. Of course, RNN can be used as an alternative when necessary. For the experiments that involved I-DQN we only used recent buffer [1] for all the algorithms. Of crouse, the M-DQN in our framework used a replay buffer.\\n\\n[1]Joel Z Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent reinforcement learning in sequential social dilemmas. InProceedings of International Conference on Autonomous Agents and MultiAgent Systems, pp. 464\\u2013473, 2017.\"}",
"{\"title\": \"This paper addresses this challenge by introducing a reward-shaping mechanism and incorporating a second DQN technique which is responsible for evaluating other agents performance. No discussion about convergence.\", \"review\": \"This work is well-written, but the quality of some sections can be improved significantly as suggested in the comments. I have a few main concerns that I explain in detailed comments. Among those, the paper argues that the algorithms converge without discussing why. Also, the amount of overestimation of the Q-values are one of my big concerns and not intuitive for me in the game of Prisoner's Dilemma that needs to be justified. For these reasons, I am voting for a weak reject now and conditional on the authors' rebuttal, I might increase my score later.\\n\\n1) I have a series of questions about Prisoner's Dilemma example. I am curious to see what are the Q-values for t=100000 in PD. Is table 1h shows the converged values? What I am expecting to see is that the Q-values should converge to some values slightly larger than 3, but the values are ~ 30. It is important to quantify how much bias you add to the optimal solution by reward shaping, and why this difference in the Q-values are observed.\\n\\n2) One thing that is totally missing is the discussion of convergence of the proposed method. In section 3.4, you say that the Q-values converge, but it is not discussed why we expect convergence. The only place in the paper which I can make a conjecture about the convergence is in figure 4c which implicitly implies the convergence of the Mission DQN, but for the other one, I don't see such an observation. Is it possible to formalize the proposed method in the tabular case and discuss whether the Q-values should converge or not? Also, I would like to see the comparison of the Q-values plots in the experiments for both networks.\\n\\n3) The intuition behind (2) should be better clarified. An example will be informative. 
I don't understand what |Z_a| is doing in this formula.\\n\\n4) One of the main contributions of the paper is proposing the reward shaping mechanism. When I read section 3.3, I was expecting to see some result for policy gradient algorithms as well, but this paper does not analyze these algorithms. That would be very nice to see its performance in PG algorithms though. In such case that you are not going to implement these algorithms, I would suggest moving this section to the end of the paper and add it to a section named discussion and conclusion.\\n\\n5) Is it possible to change the order of parts where you define $\\\\hat{r}$ with the next part where you define $z_a$? I think that the clarity of this section should be improved. This is just a suggestion to explore. I was confused at the first reading when I saw $z_a$, \\\"evaluation of transition\\\" and then (2) without knowing how you define evaluation and why.\\n\\n6) Is there any reason that ablation testing is only done for trio case? or you choose it randomly. Does the same behavior hold for other cases too?\\n\\n7) Why in figure 4a, random is always around zero?\\n\\n8) What will happen if you pass the location of the agent in addition to its observation? In this way, it is possible to have one Dual-Q-network shared for all agents. This experiment might be added to the baselines in future revisions.\", \"minor\": [\"afore-mentioned -> aforementioned\", \"section 4.2: I-DQN is used before definition\", \"Is it R_a in (4)?\", \"I assume that the table 1f-1h are not for the case of using independent Q-learning. Introducing these tables for the first time right after saying \\\"This is observed when we use independent Q-learning\\\" means that these values are coming from independent Q-learning, while they are not as far as I understand. 
Please make sure that this is correct.\", \"section 4.1: * who's -> whose\", \"This work is also trying to answer a similar question to yours and should be referenced: \\\"Learning Policy Representations in Multiagent Systems, by Grover et al. 2018\\\"\", \"Visual illustrations of the game would be helpful in understanding the details of the experiment. Preparing a video of the learned policies also would informative.\", \"-----------------------------------------------\"], \"after_rebuttal\": \"after reading the answers, I got answers to most of my questions. Some parts of the paper are vague that I see that other reviewers had the same questions. Given the amount of change required to address these modifications, I am not sure about the quality of the final work, so I keep my score the same.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting connections to study of social dilemma and role of peer evaluation; experiments not enough to make the scalability claim\", \"review\": \"The paper introduces a DQN based, hierarchical, peer-evaluation scheme for reward design that induces cooperation in semi-cooperative multi-agent RL systems. The key feature of this approach is its scalability since only local \\u201ccommunication\\u201d is required -- the number of agents is impertinent; no states and actions are shared between the agents. Moreover this \\u201ccommunication\\u201d is bound to be low dimensional since only scalar values are shared and has interesting connections to sociology. Interesting metaphor of \\u201cfeel\\u201d about a transition.\\n\\nRegarding sgn(Z_a) in Eq2, often DQN based approaches clip their rewards to be between say -1 and 1. The paper says this helps reduce magnitude, but is it just an optimization artifact, or it\\u2019s necessary for the reward shaping to work, is slightly unclear. \\n\\nI agree with the paper\\u2019s claim that it\\u2019s important for an agent to learn from it\\u2019s local observation than to depend on joint actions. However, the sentence \\u201cThis is because similar partially-observed transitions involving different subsets of agents will require different samples when we assume that agents share some state or action information.\\u201d is unclear to me. Is the paper trying to just say that it\\u2019s more efficient because what we care about is the value of the transition and different joint actions might have the same transition value because the same change in state occured. However, it seems that paper is making an implicit assumption about how rewards look like. 
If the rewards are a function of both states and actions, r(s,a) ignoring actions might lead to incorrect approximations.\\n\\nIn Sec 3.2, under scalability and flexibility, I agree with the paper that neural networks are weird and increasing the number of parameters doesn\\u2019t necessarily make the task more complex. However the last sentence ignores parameter sharing approaches as in [1], whose input size doesn\\u2019t necessarily increase as the number of agents grows. I understand that the authors want to claim that the introduced approach works in non homogeneous settings as well.\\n\\nI get the point being made, but Table 1 is unclear to me. In my understanding of the notations, Q_a should refer to Action Q-table. But the top row seems to be showing the perceived reward matrix. How does it relate to Mission Q-table and Action Q-table is not obviously clear.\\n\\nGiven all the setup and focus on flexibility and scalability, as I reach the experiment section, I am expecting some bigger experiments compared to a lot of recent MARL papers which often don\\u2019t have more two agents. From that perspective the experiments are a bit disappointing. Even if the focus is on pedagogy and therefore pursuit-evasion domain, not only are the maps quite small, the number of agents is not that large (maximum being 5). So it\\u2019s hard to confirm whether the scalability claim necessarily make sense here. I would also prefer to see some discussion/intuitions for why the random peer evaluation works as well as it did in Fig 4(a). It doesn\\u2019t seem like the problem is that of \\\\beta being too small. But then how is random evaluation able to do so much better than zero evaluation?\\n\\nOverall it\\u2019s definitely an interesting paper. 
However it needs more experiments to confirm some of its claims about scalability and flexibility.\\n\\nMinor points\\nI think the section on application to actor critic is unnecessary and without experiments, hard to say it would actually work that well, given there\\u2019s a policy to be learned and the value function being learned is more about variance reduction than actual actions.\\nIn Supplementary, Table 2: map size says 8x7. Which one is correct?\\n\\n[1]: https://link.springer.com/chapter/10.1007/978-3-319-71682-4_5\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Potentially an intriguing paper, however some key choices are poorly motivated and a few results miss leading.\", \"review\": \"The authors suggest a reward shaping algorithm for multi-agent settings that adds a shaping term based on the TD-error of other agents to the reward. In order to implement this, each agent needs to keep tack of two different value estimates through different DQN networks, one for the unshaped reward and one for the shaped reward.\", \"points_of_improvement_and_questions\": \"-Can you please motivate the form of the reward shaping suggested in (2) and (3)? It looks very similar to simply taking \\\\hat{r}_a = r_a + sum_{a' not a} z_{'a}. Did you compare against this simple formulation? I think this will basically reduce the method to Value Decomposition Networks (Sunehag \\u200e2017) \\n-The results on the prisoners dilemma seem miss-leading: The \\\"peer review\\\" signal effectively changes the game from being self-interested to optimising a joint reward. It's not at all surprising that agents get higher rewards in a single shot dilemma when optimising the joint reward. The same holds for the \\\"Selfish Quota-based Pursuit\\\" - changing the reward function clearly will change the outcome here. Eg. there is a trivial adjustment that adds all other agents rewards to the reward for agent i that will will also resolve any social dilemma.\\n-What's the point of playing an iterated prisoners dilemma when the last action can't be observed? That seems like a confounding factor. Also, using gamma of 0.9 means the agents' horizon is effectively limited to around 10 steps, making 50k games even more unnecessary. \\n-\\\"The input for the centralized neural network involves the concatenation of the observations and actions, and optionally, the full state\\\": This is not true. 
For example, the Central-V baseline in COMA can be implemented by feeding the central state along (without any actions or local observations) into the value-function. It is thus scalable to large numbers of agents. \\n-The model seems to use a feed-forward policy in a partially observable multi-agent setting. Can you please provide a justification for this choice? Some of the baseline methods you compare against, eg. QMIX, were developed and tested on recurrent policies. Furthermore, independent Q-learning is known to be less stable when using feedfoward networks due to the non-stationarity issues arising (see eg. \\\"Stabilising Experience Replay\\\", ICML 2017, Foerster et al). In it's current form the concerns mentioned outweigh the contributions of the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkVhlh09tX | Pay Less Attention with Lightweight and Dynamic Convolutions | [
"Felix Wu",
"Angela Fan",
"Alexei Baevski",
"Yann Dauphin",
"Michael Auli"
] | Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU. | [
"Deep learning",
"sequence to sequence learning",
"convolutional neural networks",
"generative models"
] | https://openreview.net/pdf?id=SkVhlh09tX | https://openreview.net/forum?id=SkVhlh09tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkxa89b2G4",
"H1gWfxVozE",
"SJg8yJrbG4",
"H1xwAVoPb4",
"ByeyU5YM-E",
"SyexV29PJN",
"S1l8GOC-k4",
"SkgRkkYAAQ",
"BkxEwCe6pQ",
"BJxVasJp6m",
"B1lA_ikT6m",
"r1gUWo16Tm",
"Bkx6Dmk6Tm",
"SygyHzyaTX",
"BJlW-fk667",
"BklnHDDcT7",
"S1xedHZ_pX",
"rygh1-1upQ",
"Byg-bBXZaX",
"Hkgbd38ChQ",
"H1gJoj8C3X",
"H1glXkLRhm",
"Byx0nlaKnX"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1547602516905,
1547546632659,
1546895069752,
1546265806752,
1545931335015,
1544166440381,
1543788557871,
1543569126036,
1542422108097,
1542417339939,
1542417269568,
1542417150229,
1542415205126,
1542414903484,
1542414840807,
1542252356203,
1542096231811,
1542086884155,
1541645560534,
1541463145124,
1541462934632,
1541459735530,
1541161141698
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1115/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1115/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1115/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1115/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1115/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1115/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Other components besides the convolutions and self-attentions are also important\", \"comment\": \"One of our contributions is to better understand the importance of self-attention which is often perceived as the most important design choice in the architecture of Vaswani et al.\\nTable 3 of our paper shows that self-attention alone only accounts for a small portion of the improvement of Vaswani et al. over previous work, e.g., row 6 in Table 3 \\\"CNN Depthwise + Increasing kernel\\\" uses the depthwise convolution of Kaiser et al. (2017) and it is only 0.5 BLEU behind our reimplementation of Vaswani et al.\\nTherefore, modeling choices other than self-attention contribute a very large fraction of the improvement of Vaswani et al. over other work. In early experiments, we found that FFN blocks between the self-attention module in the Transformer are very important.\"}",
"{\"comment\": \"I'm not sure if I have fully-understand your great work.\\nIn my opinion, the difference between you and (Kaiser et al., 2017) is the softmax-normalized and share weights over the channel dimension. And you use these two mechanism not only reduce the number of parameter, but increase the result greatly(from 26.1 to 28.9). \\nSo can you explain why these two mechanism is so useful?\\nThanks\", \"title\": \"Problem about lightconv\"}",
"{\"title\": \"Re: Problems on the implementation of LightConv\", \"comment\": \"1. Thank you for pointing out this typo. It should reduce the number of parameters by a factor of \\u201cd/H\\u201d rather than \\u201cH\\u201d, so 112 is still the correct number.\\n\\n2. The matrix would be a band matrix, i.e. entries outside of the kernel are zeros. As you noticed, this is similar to the matrix multiplication in the self-attention, which has O(n^2) time complexity. However, we observe that when the sequence is short (< 1000), this implementation is practically faster.\"}",
"{\"comment\": \"It's a good paper and easy to catch, and I got 2 problems confused me when reading it. Forgive me if I misunderstand the paper.\\n\\n1. In the weight sharing of LightConv in Section 3, as far as I understand, the description \\\"We tie the parameters of every subsequent number of d/H channels, which reduces the number of parameters by a factor of H.\\\" should led to 448 (d/H x k) weights, instead of 112 stated, which I think is a typo.\\n\\n2. As for the implementation in the same section, the operation \\\"batch matrix multiplication\\\" confused me a lot. As LightConv is a conv operator, we can implement it by (1) applying image_to_coloum to input, (2) copying the kernels and (3) taking matrix multiplication. These procedures are taken by many DL platforms, like Caffe, to implement the conv operator. But the paper states that only reshape and transpose are applied to input before \\\"batch matrix multiplication\\\", which seems an aggregation over all position (same sprit as self-attention) when taking batch matrix multiplication, conflicting to the paper's claim.\\n\\n\\nLooking forward to your code!\", \"title\": \"Problems on the implementation of LightConv\"}",
"{\"title\": \"Thank you for pointing out these interesting CNN papers!\", \"comment\": \"There are indeed some similarities to their work, but there are also significant differences, including:\\n1) Their methods focus on using the information from one sequence to generate the convolution filter that operates on the other sequence, while we focus on using one sequence as both the source and the target like self-attentions. Admittedly, Gong et al. use intra-sentence convolutional interactions; however, their ablation study is limited to removing them instead of replacing them with self-attentions.\\n2) They use a filter generator network to predict the kernel, while DynamicConv requires only a simple linear projection.\\n3) Their models use the same convolution filter at each time step, while our DynamicConv uses different filters at each time step. This is possible due to LightConv (depthwise + weight sharing) which significantly reduces the number of parameters.\\n4) Their filter generator network uses the information from the whole sequence to generate a convolutional filter, while we only use the information at the current time step.\\n5) Our filters are softmax-normalized, while theirs are not.\\n\\nWe consider their work as orthogonal to our methods. Future work may try to apply LightConv and DynamicConv to their models in order to achieve even better performance!\"}",
"{\"comment\": \"I found your work very interesting, but there are some recent works that are closely related to your work, which take a sentence as input and generate convolutional kernels that are further applied on the sentence, but with a different granularity. I think those works are definitely worth comparing to.\", \"missing_references\": \"Learning Context-Sensitive Convolutional Filters for Text Processing (Shen et al.)\\nConvolutional Interaction Network for Natural Language Inference (Gong et al.)\", \"title\": \"Can you explain the difference between your work with more closely related works such as Convolutional Net (Gong et al.) and Context-Sensitive Convolution (Shen et al.)\"}",
"{\"metareview\": \"Very solid work, recognized by all reviewers as worthy of acceptance. Additional readers also commented and there is interest in the open source implementation that the authors promise to provide.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Accept\"}",
"{\"comment\": \"please note bytenet can also be used for language model, i.e., using only decoder. So it is very important to compare with it, which is also one type of lightweight cnn\", \"title\": \"hi\"}",
"{\"title\": \"code\", \"comment\": \"We are planning to share the code later.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your comments. We improved the description of the adaptive softmax hyperparameters ('band' terminology) in the updated version of the paper. We hope this is clearer now.\\n\\nWe refer to different subsets of the vocabulary as 'bands'. The most frequent words are denoted as \\\"head band\\\", and so on.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your fruitful comments.\", \"q1\": \"For DynamicConv, weight sharing reduces both computation and memory footprint, while for LightConv, it only reduces memory footprint. Yes, we did try using large H sizes; however, the performance degrades and the memory footprint increases dramatically which prohibits us from using a large batch size. As a consequence, training becomes much slower. For your information, DynamicConv with H=64 gets BLEU score 26.8 \\u00b1 0.1 on newstest2013 compared to 26.9 \\u00b1 0.2 with H=16 in Table 3.\", \"q2\": \"We conducted an additional experiment based on your suggestion. We set the encoder kernel size to 237 and the decoder kernel size to 267 at each layer to cover the whole sequence. The BLEU score drops slightly to 26.7 \\u00b1 0.1. This is a small difference and we expect that slightly tuned hyperparameters would close the gap.\", \"q3\": \"In section 6.4, we show experiments for document summarization (CNN/DailyMail) where the input sequence is capped at 400 words and the output sequence is 57 words on average with some examples having summaries of up to 478 words. Our results show that the model performs very well in this setting.\", \"q4\": \"We found it very important as training diverged without softmax-normalization (see Note in Table 3) for DynamicConv. We added a comparison of softmax-normalization to various alternatives to Appendix A of the updated paper.\\nFurthermore, we are able to train the model without softmax-normalization with more aggressive gradient clipping, a lower learning rate (reducing it by a factor of 5) and more updates (increasing it by 5 times), but this slowed down training dramatically.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your fruitful comments.\", \"q\": \"\\u201cYou should have tried a language modeling dataset with longer-term dependencies\\u201d\\nIn section 6.4, we show experiments for CNN/DailyMail document summarization which entails long input and output sequences. The input sequence is capped at 400 words and the output sequence is 57 words on average with some examples having summaries of up to 478 words.\", \"here_are_training_times\": \"Self-attention, 17.5h (with or without limited window)\\nDynamicConv, 16.9h\\nLightConv, 16.5h\\n\\nOur current implementation of dynamic convolutions is actually quite inefficient. We put the various convolution kernels in a sparse tensor that has only non-zero entries for the diagonal entry, thus using a lot of space. We expect a dedicated CUDA kernel to be more efficient. We are investigating such a kernel.\\n\\nNote that batching is much more efficient during training which smooths out some of the speed advantages we see at test time. During inference batching is by far not as efficient due to repeated invocation of the decoder at every time step.\"}",
"{\"title\": \"Re: explain the advantages over bytenet\", \"comment\": \"We do compare to a CNN baseline (non-separable convolutions), see Table 3 \\\"CNN (k=3)\\\". However, our model does use source-target attention which is not the case for ByteNet. Finally, our model performs better on newstest2014 of WMT English-German translation at 29.7 BLEU vs. 23.75 BLEU for ByteNet.\\n\\nAnd yes, we will release the code.\"}",
"{\"title\": \"CUDA kernel & code\", \"comment\": \"We are currently investigating a dedicated CUDA kernel and we will make the code available.\"}",
"{\"title\": \"Open sourcing the code\", \"comment\": \"Yes, we will share the code at a later stage!\"}",
"{\"comment\": \"I found this paper is very interesting, would you like to share the source code, which is very helpful for fully understanding it\", \"title\": \"Dear authors,\"}",
"{\"comment\": \"1 what do you mean by saying \\u201cWe expect a dedicated CUDA kernel to be much more efficient.\\u201d\\n\\nYou mean the efficiency. advantage in current CUDA is not obvious??\\n\\nis it possible to expect a new CUDA kernel specifically designed for your model\\n\\n2 code is not available\", \"code_and_pre_trained_models_available_at_http\": \"//anonymized\", \"title\": \"can you explain this\"}",
"{\"comment\": \"Hi,\\n\\nI have a question. You claim that your lightweight cnn can has fewer parameters and linear time. I think it is very necessary to compare with a well-know CNN sequence baseline, i.e. bytenet. it is also a pure con sequence model and shows very good performance in language modeling and translation. Have you compare with it?? Better accuracy or higher efficiency??\\n\\nDo you plan to you share your code? I am quite interested.\", \"title\": \"Hi can you explain the advantages over bytenet\"}",
"{\"comment\": \"Hi, the Code link is not available!\", \"title\": \"Hi, the Code link is not available!\"}",
"{\"title\": \"Where \\\"head band\\\" comes from\", \"comment\": \"The \\\"head band, next band, last band\\\" terminology is from https://openreview.net/forum?id=ByxZX20qFQ, which is presumably the cited anonymous paper.\"}",
"{\"title\": \"Major advance in sequence-to-sequence architectures\", \"review\": [\"The authors present lightweight convolutions and dynamic convolutions, two significant advances over existing depthwise convolution sequence models, and demonstrate very strong results on machine translation, language modeling, and summarization. Their results go even further than those of the Transformer paper in countering the conventional wisdom that recurrence (or another way of directly modeling long-distance dependencies) is crucial for sequence-to-sequence tasks. Some things that I noticed:\", \"While you do cite \\\"Depthwise Separable Convolutions for Neural Machine Translation\\\" from Kaiser et al. (ICLR 2018), there are some missed opportunities to compare more directly to that paper (e.g., by comparing to their super-separable convolutions). Kaiser et al. somewhat slipped under the community's radar after the same group released the Transformer on arXiv a week later, but it is in some ways a more direct inspiration for your work than the Transformer paper itself.\", \"I'd like to see more analysis of the local self-attention ablation. It's fantastic to see such a well-executed ablation study, especially one that includes this important comparison, but I'd like to understand more about the advantages and drawbacks of local self-attention compared to dynamic convolutions. (For instance, dynamic convolutions are somewhat faster at inference time in your results, but I'm unsure if this is contingent on implementation choices or if it's inherent to the architecture.)\", \"From a systems and implementation perspective, it would be great to see some algorithm-level comparisons of parallelism and critical path length between dynamic convolutions and self-attention. My gut feeling is that dynamic convolutions are significantly more amenable to parallelization on certain kinds of hardware, especially at train time, but that the caching that's possible in self-attention inference might make the approaches more comparable in terms of critical path latency at inference time; this doesn't necessarily line up with your results so far though.\", \"You mostly focus on inference time, but you're not always as clear about that as you could be; I'd also like to see train time numbers. Fairseq is incredibly fast on both sides (perhaps instead of just saying \\\"highly optimized\\\" you can point to a paper or blog post?)\", \"The nomenclature in this space makes me sad (not your fault). Other papers (particularly a series of three papers from Tao Shen at University of Technology Sydney and Tianyi Zhou at UW) have proposed architectures that are similarly intermediate between self-attention and (in their case 1x1) convolution, but have decided to call them variants of self-attention. I could easily imagine a world where one of these groups proposed exactly your approach but called it \\\"Dynamic Local Self-Attention,\\\" or even a world where they've already done so but we can't find it among the zillions of self-attention variants proposed in the past year. Not sure if there's anything anyone can do about that, but perhaps it would be helpful to briefly cite/compare to some of the Shen/Zhou work.\", \"I think you should have tried a language modeling dataset with longer-term dependencies, like WikiText-103. Especially if the results were slightly weaker than Transformer, that would help place dynamic convolutions in the architecture trade-off space.\", \"That last one is probably my most significant concern, and one that should be fairly easy to address. But it's already a great paper.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting work, strong results, good paper\", \"review\": \"Overall, this is a really good paper.\\nThe authors propose an alternative to content based similarity for NL applications as compared to self-attention models by proposing the parameter and sequence length efficient Lightweight and Dynamic Convolutions.\\nThe authors show, over various NL tasks like Translation, LM and Abstractive summarisation, the comparison of self attention models with Lightweight and Dynamic convolution layer.\\nThe weight sharing was particularly interesting and can be seen as applying different heads for the same kernel. \\n\\nThe experimental results give strong evidence for these alternatives proposed by the authors.\\nThe lightweight and dynamic convolution layers, both perform similar or better than the self-attention layer in all the tasks.\\nThe WMT EnFr result is much better than all the other models, establishing a new state of the art.\", \"question_for_the_authors\": \"1. Is the weight sharing within the kernel mostly for reducing computation?\\nIf so, did you trying varying H size and measure how much that affects performance? What is surprising is that, in the ablation table the weight sharing increases the BLEU score by 0.1. \\n2. Did you run any experiments where the kernel size covers the whole sentence?\\n3. Since the number of parameters only change linearly wrt sequence length, did you try running this on datasets that have really long sequences to show the effectiveness of this approach further?\\n4. How important was softmax normalization for training?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"well-written, surprising and promising results\", \"review\": \"The paper proposes a convolutional alternative to self-attention. To achieve this, the number of parameters of a typical convolution operation is first reduced by using a depth-wise approach (i.e. convolving only within each channel), and then further reduced by tying parameters across layers in a round-robin fashion. A softmax is applied to the filter weights, so that the operation computes weighted sums of its (local) input (LightConv).\\n\\nBecause the number of parameters is dramatically reduced now, they can be replaced by the output of an input-dependent linear layer (DynamicConv), which gives the resulting operation a \\\"local attention\\\" flavour. The weights depend only on the current position, as opposed to the attention weights in self-attention which depend on all positions. This implies that the operation is linear in the number of positions as opposed to quadratic, which is a significant advantage in terms of scaling and computation time.\\n\\nIn the paper, several NLP benchmarks (machine translation, language modeling) that were previously used to demonstrate the efficacy of self-attention models are tackled with models using LightConv and DynamicConv instead, and they are shown to be competitive across the board (with the number of model parameters kept approximately the same).\\n\\nThis paper is well-written and easy to follow. The proposed approach is explained and motivated well. The experiments are thorough and the results are convincing. I especially appreciated the ablation experiment for which results are shown in Table 3, which provides some useful insights beyond the main point of the paper. The fact that a linear time approach can match the performance of self-attention based models is a very promising and somewhat surprising result.\\n\\nIn section 5.3, I did not understand what \\\"head band, next band, last band\\\" refers to. I assume this is described in the anonymous paper that is cited, so I suppose this is an artifact of blind review. Still, even with the reference unmasked it might be useful to add some context here.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HkG3e205K7 | Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives | [
"George Tucker",
"Dieterich Lawson",
"Shixiang Gu",
"Chris J. Maddison"
] | Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by (Kingma & Welling 2013, Rezende et al. 2014). These approaches maximize a variational lower bound on the intractable log likelihood of the observed data. Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases. Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases (Rainforth et al. 2018, Le et al. 2018). Roeder et al. (2017) propose an improved gradient estimator; however, they are unable to show it is unbiased. We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick. The doubly reparameterized gradient (DReG) estimator does not suffer as the number of samples increases, resolving the previously raised issues. The same idea can be used to improve many recently introduced training techniques for latent variable models. In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep update (RWS) (Bornschein & Bengio 2014), and the jackknife variational inference (JVI) gradient (Nowozin 2018). Finally, we show that this computationally efficient, drop-in estimator translates to improved performance for all three objectives on several modeling tasks. | [
"variational autoencoder",
"reparameterization trick",
"IWAE",
"VAE",
"RWS",
"JVI"
] | https://openreview.net/pdf?id=HkG3e205K7 | https://openreview.net/forum?id=HkG3e205K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lc8Cd-ZN",
"HJe9Sh_--N",
"Bye_rmJWlV",
"rJlMEiAnam",
"rygSDcRhaQ",
"SJluWc0npX",
"HyelqKC2pX",
"Hkx5PKC3hX",
"Hyle7iIF27",
"S1gF1q36j7"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545862737798,
1545862210457,
1544774463663,
1542413098280,
1542412892702,
1542412799600,
1542412679758,
1541364066387,
1541135128060,
1540372960534
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1114/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1114/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1114/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1114/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1114/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1114/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1114/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1114/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1114/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1114/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"RE:\", \"comment\": \"Thank you for pointing out this source of confusion. Indeed, the identity has been used extensively in previous publications and we are not claiming it as an original contribution of the paper. We introduce the identity in Eq. 5 as a \\\"well-known equivalence\\\". As the AC notes, our contribution is in the application of the identity, the experimental evaluation, and the theoretical asymptotic analysis.\"}",
"{\"title\": \"Camera ready update\", \"comment\": \"We have uploaded the final version with a link to the source code used for the experiments ( https://sites.google.com/view/dregs ).\"}",
"{\"metareview\": \"The paper is well written and easy to follow. The experiments are adequate to justify the usefulness of an identity for improving existing multi-Monte-Carlo-sample based gradient estimators for deep generative models. The originality and significance are acceptable, as discussed below.\\n\\nThe proposed doubly reparameterized gradient estimators are built on an important identity shown in Equation (5). This identity appears straightforward to derive by applying both score-function gradient and reparameterization gradient to the same objective function, which is expressed as an expectation. The AC suspects that this identity might have already appeared in previous publications / implementations, though not being claimed as an important contribution / being explicitly discussed. While that identity may not be claimed as the original contribution of the paper if that suspicion is true, the paper makes another useful contribution in applying that identity to the right problem: improving three distinct training algorithms for deep generative models. The doubly reparameterized versions of IWAE and reweighted wake-sleep (RWS) further show how IWAE and RWS are related to each other and how they can be combined for potentially further improved performance. \\n\\nThe AC believes that the paper makes enough contributions by well presenting the identity in (5) and applying it to the right problems.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An useful identity that helps improve existing training algorithms for deep generative models\"}",
"{\"title\": \"Updated manuscript\", \"comment\": \"We have updated the manuscript based on reviewer feedback. Apart from clarifying edits, we have rewritten the derivation in Appendix 8.1 and included a plot of variance for several values of K as Appendix Figure 8.\"}",
"{\"title\": \"Author response\", \"comment\": \"Recent work on reparameterizing mixture distributions has shown that the necessary gradients can be computed with the implicit reparameterization trick (Graves 2016, Jankowiak & Obermeyer 2018; Jankowiak & Karaletsos 2018; Figurnov et al. 2018). Using this approach to reparameterize the mixture, DReGs readily apply when q is a Gaussian mixture model. We mention this explicitly in the text now.\\n\\nEq. 6 explicitly characterizes the bias in STL. There is no reason to believe this term analytically vanishes, and we confirm numerically that it is non-zero in the toy Gaussian example. We believe this is sufficient to support our claim of bias.\\n\\nWe present the K ELBO results in these plots to be consistent with previous work (Rainforth et al. 2018). We agree that it can be misleading for the reasons you indicated, so we now explicitly call this out in the maintext.\\n\\nYes, the color assignment is the same. We note this in the caption for both figures now.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for the helpful suggestions.\\n\\n1. Thank you for pointing out this source of confusion. The correctness of the proof is related to the fact that \\\\frac{\\\\partial}{\\\\partial \\\\phi} g(\\\\phi, \\\\tilde{\\\\phi}) |_{\\\\tilde{\\\\phi} = \\\\phi} != \\\\frac{\\\\partial}{\\\\partial \\\\phi} g(\\\\phi, \\\\phi). On the left hand side the derivative is taken first, which results in a function of \\\\phi and \\\\tilde{\\\\phi}, which we then evaluate. As you note, this is not equivalent to setting \\\\tilde{\\\\phi} = \\\\phi, and then taking the derivative. We want the former. Following your suggestion, we have completely rewritten the proof to avoid this confusing step.\\n\\n2. We used the trace of the Covariance matrix (normalized by the number of parameters) to summarize the variance, and we implemented this by maintaining exponential moving average statistics. SNR was computed as the mean of the estimator divided by the standard deviation (as in Rainforth et al. 2018). We added this information as footnotes in the maintext.\\n\\n3. We have added a plot of the variance of the gradient estimator as K changes (Appendix Fig. 8). We found that as K increases, for IWAE and JVI, the variance of the doubly reparameterized gradient estimator slowly decreases relative to the variance of the original gradient estimator. On the other hand for RWS, we found that as K increases, the variance of the doubly reparameterized gradient estimator gradually increases relative to the variance of the original gradient estimator. However, we emphasize that in all cases, the variance of the doubly reparameterized gradient estimator was less than the variance of the original gradient estimator.\\n\\n4. Yes, intuitively, the right hand side directly takes advantage of the gradient of f whereas the left hand side ends up computing something akin to finite differences. We have added a sentence explaining this intuition in the maintext.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for checking the derivations. We appreciate the positive comments.\"}",
"{\"title\": \"Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives\", \"review\": \"This paper applies a reparameterization trick to estimate the gradients objectives encountered in variational autoencoder based frameworks with continuous latent variables. Especially the authors use this double reparameterization trick on Importance Weighted Auto-Encoder (IWAE) and Reweighted Wake-Sleep (RWS) methods. Compared to IWAE, the developed method's SNR does not go to zero with increasing the number of particles.\\n\\nOverall, I think the idea is nice and the results are encouraging. I checked all the derivations, and they seem to be correct. Thus I recommend this paper to be accepted in its current form.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good paper\", \"review\": \"The paper observes the gradient of multiple objective such as IWAE, RWS, JVI are in the form of some \\u201creward\\u201d multiplied with score function which can be calculated with one more reparameterization step to reduce the variance. The whole paper is written in a clean way and the method is effective.\\n\\nI have following comments/questions:\\n\\n1. The conclusion in Eq(5) is correct but the derivation in Sec. 8.1. may be arguable. Writing \\\\phi and \\\\tilde{\\\\phi} at the first place sets the partial derivative of \\\\tilde{\\\\phi} to \\\\phi as 0. But the choice of \\\\tilde{\\\\phi} in the end is chosen as \\\\phi. If plugging \\\\phi to \\\\tilde{\\\\phi}, the derivation will change. The better way may be calculating both the reparameterization and reinforce gradient without redefining a \\\\tilde{\\\\phi}.\\n\\n2. How does the variance of gradient calculated where the gradient is a vector? And how does the SNR defined in the experiments?\\n\\n3. How does the variance reduction from DReG changes with different value of K?\\n\\n4. Is there any more detailed analysis or intuition why the right hand side of Eq(5) has lower variance than the left hand side?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Proposed method is interesting but additional experiments are needed\", \"review\": \"Overall:\\nThis paper works on improving the gradient estimator of the ELBO. Author experimentally found that the estimator of the existing work(STL) is biased and proposed to reduce the bias by using the technique like REINFORCE.\\nThe problem author focused on is unique and the solution is simple, experiments show that proposed method seems promising.\", \"clarity\": \"The paper is clearly written in the sense that the motivation of research is clear, the derivation of the proposed method is easy to understand.\", \"significance\": \"I think this kind of research makes the variational inference more useful, so this work is significant. But I cannot tell the proposed method is really useful, so I gave this score.\\nThe reason I doubt the reason is that as I written in the below, the original STL can handle the mixture of Gaussians as the latent variable but the proposed method cannot. So I do not know which is better and whether I should use this method or use the original STL with flexible posterior distribution to tighten the evidence lower bound. I think additional experiments are needed. I know that motivation is a bit different for STL and proposed method but some comparisons are needed.\", \"question_and_minor_comments\": \"In the original paper of STL, the author pointed out that by freezing the gradient of variational parameters to drop the score function term, we can utilize the flexible variational families like the mixture of Gaussians.\\nIn this work, since we do not freeze the variational parameters, we cannot utilize the mixture of Gaussians as in the STL. IWAE improves the lower bound by increasing the samples, but we can also improve the bound by specifying the flexible posteriors like the mixture of Gaussians in STL.\\nFaced on this, I wonder which strategy is better to tighten the lower bound, should we use the STL with the mixture of Gaussians or use the proposed method? \\nTo clarify the usefulness of this method, I think the additional experimental comparisons are needed.\\n\\nAbout the motivation of the paper, I think it might be better to move the Fig.1 about the Bias to the introduction and clearly state that the author found that the STL is biased \\\"experimentally\\\".\\n\\nFollowings are minor comments.\\nIn experiment 6.1, I'm not sure why the author present the result of K ELBO estimator in the plot of Bias and Variance.\\nI think author want to point that when K=1, STL is unbiased with respect to the 1 ELBO, but when k>1, it is biased with respect to IWAE estimator.\\nHowever, the objective of K ELBO and IWAE are different, it may be misleading. So this should be noted in the paper.\\n\\nIn Figure 3, the left figure, what each color means? Is the color assignment is the same with the middle figure?\\n(Same for Figure 4)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Bylnx209YX | Adversarial Attacks on Graph Neural Networks via Meta Learning | [
"Daniel Zügner",
"Stephan Günnemann"
] | Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers. | [
"graph mining",
"adversarial attacks",
"meta learning",
"graph neural networks",
"node classification"
] | https://openreview.net/pdf?id=Bylnx209YX | https://openreview.net/forum?id=Bylnx209YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJg9H3BvgV",
"SkeoiDVcAX",
"rJlQtgaW0m",
"B1xHHl6Z0m",
"BJxOYyT-0m",
"Skli62kQ6Q",
"BkltS1uMpQ",
"H1lFhSrF3X",
"HJe1c7y_27",
"HkeXZtLM2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545194561920,
1543288739493,
1542733946682,
1542733884556,
1542733696152,
1541762242711,
1541730112779,
1541129648733,
1541038983081,
1540675834715
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1112/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1112/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1112/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1112/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1112/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1112/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1112/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1112/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1112/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a method for investigating the robustness of graph neural nets for the node classification problem; training-time attacks that perturb the graph structure are generated using a meta-learning approach. Reviewers agree that the contribution is novel and that the empirical results support the validity of the approach.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A novel meta-learning based approach for testing robustness of graph neural nets\"}",
"{\"title\": \"the authors have addressed my concerns\", \"comment\": \"The authors have made efforts in addressing my concerns and have improved their paper.\"}",
"{\"title\": \"Re: Review 2\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you for your constructive feedback and suggestions. We have run experiments on a larger dataset with roughly 20K nodes and found that our attacks are also successful in this scenario. You can find the results in Table 8 in Appendix F of the updated manuscript. Furthermore, we have included a discussion on the complexity of our approach in Appendix C in the updated manuscript.\", \"regarding_your_question_about_the_transferability_to_other_graph_embedding_algorithms\": \"We would like to point out that we already evaluate the impact of our attacks on DeepWalk. Our experiments show that our method\\u2019s adversarial attacks also transfer to DeepWalk.\"}",
"{\"title\": \"Re: Review 1\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for your detailed and constructive feedback. We have used your suggestions to improve the paper and have uploaded the updated manuscript.\", \"we_would_like_to_address_each_point_individually_here\": \"1) Based on your suggestion, we ran experiments on Citeseer where we use meta gradients to modify the graph structure and features simultaneously. We evaluated on GCN and CLN (DeepWalk does not use features) and we observed that the impact of the combined attacks is comparable but slightly lower (GCN: 38.6 vs 37.2, CLN: 35.3 vs 34.2; structure-only vs combined). We attribute this to the fact that we assign the same \\u2018cost\\u2019 to structure and feature changes, but arguably we expect a structure perturbation to have a stronger effect on performance than a feature perturbation. We have summarized these findings in Appendix E of the updated manuscript.\\n\\n2) We would like to emphasize that the attack model does *not* have access to the ground-truth labels of the unlabeled nodes V_u. We use the labels of the labeled nodes to train the surrogate classification model and predict the labels \\\\hat{C}_u of the unlabeled nodes. These labels are then treated as the \\u2018ground truth\\u2019 for the self-training loss L_self. Thus, the attack never uses or has access to the labels C_u of the unlabeled nodes.\\n\\n3) We agree that the set of admissible attacks is significantly smaller than O(N^{2 delta}). However, since it is challenging to derive a tighter upper bound on the size of the set of admissible perturbations, we decided to use this conservative upper bound. The main point we wanted to make (which also holds for a tighter bound) is that there is an exponential growth in the number of perturbations, i.e. exhaustive search is infeasible.\\n\\n4) Thank you for this suggestion. We have updated the manuscript to make this point more clear. 
Yes, the dimensionality of the adjacency matrix is NxN.\\n\\n5) T is the number of inner optimization steps (i.e., gradient descent steps of learning the surrogate model). S is the number of meta-steps on the graph structure. We have replaced G^(S) by G^(delta) in the manuscript to avoid confusion.\\n\\n6) Thank you for raising this point. We have changed the section title to \\u2018Greedy Poisoning Attacks via Meta Gradients\\u2019 in the updated manuscript.\\n\\n7) We have changed (i,j) to (u,v). A negative gradient in an entry (u,v) means that the target quantity (e.g. error) increases when the value is decreased. Decreasing the value is only admissible for node pairs connected by an edge, i.e. we change the adjacency matrix entry from a 1 (edge) to a 0 (no edge). To account for this, we flip the sign of gradients of node pairs connected by an edge, as achieved by multiplying by (-2a_uv+1). This enables us to use the arg max operation later. Equivalently, we could compute the maximum of the gradients where there is no edge and the minimum where the nodes are connected, and then choosing the entry with the higher absolute value as the perturbation.\\n\\n8) You are correct, Meta-Train uses l_atk=-l_train. \\n\\n9) We have added an experiment to Appendix D showing the effect of the unnoticeability constraint (see Figure 4). As shown, even when enforcing the constraints the attacks have similar impact. Thus we conclude that the constraint should always be enforced since they improve unnoticeability while at the same time our attacks remain effective.\\n\\n10) We agree that an increasing misclassification rate is expected when increasing the number of edges changed. Our intention in Figure 1 was to visualize this relationship and, more importantly, to show that our attacks consistently outperform the DICE baseline that has access to all class labels, i.e. more information than our method.\"}",
"{\"title\": \"Re: Review 3\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for your constructive feedback and suggestions. We used your suggestions to improve the manuscript.\\n(1+3) We have added an algorithm summary and complexity discussion to the appendix. \\n(2) As Reviewer 1 also requested information about graph attribute attacks, we ran experiments on Citeseer where we use meta gradients to modify the graph structure and features simultaneously. We evaluated on GCN and CLN (DeepWalk does not use features) and we observed that the impact of the combined attacks is comparable but slightly lower (GCN: 38.6 vs 37.2, CLN: 35.3 vs 34.2; structure-only vs combined). We attribute this to the fact that we assign the same \\u2018cost\\u2019 to structure and feature changes, but arguably we expect a structure perturbation to have a stronger effect on performance than a feature perturbation. We have summarized these findings in Appendix E of the updated manuscript.\", \"regarding_your_question_about_the_benefit_of_meta_learning\": \"Meta learning is a principle that enables us to directly tackle the bilevel optimization problem. That is, the meta gradient gives us an indication of how the value of the outer optimization problem will change when modifying the input to the inner optimization problem (i.e. the classifier training). This proves to be a very powerful principle for poisoning attacks (essentially a bilevel optimization problem) on node classification as we show in our work.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Dear commenter,\\n\\nWhile we appreciate any constructive feedback and questions on OpenReview, we have the impression that you have not read our paper. Still, since your comment contains various incorrect claims, we address your points here:\\n\\n1) Graph neural networks are NOT a special case of networks for text classification. If at all, they are generalizations. We recommend to read the broad literature on graph neural networks to clarify your confusion (references are mentioned in our paper). Here we just want to point out two important differences: (i) The neighborhood in graphs is not ordered; unlike text/images where you have before-after/left-right-up-down information. (ii) The interaction structure in graphs, i.e. the edges, is an explicit part of the data (i.e. observed) -- while in text it is NOT. Put simply: The graph structure is part of the data and, thus, can be manipulated. This is what we consider in our work.\\n\\n2) You are linking to a discussion which does NOT apply to our setting. (i) It talks about text classification. (ii) The discussion you are linking to claims that text classification can easily be fooled (e.g. just simple random perturbations). Simple perturbations, however, do NOT have a strong effect on graph neural networks. This result was already clearly shown by other graph attack papers (see again the references in our paper). We also compare to strong baselines (including a random one) in our work which are consistently outperformed by our method.\\n\\n3) Your statement \\u201cit is even easier to fool graph neural networks\\u201d is simply incorrect. Due to (1) you cannot make any direct conclusion from text to graphs and due to (2) it has been shown that it is NOT easy to fool graph neural networks (e.g. with random perturbations). Due to the challenging nature of achieving graph attacks, we need more advanced principles -- like the one proposed in our paper.\"}",
"{\"comment\": \"Graph neural networks are just special cases of neural networks for classifying text (which is just a chain graph). To generate text that fools state-of-the-art classifiers one doesn't need to do much, and certainly not the method used in the paper (see e.g. the discussion in https://openreview.net/forum?id=ByghKiC5YX&noteId=B1xno5Dz6X). It is therefore quite obvious that it is even easier to fool graph neural networks, so why all the fancy methods?\", \"title\": \"Why is this problem important?\"}",
"{\"title\": \"Good paper on using meta-learning to solve the bilevel optimization problem in graph attacking\", \"review\": \"This paper proposes an algorithm to alter the structure of a graph by adding/deleting edges so as to degrade the global performance of node classification. The main idea is to use meta-gradients from meta-learning to solve the bilevel optimization problem.\\n\\nThe paper is clearly presented. The main contribution is to use meta-learning to solve the bilevel optimization on discrete graph data using a greedy selection approach. From the experimental results, this treatment is really effective in attacking the graph learning models (GCN, CLN, DeepWalk). However, the motivation for using meta-learning to solve the bilevel optimization is not very clear to me, e.g., what advantages can it offer?\\n\\nTheoretically, the paper could have given some discussion on the optimality of the meta-gradient approach to bilevel optimization to strengthen the theoretical aspect. For the greedy selection approach in Eq (8), is there any sub-modularity in the score function used?\", \"some_minor_suggestions_and_comments\": \"1) please summarize the attacking procedure in the form of an algorithm\\n2) please add some discussion on attacking the graph attributes besides the structure\\n3) please provide a complexity analysis and empirical evaluations of the meta-gradient computations and approximations\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Used meta-learning by treating graph structure as hyperparameter to get the poisoned graph. Achieved reasonable results on three graph datasets.\", \"review\": \"This paper studies the problem of learning poisoned graph parameters that can maximize the loss of a graph neural network. The proposed use of meta-learning to compute the second-order derivatives to get the meta-gradients seems reasonable. The authors also proposed approximate methods that treat the graph as learning parameters, which could be more efficient since the second-order derivatives are no longer computed. The experimental results on three graph datasets show that the proposed model could improve the misclassification rate of the unlabeled nodes.\\n\\nThe paper is well-written. It would be good if the authors could address the following suggestions or concerns:\\n\\n1) The proposed attack model assumes that only the graph structure is accessible to the attackers, which might limit the proposed model in real applications. A joint study with the graph features would be useful to convince a broader audience and potentially have a larger impact.\\n\\n2) In the self-learning setting, in order to define l_atk, l_self is used; however, l_self uses v_u, which is the ground truth label of the test nodes based on my understanding, so this approach uses labels of the unlabeled data, which might not be applicable in the real world.\\n\\n3) About the action space, based on the constraints on the attacker's capability, the set of possible attacks will be significantly smaller than O(N^2 delta), perhaps O(N^delta).\\n\\n4) Changing 'treat the graph structure as a hyperparameter' to 'treat the graph structure tensor/matrix as a hyperparameter' would be easier to understand. And is the graph structure tensor of shape (NxN)? \\n\\n5) What's the relationship between T and S? Is the T in theta_T the same as the S in G_S?\\n\\n6) The title of section 4.2 is misleading. 
It would be better to name it 'Greedy Computing Meta-Gradients'. \\n\\n7) The definition S(u,v)=delta . (-2.a_uv+1) lacks intuition; in particular, '(-2.a_uv+1)' is unintuitive. Please also change 'pair (i,j), we define S(u,v)' -> 'pair (u,v)'.\\n\\n8) In the experiments, what's the definition of meta-train? l_atk=-l_train?\\n\\n9) In the experiments, it would be interesting to study the impact of the unnoticeability constraints on the model results.\\n\\n10) In figure 1, it is not surprising that when increasing the number of edges changed, the misclassification rates will increase. A graph NN that considers graph features more than the structure would be expected to show the impact of the graph structure change.\\n\\nI have read the authors' detailed rebuttal. Thanks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting idea and good results\", \"review\": \"This paper studied data poisoning attacking for graph neural networks. The authors proposed treating graph structures as hyperparameters and leveraged recent progress on meta-learning for optimizing the adversarial attacks. Different from some recent work on adversarial attacks for graph neural networks (Zuigner et al. 2018; Dai et al. 2018), which focus on attacking specific nodes, this paper focuses on attacking the overall performance of graph neural networks. Experiments on a few data sets prove the effectiveness of the proposed approach.\", \"strength\": [\"the studied problem is very important and recently attracting increasing attention\", \"Experiments show that the proposed method is effective.\"], \"weakness\": [\"the complexity of the proposed method seems to be very high\", \"the data sets used in the experiments are too small\"], \"details\": \"-- the complexity of the proposed method seems to be very high. The authors should explicitly discuss the complexity of the proposed method. \\n-- the data sets in the experiments are too small. Some large data sets would be much more compelling.\\n-- Are the adversarial examples identified by the proposed method transferrable to other graph embedding algorithms (e.g., the unsupervised node embedding methods, DeepWalk, LINE, and node2vec)?\\n-- I like Figure 3, though some concrete examples would be more intuitive.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Bkg3g2R9FX | Adaptive Gradient Methods with Dynamic Bound of Learning Rate | [
"Liangchen Luo",
"Yuanhao Xiong",
"Yan Liu",
"Xu Sun"
] | Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound . | [
"Optimization",
"SGD",
"Adam",
"Generalization"
] | https://openreview.net/pdf?id=Bkg3g2R9FX | https://openreview.net/forum?id=Bkg3g2R9FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkx1d-PsdS",
"BkejfJ71l4",
"Bke-32cM1N",
"BylLNcbdAX",
"S1eizWWuCQ",
"ryl12OxuAX",
"S1lEtdgdAQ",
"BJgABOx_C7",
"SJef8P3thQ",
"BkeFPweF3m",
"rkg0-SM-3m",
"rkg7oagJn7",
"Skx8lvomim",
"H1glD5ZAcm",
"Byg-RFWA5X",
"Hklkd3A25X",
"SklPiVVncm",
"r1lJqtvjcm",
"HkxJNxD55X",
"S1goTjL5qX"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1570627943130,
1544658707326,
1543838888992,
1543146029557,
1543143698653,
1543141543417,
1543141499582,
1543141446100,
1541158730254,
1541109601368,
1540592901595,
1540455835085,
1539712750503,
1539344984309,
1539344840881,
1539267686636,
1539224734714,
1539172742613,
1539104807398,
1539103683407
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1111/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1111/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1111/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1111/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1111/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"~Hyesst_Wu1"
],
[
"ICLR.cc/2019/Conference/Paper1111/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"Assuming epsilon = 0, the effective step taken in parameter space at timestep t is \\u2206t = \\u03b1_t \\u00b7 m_t/\\u221av_t.\\n\\\"The effective stepsize has\", \"two_upper_bounds\": \"|\\u2206t| \\u2264 \\u03b1 \\u00b7 (1 \\u2212 \\u03b21)/\\u221a(1 \\u2212 \\u03b22) in the case (1 \\u2212 \\u03b21) > \\u221a(1 \\u2212 \\u03b22), and \\n|\\u2206t| \\u2264 \\u03b1 otherwise\\\"\\nHow do you prove this? Since \\u03b1_t = \\u03b1 \\u00b7 \\u221a(1 \\u2212 \\u03b22^t)/(1 \\u2212 \\u03b21^t) can go to positive infinity too.\", \"title\": \"Proof of bounds on the effective stepsize\"}",
"{\"metareview\": \"The paper was found to be well-written and to convey an interesting idea. However, the AC notes the large body of clarifications provided to the reviewers (regarding the theory, experiments, and setting in general) that needs to be properly addressed in the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Summary review\"}",
"{\"title\": \"Comment\", \"comment\": \"I thank the authors for their response, and I keep my score.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your comments.\\n\\n>>> There is not much novelty in Theorems 1,2,3 since similar results already appeared in Reddi et al.\\n\\nWe argue that Reddi et al. (2018) did not prove that Adam has bad behavior \\u300cfor all initial learning rates\\u300d, and this condition is important for showing the necessity of our idea of restricting the actual learning rates. That\\u2019s why we complete the proof with this weaker assumption. We would not claim the theoretical analysis as our main contribution in this paper, but it is a necessary part that serves our actual main contribution: \\u300cproposing the idea of an optimization algorithm that can gradually transform from adaptive methods to SGD(M), combining both of their advantages\\u300d. All the other parts in the paper, including the preliminary empirical study, theoretical proofs, experiments, and further analysis, serve this main contribution.\\n\\n>>> Also, the theoretical part does not demonstrate the benefit of the clipping idea. Concretely, the regret bounds seem to be similar to the bounds of AMSBound. Ideally, I would like to see an analysis that discusses a situation where AdaGrad/AMSBound fail or perform really bad, yet the clipped versions do well.\\n\\nFirst, the names of our newly proposed methods are AdaBound and AMSBound. I guess you mean AMSGrad in your suggestion?\\nActually, it is easy to use a setting similar to that of Wilson et al. (2017) to show that AdaGrad/Adam achieve really bad performance while our methods do well. But we don\\u2019t think it is very meaningful since it is only a bunch of examples. As also mentioned by Reviewer 2, the average performance of the algorithms is what really matters. But due to its difficulty, most similar works on optimizers tend to use experiments to support their arguments and lack theoretical proofs for this part.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your comments!\\n\\nWe deeply agree that the average performance of different algorithms is very important in practice. But as also mentioned in the reply to the anonymous comments before (on 11.12), our understanding of the generalization behavior of deep neural networks is still very shallow at present. It is a big challenge to investigate it from theoretical aspects. Actually, the theoretical analysis of most recent related work is still under strong or particular assumptions. We believe that if one could conduct a convincing theoretical proof without strong assumptions, that work would be worth an individual publication.\\n\\nWe are conducting more experiments on larger datasets such as CIFAR-100 and on more tasks in other fields, and the results are very positive too. We will add the results and analysis in the final revision if there is space left in the paper.\\n\\nWe want to argue that the use of diag() is necessary since \\\\phi_t is a matrix rather than a vector. Also, $g$ is not a vector but $g_t$ is, and $g_{t,i}$ is a coordinate. \\nIt is true that the expression $x_i$ might be ambiguous without context: 1) $x$ is a vector and $x_i$ means the i-th coordinate of $x$, or 2) $x$ is not a vector and $x_i$ is a vector at time $i$. But since $x$ cannot both be and not be a vector at the same time, the meaning is clear in a specific context. This kind of notation is also used in many other works. We have re-checked the math expressions in our paper and think they are ok.\"}",
"{\"title\": \"about extra details & experiments\", \"comment\": \"[About details and extra experiments you asked for]\\n\\n>>> Am I correct in saying that with t=100 (i.e., the 100th iteration), the \\\\eta s constrain the learning rates to be in a tight bound around 0.1? If beta=0.9, then \\\\eta_l(1) = 0.1 - 0.1 / (0.1*100+1) = 0.091. After t=1000 iterations, \\\\eta_l becomes 0.099. Again, are the good results coincidental with the fact that SGD with learning rate 0.1 works well for this setup? In the scheme of the 200 epochs of training (equaling almost 100-150k iterations), if \\\\eta s are almost 0.099 / 0.10099, for over 99% of the training, we're only doing SGD with learning rate 0.1.\\n\\nActually, we used \\\\beta_1=0.99 in our experiments. Therefore \\\\eta_l comes to 0.091 at t=1000 rather than t=100, and it is about 10 epochs. Also, as mentioned above, we test larger \\\\beta and the performance are similar (see Figure 5).\\n\\n>>> Along the same lines, what learning rates on the grid were chosen for each of the problems?\", \"the_settings_are\": \"SGD(M): \\\\alpha=0.1 for MNIST/CIFAR10, \\\\alpha=10 for PTB; momentum=0.9\\nAdam, AMSBound: \\\\alpha=0.001, \\\\beta_1=0.99, \\\\beta_2=0.999\", \"adagrad\": \"\\\\alpha=0.01\\n\\nWe only provided the grid search sets of hyperparameters due to the page limit before. \\nWe will soon add a section in the appendix to illustrate the specific settings of hyperparameters for all the optimizers.\\n\\n>>> Does the setup still work if SGD needs a small step size and we still have \\\\eta converge to 1? A VGG-11 without batch normalization typically needs a smaller learning rate than usual; could you try the algorithms on that?\\n\\nYes. We add an experiment according to your suggestion (VGG-11 without batch normalization on CIFAR-10, using AdaBound/AMSBound, SGD, and other baselines). 
The best step size for SGD is 0.01, and AdaBound with \\\\alpha*=1 still has performance similar to the best-tuned SGD (see this anonymous link to the results: https://github.com/AgentAnonymous/X/blob/master/vgg_test.pdf).\\n\\n>>> Can the authors plot the evolution of learning rate of the algorithm over time? You could pick the min/median/max of the learning rates and plot them against epochs in the same way as accuracy. This would be a good meta-result to show how gradual the transition from Adam to SGD is.\\n\\nWe conducted an experiment as you suggested; the results are placed in Appendix H.\\nIn short, we can see that the learning rates increase rapidly in the early stage of training; then, after a few epochs, their max/median values gradually decrease over time and finally converge to the final step size. The increase at the beginning is due to the exponential moving average property of \\\\phi_t in Adam, while the gradual decrease indicates the transition from Adam to SGD.\"}",
"{\"title\": \"about contributions\", \"comment\": \"[About contributions]\\n\\n>>> Is it correct that a careful theoretical analysis of this framework is what stands as the authors' major contribution?\", \"we_want_to_clarify_that_our_main_contribution_is\": \"\\u300cproposing the idea of an optimization algorithm that can gradually transform from adaptive methods to SGD(M), combining both of their advantages\\u300d\\nAll the other parts in the paper, including the preliminary empirical study, theoretical proofs, experiments, and further analysis, serve the main contribution. Since Wilson et al. (2017), many researchers have been devoted to finding a way to train as fast as Adam and generalize as well as SGD. Many of them failed, and some present overly complicated algorithms.\\nThe purpose of this paper is to tell other researchers that such an interesting, simple and direct approach can achieve surprisingly good and robust performance. Note that \\u201cbound functions on learning rates\\u201d is only one particular way to conduct the \\u201cgradual transformation from Adam to SGD\\u201d. There might be other ways that work too, such as well-designed decay. We think that publishing now, with several baseline experiments and a basic theoretical proof, can stimulate other people's research and benefit the research community.\\n\\n>>> The core observation of extreme learning rates and the proposal of clipping the updates is not novel; \\n\\nWe are not the first to propose clipping of learning rates. But we would argue that no one has given a clear observation of the existence of extreme learning rates before. Wilson et al. (2017) first mentioned that extreme learning rates may cause bad performance, but it was just an assumption. Keskar & Socher (2017)\\u2019s preliminary experiment can be seen as indirect evidence. 
As far as we know, we are the first to directly show that both extremely large and extremely small learning rates exist in the final stage of training.\\n\\n>>> Keskar and Socher (which the authors cite for other claims) motivates their setup with the same idea (Section 2 of their paper). I feel that the authors should clarify what they are proposing as novel. \\n\\nWe will clarify that the idea of learning rate clipping has been proposed by Keskar & Socher (2017). \\nEven if they had not mentioned the idea of clipping learning rates, we wouldn\\u2019t claim it as our new contribution. Actually, clipping is really common in practice and in many frameworks\\u2019 APIs; the difference is that it is usually applied to gradients. We have also mentioned the above facts in Section 4.\\nAlso, we want to clarify again that our main contribution is the idea of a \\u201cgradual transformation from Adam to SGD\\u201d, and clipping is just one particular way of implementing it.\\nIt should also be mentioned that this part of Keskar & Socher (2017) is preliminary. They did not give a thorough discussion of clipping or extreme learning rates.\"}",
"{\"title\": \"Rebuttal: about bound functions\", \"comment\": \"Thanks for your questions and suggestions. We separate the questions into 3 parts (bound functions, contributions, and extra details & experiments) and post the responses below. We hope they can address your questions.\\n\\n[About bound functions]\", \"we_want_to_clarify_the_following_facts_about_the_bound_function\": \"1. The convergence speed (indicated by \\\\beta in current settings) and convergence target (indicated by \\\\alpha*) exert minor impacts on the performance of AdaBound.\\n2. In other words, AdaBound is not sensitive to the form of bound functions, and therefore we don\\u2019t have to waste much time fine-tuning the hyperparameters, especially compared with SGD(M).\\n3. Moreover, even not carefully fine-tuned AdaBound can beat SGD(M) with the optimal step size.\\n\\nWe conducted the empirical study in Appendix G in order to illustrate the above points. But as you have raised a few questions about the bound function, it seems that our original experiments are not enough. We expand the experiments in an attempt to give more evidence to support the above statements and hope this can answer some questions you mentioned.\\n\\n>>> I'm somewhat confused by the formulation of \\\\eta_u and \\\\eta_l. The way it is set up (end of Section 4), the final learning rate for the algorithm converges to 0.1 as t goes to infinity. In the Appendix, the authors show results also with final convergence to 1. Are the results coincidental with the fact that SGD works well with those learning rates? It is a bit odd that we indirectly encode the final learning rate of the algorithm into the \\\\eta s.\\n\\n(Note: SGD and SGDM have similar performance in our experiments. Here we directly use SGD to generally indicate SGD or SGDM)\\nIt is not a coincidence. SGD is very sensitive to the step size. 
\\alpha=0.1 is the best setting, and other settings have large performance gaps compared with the optimal one (see Figure 6a). But AdaBound has stable performance across different final step sizes (see Figure 6b). Moreover, for all the step sizes, AdaBound outperforms SGD (see Figure 7).\\n\\n>>> Can you try experimenting with/suggesting trajectories for \\eta which converge to the SGD stepsize more slowly?\\n\\nWe further tested \\beta in {1-1/10, 1-1/50, 1-1/100, 1-1/500, 1-1/1000}, which translates to slower convergence speeds of the bound functions. Their performances are really close (see Figure 5).\\n\\n>>> Similarly, can you suggest ways to automate the choice of \\eta^\\star? It seems that the 0.1 in the numerator is an additional hyperparameter that still might need tuning?\\n\\nIn the current form of the bound functions, yes, it is an additional hyperparameter. But as illustrated by the experiments, AdaBound is very robust and not sensitive to hyperparameters (we can use any \\alpha from 0.001 to 1 and still get stable and good performance). I think in practice we can more or less treat it as needing no tuning, and 0.1 can be a default setting.\"}",
"{\"title\": \"Review of \\\"Adaptive Gradient Methods with Dynamic Bound of Learning Rate\\\"\", \"review\": \"This paper presents new variants of ADAM and AMSGrad that bound the learning rates above and below to avoid potential negative effects on generalization of excessively large and small learning rates; and the paper demonstrates the effectiveness on a few commonly used machine learning test cases. The paper also presents detailed proofs that there exists a convex optimization problem for which the ADAM regret does not converge to zero.\\n\\nThis paper is very well written and easy to read. For that I thank the authors for their hard work. I also believe that their approach to bounding is well structured in that it converges to SGD in the infinite limit and allows the algorithm to get the best of both worlds - faster convergence and better generalization. The authors' experimental results support the value of their proposed algorithms. In sum, this is an important result that I believe will be of interest to a wide audience at ICLR.\\n\\nThe proofs in the paper, although impressive, are not very compelling for the point that the authors want to get across. The fact that such cases of poor performance can exist says nothing about the average performance of the algorithms, which in practice is what really matters.\\n\\nThe paper could be improved by including more and larger data sets. For example, the authors ran on CIFAR-10. They could have done CIFAR-100, for example, to get more believable results.\\n\\nThe authors add a useful section on notation, but go on to abuse it a bit. This could be improved. Specifically, they use an \\\"i\\\" subscript to indicate the i-th coordinate of a vector and then in Table 1 sum over t using i as a subscript. Also, superscripts on vectors are said to be element-wise powers. If so, why is a diag() operation required? 
Either make the outer product explicit, or get rid of the diag().\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": [\"The authors introduce AdaBound, a method that starts off as Adam but eventually transitions to SGD. The motivation is to benefit from the rapid training process of Adam in the beginning and the improved convergence of SGD at the end. The authors do so by clipping the weight updates of Adam in a dynamic way. They show numerical results and theoretical guarantees. The numerical results are presented on CIFAR-10 and PTB, while the theoretical results are shown under assumptions similar to AMSGrad's (& using similar proof strategies). As it stands, I have some foundational concerns about the paper and believe that it needs significant improvement before it can be published. I request the authors to please let me know if I misunderstood any aspect of the algorithm; I will adjust my rating promptly. I detail my key criticisms below:\", \"I'm somewhat confused by the formulation of \\eta_u and \\eta_l. The way it is set up (end of Section 4), the final learning rate for the algorithm converges to 0.1 as t goes to infinity. In the Appendix, the authors show results also with final convergence to 1. Are the results coincidental with the fact that SGD works well with those learning rates? It is a bit odd that we indirectly encode the final learning rate of the algorithm into the \\eta s.\", \"Am I correct in saying that with t=100 (i.e., the 100th iteration), the \\eta s constrain the learning rates to be in a tight bound around 0.1? If beta=0.9, then \\eta_l(100) = 0.1 - 0.1 / (0.1*100+1) = 0.091. After t=1000 iterations, \\eta_l becomes 0.099. Again, are the good results coincidental with the fact that SGD with learning rate 0.1 works well for this setup? 
In the scheme of the 200 epochs of training (equaling almost 100-150k iterations), if the \\eta s are almost 0.099 / 0.10099, then for over 99% of the training we're only doing SGD with learning rate 0.1.\", \"Along the same lines, what learning rates on the grid were chosen for each of the problems? Does the setup still work if SGD needs a small step size and we still have \\eta converge to 1? A VGG-11 without batch normalization typically needs a smaller learning rate than usual; could you try the algorithms on that?\", \"Can the authors plot the evolution of the learning rate of the algorithm over time? You could pick the min/median/max of the learning rates and plot them against epochs in the same way as accuracy. This would be a good meta-result to show how gradual the transition from Adam to SGD is.\", \"The core observation of extreme learning rates and the proposal of clipping the updates is not novel; Keskar and Socher (which the authors cite for other claims) motivates their setup with the same idea (Section 2 of their paper). I feel that the authors should clarify what they are proposing as novel. Is it correct that a careful theoretical analysis of this framework is what stands as the authors' major contribution?\", \"Can you try experimenting with/suggesting trajectories for \\eta which converge to the SGD stepsize more slowly?\", \"Similarly, can you suggest ways to automate the choice of \\eta^\\star? It seems that the 0.1 in the numerator is an additional hyperparameter that still might need tuning?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
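The reviewer's back-of-the-envelope bound values are easy to check numerically. Below is a minimal illustrative sketch, assuming the corrected bound-function forms discussed elsewhere in this thread, eta_l(t) = alpha - alpha/((1-beta)t + 1) and eta_u(t) = alpha + alpha/((1-beta)t) with alpha = 0.1; this is a reconstruction from the discussion, not the authors' code.

```python
# Hypothetical sketch of AdaBound's dynamic bound functions; the exact
# expressions are assumptions reconstructed from this thread.
def eta_l(t, beta=0.9, alpha=0.1):
    """Lower bound on the step size; rises toward alpha as t -> infinity."""
    return alpha - alpha / ((1 - beta) * t + 1)

def eta_u(t, beta=0.9, alpha=0.1):
    """Upper bound on the step size; falls toward alpha as t -> infinity."""
    return alpha + alpha / ((1 - beta) * t)

# Reproduces the reviewer's arithmetic: with beta = 0.9 the band around
# 0.1 is already tight after a few hundred iterations.
for t in (1, 100, 1000, 100000):
    print(t, round(eta_l(t), 5), round(eta_u(t), 5))
```

With beta = 0.9 this gives eta_l(100) ≈ 0.09091 and eta_l(1000) ≈ 0.09901, matching the 0.091 and 0.099 values computed in the review.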
"{\"title\": \"Nice experiments, but theory does not reflect any benefit\", \"review\": \"*Summary :\\nThe paper explores variants of popular adaptive optimization methods.\\nThe idea is to clip the learning rates from above and below in order to prevent too aggressive/conservative updates.\\nThe authors provide a regret bound for this algorithm in the online convex setting and perform several illustrative experiments.\\n\\n\\n*Significance:\\n-There is not much novelty in Theorems 1,2,3 since similar results already appeared in Reddi et al.\\n\\n-Also, the theoretical part does not demonstrate the benefit of the clipping idea. Concretely, the regret bounds seem to be similar to the bounds of AMSGrad.\\nIdeally, I would like to see an analysis that discusses a situation where AdaGrad/AMSGrad fail or perform really badly, yet the clipped versions do well.\\n\\n-The experimental part on the other hand is impressive, and the results illustrate the usefulness of the clipping idea.\\n\\n*Clarity:\\nThe idea and motivation are very clear and so are the experiments.\\n\\n\\n*Presentation:\\nThe presentation is mostly good.\", \"summary_of_review\": \"The paper suggests a simple idea to avoid extreme behaviour of the learning rate in standard adaptive methods. The theory is not so satisfying, since it does not illustrate the benefit of the method over standard adaptive methods. The experiments are more thorough and illustrate the applicability of the method.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some answers and responses\", \"comment\": \"Sorry for the late response; it's been a bit busy in the past few days. Thanks for your comments, and we present our responses below.\\n\\n1. We didn't pay much attention to the smoothness of the learning curve before, and our analysis mainly focuses on training speed w.r.t. the advantage of adaptive methods. But, actually, we also mentioned that the learning curve of our framework is smoother than that of SGD in the experiment on PTB (in para. 1, section 5.4, page 8). We agree with your opinion that the smoothness is also important. We are pleased to add some more discussion on this point in the next revision.\\n\\n2. That's interesting. It is a good engineering question which particular bound function is best (simplicity, efficiency, effectiveness etc.) in production. As for this paper, it is more about showing the potential of a novel framework and stimulating others' research. It would be a direction of future work to investigate whether there is a simpler bound function that guarantees the performance.\\n\\n3. I am afraid the method in Keskar et al. (2017) does not seem to be directly applicable to our algorithm. Introducing automation is meaningful, but it is not a very easy task, IMO. We will think about this point carefully in future work.\"}",
"{\"comment\": \"Dear authors,\\n\\nInteresting work! My coauthors and I have suffered from the poor generalization of Adam in many of our production systems for a long time. We have to use SGD for better performance, but I do HATE fine-tuning the hyperparameters of SGD again and again!\\n\\nI noticed that there have been many newly proposed optimizers claiming they are better than Adam. I once tried some of them and was disappointed to find that they bring no improvement, only more hyperparameters! I suspect that the more and more complicated design of optimizers is not the right way, and that there must be a simple way to build an optimizer as fast as Adam while as good as SGD.\\n\\nThat\\u2019s why this paper really attracts me. The idea of gradually transforming Adam into SGD is really simple but looks intuitive and reasonable. It makes sense to me. The algorithm is also well-presented. I am surprised that you also provide convincing proofs about the algorithm --- I had thought you would just construct some empirical studies w/o theoretical analysis.\\n\\nI have a few questions about the paper and personal thoughts on future work. I hope they will be useful to the authors. Feel free to leave them as is if they are not correct. :D\\n\\n- Besides rapid training, the smoothness of the learning curve is another advantage of adaptive methods. Personally, I think it might be more important. When trying to train new models, we often do not know in advance whether they can converge. A common approach is training a few epochs and making a preliminary decision about what to do next based on the trend of the learning curve in the early stage. Sharp fluctuation of the loss is common when using SGD, which makes it hard to estimate the trend of the learning curve quickly. Is your framework able to keep this strength of Adam? What is your take on this?\\n\\n- I tried AdaBound on CIFAR-10 by myself. 
It is interesting that I used simpler bound functions (linear functions and a piecewise constant function) and still got very good performance. As you also mentioned that the convergence speed of the bound functions is not very important, I suggest you may choose simpler ones (Occam\\u2019s Razor).\\n\\n- I am wondering whether we could use an approach like that in Keskar et al. (2017) to determine the final step size automatically. I didn\\u2019t think through carefully whether it is possible. What are your opinions?\\n\\nThanks in advance for your time and I hope this paper gets accepted!\", \"title\": \"A few comments\"}",
"{\"title\": \"Part2\", \"comment\": \">>> You mention \\\"Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD\\\". Given that the paper only contains a few empirical results (on some important and common tasks) and no theoretical proof in that respect, I find it to be a misleading statement.\\n\\nHonestly, maybe I don't exactly get your point. We said \\\"experimental results show\\\", not \\\"theoretical proofs show\\\" or something like that. If you had said \\\"your experiments are not enough\\\", I could understand, and we may add some additional experiments on other tasks if reasonable. But we don't think \\\"no theoretical proof in that respect\\\" is a valid reason to call the statement misleading or overclaiming.\\n\\nIn addition, it should be mentioned that our understanding of the generalization behavior of deep neural networks is still very shallow at present. Investigating it from a theoretical perspective is a big challenge. A summary of recent achievements can be found here (http://ruder.io/deep-learning-optimization-2017/), and we can see that the theoretical analysis there still relies on strong or particular assumptions. That's why most similar works on optimizers tend to use experiments to support their arguments.\\n\\nAs for the richness of our experiments, the tasks in our paper include several popular ones in the CV and NLP areas; the models include a simple perceptron, deep CNN models and RNN models. We give a brief comparison to some recent works for a fair judgment. \\n\\n- [1] does not propose novel algorithms or frameworks as we do. Their main contribution is empirically showing that the minima found by adaptive learning rate methods perform generally worse compared to those found by SGD, and providing some possible causes. The richness of our experiments is similar to theirs. 
Personally, the amount of experiments in this work is at an average level among similar works, as far as I know.\\n- The experiments in [2] are very limited, as the authors also state the experiments are \\\"preliminary\\\".\\n- [3] conducts more experiments than other similar works. But there is no theoretical analysis, which is important in such kinds of works.\\n- [4] (posted on arXiv and also a submission to ICLR19) only conducts experiments on image classification tasks. As it is known that the gap between Adam and SGD on this task is notable, while on some NLP tasks like machine translation well-tuned Adam may even outperform SGD ([6]), it is not enough to test only on this single task.\\n- The experiments in [5] (posted on arXiv and also a submission to ICLR19) are even more limited than those of [2], only a toy model on MNIST.\\n\\nTherefore, we argue that our experiments have already shown the potential of our proposed framework. Future papers by other researchers are a more appropriate home for additional experiments on other tasks. We think publishing now with the set of baselines we have already included, so as to stimulate others' research, is more effective than delaying publication and presentation of this work.\\n\\n-----\\n[1] Wilson, A.C., Roelofs, R., Stern, M., Srebro, N., & Recht, B. (2017). The Marginal Value of Adaptive Gradient Methods in Machine Learning. NIPS.\\n[2] Sashank J.R., Satyen K., & Sanjiv K. (2018). On the Convergence of Adam and Beyond. ICLR.\\n[3] Keskar, N.S., & Socher, R. (2017). Improving Generalization Performance by Switching from Adam to SGD. CoRR, abs/1712.07628.\\n[4] Chen, J., & Gu, Q. (2018). Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks. CoRR, abs/1806.06763.\\n[5] Chen, X., Liu, S., Sun, R., & Hong, M. (2018). On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization. CoRR, abs/1808.02941.\\n[6] Denkowski, M.J., & Neubig, G. (2017). 
Stronger Baselines for Trustable Results in Neural Machine Translation. NMT@ACL.\"}",
"{\"title\": \"Part1 of Clarifications\", \"comment\": \"Thanks for your interest.\\n\\nI respond point by point below.\\n\\n>>> \\\"they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates.\\\" As far as I am aware the issue with the convergence analysis of exponentiated squared gradient averaging algorithms like ADAM and RMSPROP does not extend to ADAGRAD. So, ADAGRAD is indeed guaranteed to converge given the right assumptions.\\n\\nA more precise expression of that sentence would be \\\"Adam, RMSprop, AdaGrad, and other adaptive methods are observed to generalize poorly ..., and some of them (i.e. Adam) even fail to converge ...\\\", which is summarized from [1][2][3] and Section 3 in our paper. We didn't notice that the original sentence might be misunderstood; we will use a more precise way to summarize the phenomenon. However, although AdaGrad is theoretically guaranteed to converge, it is well accepted that in practice its convergence is too slow at the end of training due to its accumulation of second-order momentum. As we usually use a limited number of epochs or limited time in a training job, it may fail to achieve the \\\"theoretical convergence\\\". Therefore, maybe we can say \\\"may be hard to converge\\\" to summarize. :D\\n\\n>>> In the rest of the paper, the experiments and arguments mainly concern ADAM and not adaptive methods in general. So I think the distinction between adaptive methods in general and adaptive methods like ADAM and RMSPROP with respect to convergence guarantees should be made clearer.\\n\\nThe main purpose of this paper is to introduce a novel framework that can combine the advantages of adaptive methods and SGD(M). The framework applies to Adam as well as AdaGrad and other adaptive methods. As mentioned above, the weaknesses of adaptive methods are shared among them, and combining with SGD can help overcome these problems. 
Therefore, we don't think it is necessary to distinguish particular adaptive methods everywhere in the paper. We run experiments mainly on Adam because of its popularity. According to your comments, we will consider adding more experiments on other adaptive methods like AdaGrad.\\n\\n>>> I am not sure I understand but could you please clarify how AMSGRAD helps in the generalization of ADAM. From my understanding, it only solved the convergence issue by ensuring that the problematic quantity in the proof is PSD.\\n\\nI guess we understand \\\"generalization\\\" differently. If you regard \\\"generalization error\\\", in a narrow sense, as how large the gap between training and testing error is, then I agree that AMSGrad only solves the convergence issue. But broadly speaking, \\\"generalization error\\\" is a measure of how accurate a method is on unseen data (see https://en.wikipedia.org/wiki/Generalization_error). It depends not only on handling overfitting but also on the convergence results on training data. Therefore, attempts at solving the convergence issue can also help generalization in a broad sense.\\n\\n>>> The experiments in Wilson et al. (2017) give proper evidence of the gap between SGD and adaptive methods in overparameterized settings. To show that this method overcomes it, I think you need a stronger argument than what you have shown.\\n\\nWe would first argue that the experiments in Wilson et al. (2017), including a few common tasks in CV and NLP, are not much different from ours and those in other recent similar works. Their artifactual example before the experiment section does use an overparameterized setting, but they never claim it is the main cause of poor generalization. It is a necessary but not sufficient condition. Indeed, the poor generalization is mainly caused by the property of the carefully constructed particular task. In other words, it is highly problem-dependent. The actual statement of Wilson et al. 
(2017) is \\n\\n** When a problem has multiple global minima, different algorithms can find entirely different solutions when initialized from the same point. In addition, we construct an example where adaptive gradient methods find a solution which has worse out-of-sample error than SGD. **\\n\\nTherefore, no one can affirm that there are no examples where adaptive methods find a better solution than SGD. The above are just examples, and there are infinitely many of them. We don't think it is meaningful to show that our framework can perform well on that particular one, even though it is not hard.\"}",
"{\"comment\": \"Hi,\\n\\nI have three main questions for you. It would be great if you could help clarify them.\\n\\n1. You mention the following about ADAGRAD along with ADAM and RMSPROP - \\\"they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates.\\\". As far as I am aware, the issue with the convergence analysis of exponentiated squared gradient averaging algorithms like ADAM and RMSPROP does not extend to ADAGRAD. So, ADAGRAD is indeed guaranteed to converge given the right assumptions. In the rest of the paper, the experiments and arguments mainly concern ADAM and not adaptive methods in general. So I think the distinction between adaptive methods in general and adaptive methods like ADAM and RMSPROP with respect to convergence guarantees should be made clearer.\\n\\n2. I am not sure I understand, but could you please clarify how AMSGRAD helps in the generalization of ADAM. From my understanding, it only solved the convergence issue by ensuring that the problematic quantity in the proof is PSD.\\n\\n3. You mention \\\"Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD\\\". Given that the paper only contains a few empirical results (on some important and common tasks) and no theoretical proof in that respect, I find it to be a misleading statement. The experiments in Wilson et al. (2017) give proper evidence of the gap between SGD and adaptive methods in overparameterized settings. To show that this method overcomes it, I think you need a stronger argument than what you have shown.\", \"title\": \"Clarification about ADAGRAD and generalization of your method\"}",
"{\"title\": \"Clarification\", \"comment\": \"Hi Hyesst,\\n\\nThanks for your interests. \\n\\n1. You are absolutely right! Thanks for your correction! It should be $\\\\beta_1$ in the upper bound function at the end of Section 4.\\n\\n2. Yes, we used DenseNet-121. We will add this information in the next revision.\\n\\nThank you very much for your comments and suggestions.\"}",
"{\"comment\": \"Hi, thanks for the nice paper. The way of combining adaptive methods and SGD proposed in the paper is really interesting, but I found some small typos or mistakes. They are all minor and do not much affect the understanding of the paper, but I think a clarification on them would be helpful.\\n\\nFirst, the upper bound function at the end of Section 4 and Appendix G does not converge to 0.1. I believe it is a typo: there is a redundant \\\"1\\\" in the denominator, and the correct expression should be $0.1 + \\\\frac{0.1}{(1-\\\\beta)t}$. Also, I guess you missed the subscripts of $\\\\beta$ in the functions in Section 4. Maybe it should be $\\\\beta_1$ or $\\\\beta_2$, I guess.\\n\\nSecond, how many layers do you use in DenseNet? You provide the source code you used for DenseNet, and it is DenseNet-121 in the code. However, I suggest mentioning the number of layers directly in the paper. It is an important hyperparameter of deep CNN networks.\", \"title\": \"Some questions\"}",
"{\"title\": \"Thank you for your interest\", \"comment\": \"Thank you for your interest!\\n\\nHonestly, the code is a little bit messy currently. We are cleaning up the code for release these days.\\nIf you can't wait to have a try, it is easy to implement the algorithm by making some minor changes to the optimizers in PyTorch. Take AdaBound/AMSBound as an example: we just modify the source code of Adam (https://github.com/pytorch/pytorch/blob/master/torch/optim/adam.py). Specifically, we use the torch.clamp(x, l, r) function, which constrains x between l and r element-wise, to perform the clip operation mentioned in the paper. You can also make similar changes to other optimizers such as AdaDelta and RMSprop.\\n\\nThe code for the experiments in the paper, as mentioned in the footnote on page 6, is obtained from https://github.com/kuangliu/pytorch-cifar and https://github.com/salesforce/awd-lstm-lm.\\n\\nWe would be happy if you could share your results on your own research using our methods.\"}",
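For readers who want a feel for the recipe described above before the official code is released, here is a rough single-parameter sketch in plain Python (no PyTorch). It is a hypothetical illustration, not the authors' implementation: the `clamp` helper mimics `torch.clamp`, and the particular bound functions used here are illustrative assumptions.

```python
import math

def clamp(x, lo, hi):
    # scalar analogue of torch.clamp(x, lo, hi)
    return max(lo, min(x, hi))

def adabound_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, final_lr=0.1):
    """One clipped-Adam update on a single scalar parameter.

    Follows the recipe in the comment above: compute the usual Adam step
    size, then clamp it between dynamic lower/upper bounds that both
    converge to `final_lr`. The bound expressions below are illustrative
    assumptions, not necessarily the paper's exact choices.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    lo = final_lr * (1 - 1 / ((1 - beta2) * t + 1))  # rises toward final_lr
    hi = final_lr * (1 + 1 / ((1 - beta2) * t))      # falls toward final_lr
    step = clamp(lr / (math.sqrt(v_hat) + eps), lo, hi)
    return theta - step * m_hat, m, v

# Minimize f(theta) = theta^2 for a few steps.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    theta, m, v = adabound_step(theta, 2.0 * theta, m, v, t)
```

In real PyTorch code the same clamp would be applied element-wise to the per-parameter step-size tensor inside a modified `Adam.step()`, as the authors suggest above.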
"{\"comment\": \"Hi! I am interested in the algorithm you proposed and would like to try it in my research. Could you provide an implementation of the algorithm? Or, if that is not convenient during the review period, could you give brief instructions on how to implement it?\\nGood luck, and I hope your paper gets accepted. :-)\", \"title\": \"About the code\"}",
]
} |
|
HylsgnCcFQ | Dynamic Graph Representation Learning via Self-Attention Networks | [
"Aravind Sankar",
"Yanhong Wu",
"Liang Gou",
"Wei Zhang",
"Hao Yang"
] | Learning latent representations of nodes in graphs is an important and ubiquitous task with widespread applications such as link prediction, node classification, and graph visualization. Previous methods on graph representation learning mainly focus on static graphs, however, many real-world graphs are dynamic and evolve over time. In this paper, we present Dynamic Self-Attention Network (DySAT), a novel neural architecture that operates on dynamic graphs and learns node representations that capture both structural properties and temporal evolutionary patterns. Specifically, DySAT computes node representations by jointly employing self-attention layers along two dimensions: structural neighborhood and temporal dynamics. We conduct link prediction experiments on two classes of graphs: communication networks and bipartite rating networks. Our experimental results show that DySAT has a significant performance gain over several different state-of-the-art graph embedding baselines. | [
"Graph Representation Learning",
"Dynamic Graphs",
"Attention",
"Self-Attention",
"Deep Learning"
] | https://openreview.net/pdf?id=HylsgnCcFQ | https://openreview.net/forum?id=HylsgnCcFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByxtAy0gxV",
"rylCJmlmkV",
"BkgzUr-9CQ",
"rkgbrNZ5R7",
"rylPJV-cRX",
"Hyx3dG-cRm",
"Skg4XcgqC7",
"S1eSaFecR7",
"SkgpdOcs27",
"S1g39zdt3m",
"Bkg9hJLDnQ",
"S1eLDjjAtX",
"Skeoa8zRFX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544769489269,
1543860965591,
1543275850282,
1543275577362,
1543275486526,
1543275124344,
1543272987919,
1543272892521,
1541281908748,
1541141140482,
1541001138342,
1538337630510,
1538299586695
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1110/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1110/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1110/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1110/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1110/Authors"
],
[
"~Michael_Bronstein1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a self-attention based approach for learning representations for the vertices of a dynamic graph, where the topology of the edges may change. The attention focuses on representing the interaction of vertices that have connections. Experimental results for the link prediction task on multiple datasets demonstrate the benefits of the approach. The idea of attention or its computation is not novel; however, its application to estimating embeddings for dynamic graph vertices is new.\\nThe original version of the paper did not have strong baselines, as noted by multiple reviewers, but the paper was revised during the review period. However, some of these suggestions, for example experiments with larger graph sizes and comparisons to other related work (i.e., similar work on static graphs), are left as future work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Novel application of self attention for estimating dynamic graph embeddings\"}",
"{\"title\": \"Feedback after revisions\", \"comment\": \"Dear Reviewers and ACs,\\n\\nWe thank you once again for the time and effort to review our paper, and appreciate the valuable questions and suggestions. We have made several improvements to our paper, and hope that they sufficiently address your key concerns. We would greatly value any additional comments on the revised paper, for further discussion and improvement.\"}",
"{\"title\": \"Summary of revisions\", \"comment\": [\"We hope our revisions to the paper have adequately addressed the comments of the reviewers, and other interested anonymous researchers. We would like to thank everyone for their insightful comments on our paper and sincerely believe that it has helped improve the overall quality and contribution.\", \"We have added new experiments on dynamic new link prediction, to compare different graph representation learning methods on link prediction focused on (a) new links, and (b) previously unseen new nodes. Our results (summarized in Appendices B and D) indicate similar performance improvements for DySAT over existing methods.\", \"We have conducted additional experiments to evaluate DySAT on multi-step link prediction/forecasting (Appendix C) where the node embeddings trained until G_t, are used to predict the links at future snapshot G_{t+n}. DySAT achieves significant improvements over all baselines and maintains a highly stable link prediction performance over future time steps.\", \"In all of our experimental results, we additionally compare against two static graph embedding baselines, GCN and GAT trained for link prediction, denoted by GCN-AE and GAT-AE respectively. As expected, their performance is typically close to the corresponding GraphSAGE variants.\", \"We have added an experiment in Appendix A.2 to visualize the distribution of attention weights learned by the temporal attention layers, over multiple time steps. Our results indicate a mild bias towards recent snapshots while exhibiting significant variability across different nodes in the graph.\", \"In Section 2, we have added appropriate references to discuss relevant related work on dynamic attributed graphs and continuous-time dynamic graphs.\", \"In section 5.4, we have added details on the running time of DySAT, comparing the relative costs of structural and temporal attention.\"]}",
"{\"title\": \"Reply to AnonReviewer2 (Part 2)\", \"comment\": \"\", \"q2\": \"There are actually quite a number of works on network embedding for dynamic graphs, including [2-4]. In particular, [2-3] support node attributes as well as the addition/deletion of nodes & edges. The authors should also compare against these works.\\n\\nWe thank you for the useful references on dynamic graphs. We are aware of these papers and agree that they are related to our work. Consequently, we have revised Section 2 (Related work) in the revised paper to reflect this. While we agree on the relevance of these works, we list below our reasons for not including experimental comparisons:\\n\\n>> Attributed Network Embedding for Learning in a Dynamic Environment. Li et al. In Proc. CIKM '17 [2]: \\nThis paper learns node embeddings in dynamic attributed graphs by initially training an offline model, followed by incremental updates over time. \\n\\nFirst, their key focus is online learning to improve efficiency over retraining static models, while our goal is to improve representation quality by capturing temporal evolutionary patterns in graph structure. This implies that their model can at best reach the performance of a statically re-trained method (as demonstrated in their paper), while we achieve significant improvements over static methods.\\n\\nSecond, their proposed model DANE is designed for attributed graphs with evolving structure and attributes, while our model is designed for dynamic non-attributed graphs. A direct application of DANE to non-attributed graphs would not be optimal, and indeed our initial experiments using DANE indicate significantly inferior performance even versus the simplest static embedding methods. Thus, to avoid an unfair comparison, we exclude DANE from our experimental results.\\n\\n>> Streaming Link Prediction on Dynamic Attributed Networks. Li et al. In Proc. 
WSDM '18 [3]: \\nThis paper focuses on link prediction in dynamic attributed graphs, but does not learn latent representations for nodes; it is hence orthogonal to our problem of dynamic graph representation learning. We mention a few other differences to support our choice:\\n\\nFirst, they once again focus on online learning to enable scaling to large-scale streaming networks. Their key objective is efficiency to support streaming graphs, while our goal is to learn latent node representations that capture evolutionary graph structures.\\n\\nSecond, although their method might be relevant for comparison with our incremental variant IncSAT, we were unable to obtain the implementation even after contacting the authors. Since their method falls under the category of streaming graphs with a focus on efficiency and scalability, we believe an experimental comparison is outside the scope of our work.\\n\\n>> Continuous-Time Dynamic Network Embeddings. Nguyen et. al. In Comp. Proc. WWW '18 [4]: \\nThis paper learns dynamic graph embeddings on temporal graphs with continuous time-stamped links. \\n\\nFirst, this paper operates under the assumption of continuous time-stamped links, which is often not realistic and is distinct from the most established problem setup of using dynamic graph snapshots at discrete time steps. Thus, a direct comparison may not be fair.\\n\\nSecond, this paper assumes a continuous-time dynamic graph with the restriction that each link occurs *only* once. This is an unrealistic assumption, which prevents applicability to most real-world dynamic graphs, including all of our considered datasets, where each link typically occurs in multiple snapshots. Thus, considering all these factors, we exclude a comparison with this method in our experiments.\", \"q3\": \"The concept of temporal attention is quite interesting. However, the authors do not provide more analysis on this. 
For one, I am interested to see how the temporal attention weights are distributed. Are they focused on the more recent snapshots? If so, can we simply retain the more relevant recent information and train a static network embedding approach? Or are the attention weights distributed differently?\\n\\nWe thank you for your question on the analysis of temporal attention weights. We have conducted a preliminary study to analyze the distribution of attention weights learned by the temporal attention layers, over multiple time steps as requested. The results are reported in Appendix A.2. We choose the Enron dataset for this experiment, and present a heatmap of the normalized attentional coefficients with mean and standard deviation, over multiple time steps. Figure 3 in Appendix A.2 illustrates a mild bias towards recent snapshots while we observe significant variance in the attentional coefficients across different nodes in the graph. This may indicate that the learned temporal attention weights capture historical context well, and vary to a considerable degree across different nodes in the graph.\"}",
"{\"title\": \"Reply to AnonReviewer2 (Part 1)\", \"comment\": \"We would like to thank you for the in-depth questions on our experimental results! Please refer to our global comment above for a list of all revisions to the paper -- we hope they have appropriately addressed your comments.\", \"we_respond_to_each_of_your_comments_below_as_follows\": \"\", \"q1\": \"The authors compared against several dynamic & static graph embedding approaches. If we disregard the proposed approach (DySAT), the static methods seem to match and even, in some cases, beat the dynamic approaches on the compared temporal graph datasets. The authors should compare against stronger baselines for static node embedding, particularly GAT which introduced the structural attention that DySAT uses to show that the modeling of temporal dependencies is necessary/useful. Please see [1] for an easy way to train GCN/GAT for link prediction.\\n\\nWe agree with you on the observation that static methods often match or beat existing dynamic embedding methods. Our initial experiments contain comparison to GraphSAGE - an unsupervised representation learning framework that supports various neighborhood aggregation functions, including GCN and GAT aggregators.\\n\\nWe thank you for the valuable pointer [1] which trains GCN/GAT as a graph autoencoder directly for link prediction. While conceptually similar to GCN/GAT variants of GraphSAGE, two key differences include (a) lack of neighborhood sampling, and (b) link prediction objective instead of random walk samples. To examine the effect of these differences on link prediction performance, we used the aforementioned implementation [1] to train autoencoder models of GCN and GAT, denoted by GCN-AE and GAT-AE in our experiments. Our experimental results have been updated to include these as static embedding methods for comparison. 
From the results, we find the performance of these methods to be mostly similar to their corresponding GraphSAGE variants, which is consistent with our expectation.\"}",
"{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"We would like to thank you for the insightful questions on our experiments! Please refer to our global comment above for a list of all revisions to the paper.\", \"we_respond_to_each_of_your_comments_below_as_follows\": \"\", \"q1\": \"What will happen if a never-seen node appears at t+1? The model design seems to be compatible with this case. The structural attention will still work, however, the temporal attention degenerates to a \\u201cstatic\\u201d result - all the attention focuses on the representation at t+1. I am curious about the model performance in this situation, since nodes may arise and vanish in real applications.\\n\\nWe agree with the apt observation on the capability of DySAT to handle new nodes, and have conducted additional experiments to examine model performance in such situations. To compute the representation of a new node v at time step t+1, the only available information is the local structure around v at t+1. Although temporal attention will focus on the latest representation due to the absence of history, it does not, however, degenerate to a \\u201cstatic\\u201d result. The temporal attention applied to the neighboring nodes of v (say N_v) indirectly contributes towards the embedding of v through backpropagation updates. Specifically, the structural embedding of v is computed as a function of N_v, whose structural embeddings receive backpropagation signals through the temporal attention layers (assuming they are not all new nodes). Thus, temporal attention indirectly affects the final embedding of node v.\\n\\nAs suggested, we empirically examine the performance of DySAT on \\u201cnew\\u201d previously unseen nodes. We report link prediction performance *only* on the new nodes at each time step using the same experimental setup, i.e., a test example (node pair) is reported for evaluation only if it contains at least one new node. 
Due to the significant variance in the number of new nodes at each step, we report the performance (AUC) at each time step, along with the corresponding number of new nodes. The results are available in Figure 5 of Appendix D. From Figure 5, we observe consistent gains of DySAT over other baselines, similar to our main results.\", \"q2\": \"What is the performance of the proposed algorithm for multi-step forecasting? In the experiments, the graph at t+1 is evaluated using the model trained up to graph_t. However, in real applications we may not have enough time to retrain the model at every time step. If we use the model trained up to graph_t to compute node embeddings for graph_{t+n}, what is the advantage of DySAT over static methods?\\n\\nWe agree with your view on the importance of not re-training the model at every time step in real-world applications. Multi-step forecasting is typically achieved either by (a) designing a model to predict multiple steps into the future, or by (b) recursively feeding next predictions as input for a desired number of future steps. In the case of dynamic graphs, events correspond to link occurrences, which renders forecasting different from conventional time-series, due to the occurrence of new nodes in each time step. Due to this key distinction, we list below two possibilities for forecasting in dynamic graphs: (a) Link prediction at future step t+n (on all nodes) by incrementally updating the model on new snapshots. (b) Link prediction at future step t+n (among nodes present at t) based on dynamic embeddings learned at t, followed by a downstream classifier to predict the links at t+n. 
Note that (b) does not involve model re-training or updating while (a) requires incremental model updates.\\n\\nIn our paper, we have examined (a) by proposing an incremental variant named IncSAT and report the performance in Table 5 of Appendix E.\\n\\nWe have now added an additional experiment in Appendix C to evaluate forecasting using strategy (b), which enables direct evaluation of DySAT on multi-step link prediction. Here, each method is trained for a fixed number of time steps, and the latest embeddings are used to predict links at multiple future steps. In each dataset, we choose the last 6 snapshots to evaluate multi-step link prediction where we create examples from the links in G_{t+n} and an equal number of randomly sampled pairs of unconnected nodes (non-links). Our experimental results (Figure 4) indicate significant improvements for DySAT over all baselines and a highly stable link prediction performance over future time steps.\", \"q3\": \"What is the running time for a single training process?\\n\\nWe have revised Section 5.4 to add the running time information. Specifically, we report the runtime of DySAT on a machine with Nvidia Tesla V100 GPU and 28 CPU cores. The runtime per mini-batch of DySAT with batch size of 256 nodes on the ML-10M dataset, is 0.72 seconds. In comparison, the model variant without the temporal attention (No Temporal) takes 0.51 seconds. Thus, structural attention constitutes a major fraction of the overall runtime, while the cost of temporal attention is relatively lower.\"}",
"{\"title\": \"Reply to AnonReviewer1 (Part 2)\", \"comment\": \"Q3: The selected graphs are very small compared to the dynamic graphs available here http://konect.uni-koblenz.de/networks/.\\n\\nWe thank the reviewer for the useful pointer to an extensive collection of real-world dynamic graphs.\\n\\nFirst, our experiments are conducted on real-world communication and rating networks with over 20,000 nodes and nearly 100,000 edges, which we believe constitute a diverse and representative sample of real-world dynamic graphs. Due to the lack of established benchmark datasets for dynamic graphs, we choose Enron, UCI, Yelp, and MovieLens, which have been widely used in the analysis of dynamic graphs.\\n\\nSecond, among the 7 compared baseline models for graph representation learning, 5 of them (GCN, GAT, node2vec, GraphSAGE, and DynGEM) use experimental datasets of comparable or smaller sizes. As mentioned in Section 6, our current implementation requires storing the sparse adjacency matrices of each snapshot in GPU memory, which limits scaling to graphs with millions of nodes. This is a common issue faced by many successful graph neural network architectures such as GCN, GAT, etc.\\n\\nSince DySAT builds on the same framework as GCN and GAT, we foresee a direct extension to incorporate efficient neighborhood sampling strategies (similar to GraphSAGE and others), thus scaling to larger dynamic graphs; we leave this as future work.\\n\\nFinally, we would like to point out that the sizes of the graphs used in our experiments are comparable to, and often larger than, the widely established benchmark citation networks Cora, Citeseer, and Pubmed used for node classification in static graphs.\"}",
"{\"title\": \"Reply to AnonReviewer1 (Part 1)\", \"comment\": \"We would like to thank you for your review with thoughtful questions on the experiments! Please refer to our global comment above for a list of all revisions to the paper.\", \"we_respond_to_each_of_your_comments_below_as_follows\": \"\", \"q1\": \"I have seen people used sets of edges and pairs of vertices without an edge for creating examples for link-prediction on a static graph, however, working with a real-world dynamic graph, you can compute the difference between G_t and G_{t+1} as the changes that occur in G_{t+1} 1) Why are you not trying to predict these changes?\\n\\nWe agree with the observation that the differences between graphs G_t and G_{t+1} can be computed in real-world dynamic graphs, and we have included additional experiments on new link prediction in Appendix B.\\n\\nFirst, we would like to clarify our view of dynamic link prediction based on our understanding of existing literature. The goal of dynamic link prediction is to predict the set of future links (or interactions) based on historically observed graph snapshots. In practice, this can be realized as predicting future user interactions in email communication networks or user-item ratings in recommender systems. In such scenarios, dynamic link prediction aims to predict the set of \\u201call\\u201d future links (at time step t+1) given history until time step t. To the best of our knowledge, this evaluation approach has been widely adopted in our surveyed literature on dynamic link prediction in Section 2 (Related Work) of the paper. Our compared dynamic graph embedding baselines DynamicTriad, DynGEM, and Know-Evolve also adopt the same convention, by evaluating the predicted links at (t+1) through classification and ranking metrics.\\n\\nOn the other hand, we do agree with your perspective that a dynamic graph representation should be evaluated in its ability to predict \\u201cnew\\u201d links. 
We have added an additional experiment (Appendix B) where evaluation examples comprise \\u201cnew\\u201d links at G_{t+1} (which have not been observed in G_t), and an equal number of randomly sampled pairs of unconnected nodes (non-links). We use a similar evaluation methodology to evaluate the performance of dynamic link prediction through AUC scores. This experiment specifically evaluates the ability of different methods to predict new links at (t+1).\\n\\nThough the overall prediction accuracies are generally lower in comparison to the previous experimental setting, we observe consistent gains of 3-5% over the best baseline, similar to our earlier results. The new results can be found in Table 4 of Appendix B, along with accompanying discussion. We hope that the addition of this experiment further showcases the capability of DySAT for dynamic link prediction.\", \"q2\": \"Why do you need examples from snapshot t+1 for training when you have already observed t snapshots of the graph?\\n\\nFirstly, the training step for all models only utilizes the snapshots up to t to compute the embeddings for all nodes, which can subsequently be used in different downstream tasks such as link prediction, classification, clustering, etc. No data from snapshot t+1 are utilized in training the node embedding model. Since we focus on dynamic link prediction as the primary task for evaluation, the goal is to predict future links (at time step t+1) given history until time step t. Thus, the evaluation set consists of examples from snapshot t+1.\\nSecondly, the examples from snapshot t+1 are *only* used to train a downstream logistic regression classifier for evaluating link prediction performance. Since the evaluation set comprises the links at t+1, we choose a small percentage of those examples (20%) for training, which is consistent with standard evaluation procedures. We follow the same setup for all the compared methods. 
In case of a different task such as multi-step forecasting to predict links at t+n, we similarly use 20% of examples at t+n for training the downstream classifier. We have revised the draft to make the experiment setup clearer.\\n\\nMeanwhile, we also describe the reason for using a downstream classifier to evaluate link prediction. Arguably, link prediction can also be evaluated by applying a sigmoid function on the inner product computed on pairs of node embeddings at time step t. However, we instead choose to train a downstream classifier (as done in node2vec, DynamicTriad etc.) to provide a fair comparison against baselines (such as DynamicTriad), which use other distance metrics (L_1 distance, etc.) for link prediction. We believe this evaluation methodology provides a more flexible framework to fairly evaluate various methods which are trained using different distance/proximity metrics.\"}",
"{\"title\": \"Dynamic graph representation learning with self-attention\", \"review\": \"This paper describes learning representations for dynamic graphs using structural and temporal self-attention layers. The authors apply their method to the task of link prediction. However, I have serious objections to their experimental setup. I have seen people use sets of edges and pairs of vertices without an edge to create examples for link prediction on a static graph; however, working with a real-world dynamic graph, you can compute the difference between G_t and G_{t+1} as the changes that occur in G_{t+1}. 1) Why are you not trying to predict these changes? Moreover, 2) why do you need examples from snapshot t+1 for training when you have already observed t snapshots of the graph?\\n3) The selected graphs are very small compared to the dynamic graphs available here http://konect.uni-koblenz.de/networks/.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Dynamic Self-Attention Network\", \"review\": \"This paper proposes a model for learning node embedding vectors of dynamic graphs, whose edge topology may change. The proposed model, called Dynamic Self-Attention Network (DySAT), uses an attention mechanism to represent the interaction of spatially neighbouring nodes, which is closely related to the Graph Attention Network. For the temporal dependency between successive graphs, DySAT also uses an attention structure inspired by previous work in machine translation. Experiments on 4 datasets show that DySAT can improve the AUC of link prediction by significant margins, compared to static graph methods and other dynamic graph methods. Though the attention structures in this paper are not original, combining these structures and applying them to dynamic graph embedding is new.\", \"here_are_some_questions\": \"1. What will happen if a never-seen node appears at t+1? The model design seems to be compatible with this case. The structural attention will still work, however, the temporal attention degenerates to a \\u201cstatic\\u201d result --- all the attention focuses on the representation at t+1. I am curious about the model performance in this situation, since nodes may arise and vanish in real applications.\\n\\n2. What is the performance of the proposed algorithm for multi-step forecasting? In the experiments, the graph at t+1 is evaluated using the model trained up to graph_t. However, in real applications we may not have enough time to retrain the model at every time step. If we use the model trained up to graph_t to compute node embeddings for graph_{t+n}, what is the advantage of DySAT over static methods?\\n\\n3. What is the running time for a single training process?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, lacks comparison against a few key baselines.\", \"review\": \"This is a well-written paper studying the important problem of dynamic network embedding. Please find below some pros and cons of this paper.\", \"pros\": [\"Studies the important problem of network embedding under a more realistic setting (i.e., nodes & edges evolve over time).\", \"Introduces an interesting architecture that uses two forms of attention: structural and temporal.\", \"Demonstrates the effectiveness of the temporal layers through additional experiments (in appendix) and also introduces a variant of their proposed approach which can be trained incrementally using only the last snapshot.\"], \"cons\": \"* The authors compared against several dynamic & static graph embedding approaches. If we disregard the proposed approach (DySAT), the static methods seem to match and even, in some cases, beat the dynamic approaches on the compared temporal graph datasets. The authors should compare against stronger baselines for static node embedding, particularly GAT, which introduced the structural attention that DySAT uses, to show that the modeling of temporal dependencies is necessary/useful. Please see [1] for an easy way to train GCN/GAT for link prediction.\\n* There are actually quite a number of works on network embedding on dynamic graphs, including [2-4]. In particular, [2-3] support node attributes as well as the addition/deletion of nodes & edges. The authors should also compare against these works.\\n* The concept of temporal attention is quite interesting. However, the authors do not provide more analysis on this. For one, I am interested to see how the temporal attention weights are distributed. Are they focused on the more recent snapshots? If so, can we simply retain the more relevant recent information and train a static network embedding approach? Or are the attention weights distributed differently?\\n\\n[1] Modeling Polypharmacy Side Effects with Graph Convolutional Networks. Zitnik et. al. BioInformatics 2018. \\n[2] Attributed Network Embedding for Learning in a Dynamic Environment. Li et. al. In Proc. CIKM '17. \\n[3] Streaming Link Prediction on Dynamic Attributed Networks. Li et. al. In Proc. WSDM '18. \\n[4] Continuous-Time Dynamic Network Embeddings. Nguyen et. al. In Comp. Proc. WWW '18.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"re: many prior works missing\", \"comment\": \"Thank you very much for providing a detailed description of prior work on deep learning on graphs.\\nWe are aware of most of the works that you have mentioned, which fall into the category of static graph representation learning. Due to our focus on the dynamic graph setting and limited space, we limit our attention mainly to the most recent and related state-of-the-art works GCN (Kipf & Welling), GraphSAGE (Hamilton et al.) and GAT (Veli\\u010dkovi\\u0107 et al.). However, we agree that the mentioned papers are relevant and we will be sure to cite and discuss them in a subsequent version of our paper.\"}",
"{\"comment\": \"I would like to draw the authors' attention to multiple recent works on deep learning on graphs directly related to their work. Among spectral-domain methods, the fundamental work of Bruna et al. [1] has started the recent interest in deep learning on graphs. Replacing the explicit computation of the Laplacian eigenbasis of the spectral CNNs in [1] with polynomial [2] and rational [3] filter functions is a very popular approach (the cited method of Kipf&Welling is a particular setting of [1]). On the other hand, there are several spatial-domain methods that generalize the notion of patches on graphs. These methods originate from works on deep learning on manifolds in computer graphics and were recently applied to graphs, e.g. the Mixture Model Networks (MoNet) [4] (Note that the cited Graph Attention Networks (GAT) of Veli\\u010dkovi\\u0107 et al. are a particular setting of [4]). The MoNet architecture was generalized in [5] using more general learnable local operators and dynamic graph updates. A further generalization of GAT is the dual graph attention mechanism [6]. Finally, the authors may refer to a review paper [7] on non-Euclidean deep learning methods.\\n\\n\\n1. Spectral Networks and Locally Connected Networks on Graphs, arXiv:1312.6203.\\n\\n2. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, arXiv:1606.09375.\\n\\n3. CayleyNets: Graph convolutional neural networks with complex rational spectral filters, arXiv:1705.07664.\\n\\n4. Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017. \\n\\n5. Dynamic Graph CNN for learning on point clouds, arXiv:1712.00268.\\n\\n6. Dual-Primal Graph Convolutional Networks, arXiv:1806.00770.\\n\\n7. Geometric deep learning: going beyond Euclidean data, IEEE Signal Processing Magazine, 34(4):18-42, 2017.\", \"title\": \"many prior works missing\"}"
]
} |
|
BJgolhR9Km | Neural Networks with Structural Resistance to Adversarial Attacks | [
"Luca de Alfaro"
] | In adversarial attacks to machine-learning classifiers, small perturbations are added to input that is correctly classified. The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are misclassified. In standard neural networks used for deep learning, attackers can craft adversarial examples from most input to cause a misclassification of their choice.
We introduce a new type of network units, called RBFI units, whose non-linear structure makes them inherently resistant to adversarial attacks. On permutation-invariant MNIST, in the absence of adversarial attacks, networks using RBFI units match the performance of networks using sigmoid units, and are slightly below the accuracy of networks with ReLU units. When subjected to adversarial attacks based on projected gradient descent or fast gradient-sign methods, networks with RBFI units retain accuracies above 75%, while networks with ReLU or sigmoid units see their accuracies reduced to below 1%.
Further, RBFI networks trained on regular input either exceed or closely match the accuracy of sigmoid and ReLU networks trained with the help of adversarial examples.
The non-linear structure of RBFI units makes them difficult to train using standard gradient descent. We show that networks of RBFI units can be efficiently trained to high accuracies using pseudogradients, computed using functions especially crafted to facilitate learning instead of their true derivatives.
| [
"machine learning",
"adversarial attacks"
] | https://openreview.net/pdf?id=BJgolhR9Km | https://openreview.net/forum?id=BJgolhR9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkxJTYISlE",
"BkgelgBDp7",
"SkxBghB537",
"SJewMf7q2m",
"HJeGSgFK37",
"BJl-4nFP2m",
"HJx7kKdP27"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1545066935218,
1542045672111,
1541196781249,
1541186063286,
1541144633597,
1541016616799,
1541011675421
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1109/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1109/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1109/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1109/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1109/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1109/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a novel unit making the networks intrinsically more robust to gradient-based adversarial attacks. The authors have addressed some concerns of the reviewers (e.g. regarding pseudo-gradient attacks) but experimental section could benefit from a larger scale evaluation (e.g. Imagenet).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}",
"{\"title\": \"Revised the paper to include the results of attacks conducted using pseudogradients\", \"comment\": \"We have revised the paper, including the results of attacks against RBFI networks that are conducted using pseudogradients.\\nThe more complete results still support the robustness claim for RBFI networks with respect to adversarial attacks. \\n\\nIndeed, we had experimented with pseudogradient-based attacks before submitting the paper, and we had then decided to omit the results. Mainly, we thought that pseudogradients were an ad-hoc idea, and that while it was acceptable to use such a non-standard idea in training, it was best to keep to standard notions -- standard attacks, and true gradients -- for evaluation. \\n\\nIn view of the comments, we now agree that including the results on attacks based on pseudogradients is of interest. As mentioned, the paper now contains results for both gradient- and pseudogradient-based attacks against RBFI networks.\"}",
"{\"title\": \"An interesting idea, but needs more comprehensive/diverse evaluations\", \"review\": \"This paper introduces a new neural network layer for the purposes of defending against \\\"white-box\\\" adversarial attacks (in which the adversary is provided access to the neural network parameters). The new network unit and its activation function are constructed in such a way that the local gradient is sparse and therefore is difficult to exploit to add adversarial shifts to the input. To train the networks in the presence of a sparse gradient signal, the authors introduce a \\\"pseudogradient\\\", and optimize this proxy-gradient to optimize the parameters. This training procedure shows competitive performance (after training) on the permutation-invariant MNIST dataset versus other more standard network architectures, but is more robust to both adversarial attacks and random noise.\", \"high_level_comments\": [\"Using only a single dataset, and one on which the classification problem is rather easy, is cause for concern. I would need to see performance on another dataset, like CIFAR 10, to be more convinced that this is a general pipeline. In Sec 4, the authors mention that, using the pseudogradient, \\\"one may be concerned that ... we may converge ... and yet, we are not at a minimum of the loss function\\\". They claim that \\\"in practice it does not seem to be a problem\\\" on their experiments. This claim is a bit weak considering only a single, simple dataset was used for training. It is not obvious to me that this would succeed for more complex datasets.\", \"I would also like to see an additional set of adversarial attacks that are \\\"RBFI-aware\\\". A motivated attacker who is aware of this technique might replace the gradient in the adversarial attack with the pseudogradient instead; I expect such an attack would be effective. 
While problematic in general, I do not think this is necessarily an overall weakness of the paper (since we, the community, should be investigating methods like these to obfuscate the process of exploiting neural network models), but I would still like to see results showing the impact/performance of adversarial training over the pseudo-gradient. (I do not expect this will be very much effort.)\", \"What is the purpose of showing robustness of your network models to random noise? It is nice/interesting to see that your results are more robust to random noise, but what is the intuition for why your network performs better?\"], \"wording_and_minor_comments\": [\"The abstract is rather lengthy, but should probably contain somewhere a spelling-out of RBFI, since it informs the reader that the radial basis function (with infinity-norm) is the structure of the new network unit.\", \"Sec 4: \\\"...indicate that pseudogradients work much better than regular gradients\\\" :: Please be more clear that this is context specific \\\"...than regular gradients for training RBFI networks\\\".\", \"Sec. 4 :: Try to be consistent to how you specify \\\"z\\\" in this section, you alternate between the 'infinity-norm' definition and the 'max' definition from Eq. (2). Try to homogenize these.\", \"In general, the paper was well-proofed and well-written and was easy to read (high clarity).\", \"To my knowledge, this work is a rather unique foray into solving this problem (original).\", \"Overall, I think this work is an interesting idea to address a rather important concern in the Deep Learning community. While the idea has merit, the small set of experiments in this paper is not sufficiently compelling for me to immediately recommend publication. With a bit more work put into exploring the performance of this method on other datasets, this paper could be made more complete. 
(Also, since I am aware that space is limited, some of the details on the adversarial attacks from other publications can probably be moved to an appendix.)\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Experiments are not convincing\", \"review\": \"This paper proposes an infinity norm variant of the RBF as the activation function of neural networks. The authors demonstrate that the proposed unit is less sensitive to the outliers generated by adversarial attacks, and the experimental results on MNIST confirmed the robustness of the proposed method against several gradient-based attacks.\\n\\nIntuitively, the idea should work well against the features of adversarial examples which are far from the center of the cluster of \\\"normal\\\" features. However, the experiments are not convincing enough to show this point, and the entire method looks like a simple gradient masking technique. In my opinion, two types of experiments should be further considered:\\n\\n1. Pseudo-gradient-based attacks. Since the networks are trained using pseudogradients, all the attacks utilized in this paper should be pseudo-gradient-based as well.\\n\\n2. Black-box attacks which do not rely on the information provided by gradients, such as transferable adversarial examples.\\n\\nFurthermore, the robustness revealed on the \\\"noise\\\" attack is interesting; I wish the authors could provide an analysis of the effects on feature distributions using different types of attacks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, but limited evaluation of effectiveness as a defense\", \"review\": \"Summary: The paper proposes a new architecture to defend against adversarial examples. The authors propose a network with new type of hidden units (RBFI units). They also provide a training algorithm to train such networks and evaluate the robustness of these models against different attacks in the literature.\", \"main_concern\": \"I think the idea proposed here of using RBFI units is very interesting and intuitive. As pointed out in the paper, the RBFI units make it difficult to train networks using standard gradient descent, because the gradients can be uninformative. They propose a new training algorithm based on \\\"pseudogradients\\\" to mitigate this problem. However, while evaluating the model against attacks, only gradient based attacks are used (like PGD attack of Madry et al., or Carlini and Wagner). It's natural to expect that since the gradients are uninformative, these attacks might fail. However, what if we considered similar \\\"pseudogradient\\\" based attacks? In particular, just use the same training procedure formulation to attack (where instead of minimizing loss like in training, we maximize loss)?\\nI think this key experiment is missing in the paper and without this evaluation, it's hard to claim whether the models are more robust fundamentally, or it's just gradient masking.\", \"revision\": \"After the authors revision, I change my score since they addressed my main complaint about results using pseudogradient attacks\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"There is some gradient masking in (I-)FGSM, less (or none?) in PGD\", \"comment\": [\"Quick summary:\", \"There is gradient masking for FGSM and I-FGSM, hence we used also PGD as comparison. PGD indeed provides more informative results than FGSM and I-FGSM.\", \"PGD, especially when using pseudogradients, can find most adversarial examples (see below for data). For epsilon = 0.3, it still supports the finding that the accuracy of RBFI is above 90%.\"], \"full_answer\": \"For FGSM and I-FGSM, the nets with RBFI units (let's call them RBFI nets) do present gradient masking. Gradient masking means that the gradient at the input points is not very helpful in finding adversarial examples, and so experiments with FGSM and I-FGSM over-estimate accuracy. \\n\\nSince, as we noted in the paper, there is masking for FGSM and I-FGSM, we included also the results for PGD (projected gradient descent) with multiple restarts. The restart points are chosen uniformly at random in the neighborhood of size epsilon of each input point. Using PGD in this way to look for adversarial examples, in the case of the infinity norm, is what is done in Madry et al. (Towards Deep Learning Models Resistant to Adversarial Attacks) and advocated in a series of papers by Carlini and Wagner. In a sense, if you want to look for adversarial examples in a region, the most general thing you can do is use a general optimization method such as PGD. \\n\\nSo the question is, how good is PGD? Is it also affected by masking?\\nTo answer this, we conducted two additional experiments, using more than the 20 restarts used in the paper. We considered the following values for epsilon: \\n\\nepsilon = 0.3 : for this epsilon, we believe RBFI does better than ReLU\\nepsilon = 0.5 : for this epsilon, we should obtain 0% accuracy, since any pixel can be turned to middle gray. 
However, since digits contain black/white, the middle-gray conversion is right at the extreme of the 0.5-neighborhood, so we include also \nepsilon = 0.55 : for which the accuracy should be 0%. \n\nUsing PGD with 500 random restarts, we find the following accuracy as a function of the number of restarts:\n\nEpsilon = 0.3, Accuracy at 10 restarts 94.4%\nEpsilon = 0.3, Accuracy at 100 restarts 92.4%\nEpsilon = 0.3, Accuracy at 500 restarts 91.8%\n\nEpsilon = 0.5, Accuracy at 10 restarts 50.8%\nEpsilon = 0.5, Accuracy at 100 restarts 28.6%\nEpsilon = 0.5, Accuracy at 500 restarts 20.6%\n\nEpsilon = 0.55, Accuracy at 10 restarts 32.2%\nEpsilon = 0.55, Accuracy at 100 restarts 12.8%\nEpsilon = 0.55, Accuracy at 500 restarts 8.0%\n\nIf we use pseudogradients in PGD, then PGD becomes even more effective at finding adversarial inputs: \n\nEpsilon = 0.3, Accuracy at 10 restarts 91.0%\nEpsilon = 0.3, Accuracy at 100 restarts 90.7%\n\nEpsilon = 0.5, Accuracy at 10 restarts 6.4%\nEpsilon = 0.5, Accuracy at 100 restarts 4.5%\n\nEpsilon = 0.55, Accuracy at 10 restarts 1.1%\nEpsilon = 0.55, Accuracy at 100 restarts 0.7%\n\nAs we see, for epsilon = 0.5 and epsilon = 0.55, PGD, especially using pseudogradients, is able to find the vast majority of adversarial examples with 100 restarts. Since for epsilon = 0.3, the accuracy is still above 90%, we have a strong indication that the true accuracy of RBFI networks, in the presence of adversarial attacks with epsilon = 0.3, is above about 90%. \n\nWe agree that this data should be included, at least in summary form, in the paper.\"}",
"{\"comment\": \"It looks like this network might just be masking gradients. Can you check what happens when you extend Figure 1 to eps=0.5? Accuracy should drop to at most 10% if the attack is not being broken by gradient masking.\", \"title\": \"Gradient masking?\"}"
]
} |
|
Syeil309tX | Optimized Gated Deep Learning Architectures for Sensor Fusion | [
"Myung Seok Shim",
"Peng Li"
] | Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control. Deep neural networks have been adopted for sensor fusion in a body of recent studies. Among these, the so-called netgated architecture was proposed, which has demonstrated improved performances over the conventional convolutional neural networks (CNN). In this paper, we address several limitations of the baseline netgated architecture by proposing two further optimized architectures: a coarser-grained gated architecture employing (feature) group-level fusion weights and a two-stage gated architecture leveraging both the group-level and feature-level fusion weights. Using driving mode prediction and human activity recognition datasets, we demonstrate the significant performance improvements brought by the proposed gated architectures and also their robustness in the presence of sensor noise and failures.
| [
"deep learning",
"convolutional neural network",
"sensor fusion",
"activity recognition"
] | https://openreview.net/pdf?id=Syeil309tX | https://openreview.net/forum?id=Syeil309tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bketf9iAJE",
"rylmrpAo2Q",
"HJluzZxjhX",
"SJgnfvEqh7"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544628752954,
1541299515287,
1541239056201,
1541191444428
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1108/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1108/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1108/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1108/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper builds up on the gated fusion network architectures, and adapt those approaches to reach improved results. In that it is incrementally worthwhile.\\n\\nAll the same, all reviewers agree that the work is not yet up to par. In particular, the paper is only incremental, and the novelty of it is not clear. It does not relate well to existing work in this field, and the results are not rigorously evaluated; thus its merit is unclear experimentally.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"good work, but not ripe enough\"}",
"{\"title\": \"Interesting Topic but Lacks Insight, Novelty, and Experimental Rigor\", \"review\": [\"This paper tackles the problem of sensor fusion, where multiple (possibly differing) sensor modalities are available and neural network architectures are used to combine information from them to perform prediction tasks. The paper proposed modifications to a gated fusion network specifically: 1) Grouping sets of sensors and concatenating them before further processing, and 2) Performing multi-level fusion where early sensor data representations are concatenated to produce weightings additional to those obtained from features concatenated at a later stage. Experimental results show that these architectures achieve performance gains from 2-6%, especially when sensors are noisy or missing.\", \"Strengths\", \"The architectures encourage fusion at multiple levels (especially the second one), which is a concept that has been successful across the deep learning literature\", \"The paper looks at an interesting topic, especially related to looking at the effects of noise and missing sensors on the gating mechanisms.\", \"The results show some positive performance gains, although see caveats below.\", \"Weaknesses\", \"The related work paragraph is extremely sparse. Fusion is an enormous field (see survey cited in this paper as well [1]), and I find the small choice of fusion results with a YouBot to be strange. A strong set of related work is necessary, focusing on those that are similar to the work. As an example, spatiotemporal fusion (slow fusion [2]) bears some resemblance to this work but there are many others (e.g. [3,4] as a few examples).\", \"[1] Ramachandram, Dhanesh, and Graham W. Taylor. \\\"Deep multimodal learning: A survey on recent advances and trends.\\\" IEEE Signal Processing Magazine 34.6 (2017): 96-108.\", \"[2] Karpathy, Andrej, et al. 
\\\"Large-scale video classification with convolutional neural networks.\\\" Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2014.\", \"[3] Mees, Oier, Andreas Eitel, and Wolfram Burgard. \\\"Choosing smartly: Adaptive multimodal fusion for object detection in changing environments.\\\" Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016.\", \"[4] Kim, J., Koh, J., Kim, Y., Choi, J., Hwang, Y., & Choi, J. W. (2018). Robust Deep Multi-modal Learning Based on Gated Information Fusion Network. arXiv preprint arXiv:1807.06233.\", \"The paper claims to provide a \\\"deep understanding of the relationships between sensory inputs, fusion weights, network architecture, and resulting performance\\\". I don't think it really achieves this with the small examples of weights for some simple situations.\", \"It is very unclear whether the architectures have more or fewer parameters. At one point it is stated that the original architecture overfits and the new architecture has fewer parameters (Sec 2.2 and 3). But then it is stated for fairness the number of neurons is equalized (5.2), and later in that section that the new architectures have additional neurons. Which of these is accurate?\", \"Related to the previous point, and possibly the biggest weakness, the experimental methodology makes it hard to tell if performance is actually improved. For example, it is not clear to me that the performance gains are not just a result of less overfitting (for whatever reason) of baselines and that the fixed number of epochs therefore results in stopping at a better performance. Please show training and validation curves so that we can see whether the epochs chosen for the baselines are not just chosen after overfitting (in which case early stopping will improve the performance). As another example, there are no variances shown in the bar graphs.\", \"The examples with noise and failures are limited. 
For example, it is also not clear why an increase of noise in the RPM feature (Table 5) actually increases the weight of that group in the two-stage architecture. What does that mean? In general there isn't any principled method proposed for analyzing these situations.\", \"Some minor comments/clarifications:\", \"What is the difference between these gated networks and attentional mechanisms, e.g. alpha attention (see \\\"Attention is all you need\\\" paper)?\", \"What is a principled method to decide on the groupings?\", \"There are several typos throughout the paper\", \"\\\"in the presence of snesor\\\" => \\\"in the presence of sensor\\\"\", \"Throughout the paper: \\\"Predication\\\" => \\\"Prediction\\\"\", \"\\\"Likelihood of stucking the training\\\"\", \"Tensorflow is not a simulation environment\", \"Overall, the paper proposes architectural changes to an existing method for fusion, and while positive results are demonstrated there are several issues in the experimental methodology that make it unclear where the benefits come from. Further, the paper lacks novelty as multi-level fusion has been explored significantly and the changes are rather minor. There is no principled method or concepts that drive the architectural changes, and while the authors claim a deeper investigation into the networks' effectiveness under noise and failures the actual analysis is too shallow.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"2 new architectures for multisensor fusion, with improved results on standard settings and also with noisy/missing modalities\", \"review\": \"Overview and contributions: The authors improve upon several limitations of the baseline negated architecture by proposing 1) a coarser-grained gated fusion architecture and 2) a two-stage gated fusion architecture. The authors show improvements in driving mode prediction and human activity recognition in settings where all modalities are observed as well as settings where there are noisy or missing modalities.\", \"strengths\": \"1. The model seems interesting and tackles the difficult problem of multisensor fusion under both normal and noisy settings.\\n2. Good results obtained on standard benchmarks with improvements in settings where all modalities are observed as well as settings where there are noisy or missing modalities.\", \"weaknesses\": \"1. I am worried about the novelty of the proposed approach. The main idea for the fusion-group gated fusion architecture is to perform additional early fusion of sensory inputs within each group which reduces the number of group-level fusion weights and therefore the number of parameters to tune. The two-stage gated fusion architecture simply combines the baseline model and the proposed fusion-group model. Both these ideas seem relatively incremental.\\n2. Doesn't the final two-stage gated fusion architecture further increase the number of parameters as compared to the baseline model? I believe there are several additional FC-NN blocks in Figure 3 and more attention gating weights. I find this counterintuitive since section 2.2 motivated \\\"Potential Over-fitting\\\" as one drawback of the baseline Netgated architecture. How does the increase in parameters for the final model affect the running time and convergence?\", \"questions_to_authors\": \"1. I don't understand Tables 4,5,6. Why are the results for Group-level Fusion Weight in the middle of several columns? 
Which features are being used in which groups? Please make this clear using vertical separators.\\n2. For the proposed two-stage gated fusion architecture, do the 2 branches learn different things (i.e focus on different portions of the multimodal inputs)? I would have liked to see more visualizations and analysis instead of just qualitative results.\\n\\nPresentation improvements, typos, edits, style, missing references:\\n1. General poor presentation of experimental results. Tables are not clear and bar graphs are not professionally drawn. The paper extends to 9 pages when a lot of space could be saved by making the presentation of experimental results more compact. I believe the guidelines mention that more pages can be used if there are extensive results, but I don't think the experimental results warrant the extra page.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"#Summary\", \"review\": \"This paper proposes two gated deep learning architectures for sensor fusion. Both are based on the previous work of Naman Patel et al.'s modality fusion with CNNs for UGV autonomous driving in indoor environments (IROS). By having the grouped features, the author demonstrated improved performance, especially in the presence of random sensor noise and failures.\\n\\n#Organization/Style:\\nThe paper is well written, organized, and clear on most points. A few minor points:\\n1) The total length of the paper exceeds 8 pages. Some figures and tables should be adjusted to have it fit into 8 pages.\\n2) The literature review is limited.\\n3) There are clearly some misspellings. For example, the \\\"netgated\\\" is often written as \\\"negated\\\".\\n\\n#Technical Accuracy:\\nThe two architectures that the author proposes are both based on grouped features, which, from my point of view, is a very important and necessary part of the new model. However, the author failed to rigorously prove or clearly demonstrate why this is effective for the new model. Moreover, how to make groups or how many groups are needed is not clearly specified. The experiments used only two completely different datasets, neither of which is related to the previous sensor fusion method they are trying to compete with. I'm afraid this method cannot generalize to a common case.\\n\\nIn addition, if we look at Table 4 and Table 5, we can find the first Group-level Fusion Weight actually increases, which seems contradictory to the result shown in Table 6.\\n\\n#Adequacy of Citations: \\nPoor coverage of literature in sensor fusion. Fewer than 10 references are related to sensor fusion.\\n\\nOverall, it is not an ICLR standard paper.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryeoxnRqKQ | NATTACK: A STRONG AND UNIVERSAL GAUSSIAN BLACK-BOX ADVERSARIAL ATTACK | [
"Yandong Li",
"Lijun Li",
"Liqiang Wang",
"Tong Zhang",
"Boqing Gong"
] | Recent works find that DNNs are vulnerable to adversarial examples, whose changes from the benign ones are imperceptible and yet lead DNNs to make wrong predictions. One can find various adversarial examples for the same input to a DNN using different attack methods. In other words, there is a population of adversarial examples, instead of only one, for any input to a DNN. By explicitly modeling this adversarial population with a Gaussian distribution, we propose a new black-box attack called NATTACK. The adversarial attack is hence formalized as an optimization problem, which searches the mean of the Gaussian under the guidance of increasing the target DNN's prediction error. NATTACK achieves 100% attack success rate on six out of eleven recently published defense methods (and greater than 90% for four), all using the same algorithm. Such results are on par with or better than powerful state-of-the-art white-box attacks. While the white-box attacks are often model-specific or defense-specific, the proposed black-box NATTACK is universally applicable to different defenses. | [
"adversarial attack",
"black-box",
"evolutional strategy",
"policy gradient"
] | https://openreview.net/pdf?id=ryeoxnRqKQ | https://openreview.net/forum?id=ryeoxnRqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bk2e8tklE",
"ryxoR3ptyE",
"rJxJgUsYkE",
"SklmUtNFJE",
"r1eZ4M7N14",
"BylsJD_90X",
"BJe1JZucAm",
"ByeQ2E_dAQ",
"Hyxdoxb8CX",
"HyxHfJWUCm",
"rygk8TgICX",
"rkxO_clI07",
"BkeY7txIR7",
"r1lSC_3T2m",
"SklRyhS5nQ",
"ByeeVjH92m",
"HJe1uvS52Q",
"H1eCOHrqhQ",
"S1ekDzBc2m",
"SkleNxHqhQ",
"BylxbaE9nX",
"S1gc1YNc3Q",
"B1xPOxN527",
"BkeM4gQq2X",
"BkgZeTW5hX",
"Bkx94ECt2m",
"S1g3Hhj_hm",
"BkxNNaW_hX",
"SyepS_mP3m",
"HJg9cjD6o7",
"HkxngDtniX",
"B1e2hLFnoX",
"rklrRiqY5Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1544685043517,
1544309970939,
1544300007439,
1544272203454,
1543938601107,
1543304931410,
1543303383352,
1543173290935,
1543012511737,
1543012108576,
1543011654960,
1543010927868,
1543010593112,
1541421261304,
1541196774352,
1541196584319,
1541195622879,
1541195125782,
1541194326811,
1541193767907,
1541192952415,
1541191906316,
1541189742932,
1541185577633,
1541180648694,
1541166130053,
1541090371911,
1541049644313,
1540991045317,
1540352913905,
1540294388145,
1540294323880,
1539054541123
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1106/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"(anonymous)"
],
[
"~Nicholas_Carlini1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1106/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1106/AnonReviewer3"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Although one review is favorable, it does not make a strong enough case for accepting this paper. Thus there is not sufficient support in the reviews to accept this paper.\\n\\nI am recommending rejecting this submission for multiple reasons.\\n\\nGiven that this is a \\\"black box\\\" attack formalized as an optimization problem, the method must be compared to other approaches in the large field of derivative-free optimization. There are many techniques including: Bayesian optimization, (other) evolutionary algorithms, simulated annealing, Nelder-Mead, coordinate descent, etc. Since the method of the paper does not use anything about the structure of the problem it can be applied to other derivative-free optimization problems that had the same search constraint. However, the paper does not provide evidence that it has advanced the state of the art in derivative-free optimization.\\n\\nThe method the paper describes does not need a new name and is an obvious variation of existing evolutionary algorithms. Someone facing the same problem could easily reinvent the exact method of the paper without reading it and this limits the value of the contribution.\\n\\nFinally, this paper amounts to breaking already broken defenses, which is not an activity of high value to the community at this stage and also limits the contribution of this work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"the only favorable review does not make a convincing argument to accept the paper\"}",
"{\"title\": \"Steps 1--4 generate valid adversarial examples\", \"comment\": \"Not sure why you are obsessed with the minimal adversarial examples. If those are what you are looking for, our paper does not provide a direct answer though you probably can derive one based on our work.\\n\\nKindly check Steps 1--4 in the paper which generate valid adversarial examples whose differences from the original images are imperceptible up to the thresholds $\\\\tau_p$, $p=2$ or $\\\\infty$.\"}",
"{\"title\": \"minimal adversarial examples?\", \"comment\": \"Sampling from this distribution would have little to do with minimal adversarial examples: if you succeeded in modelling all adversarials, this would include the vast majority that are far away from the clean image. Sampling would mainly return large adversarial perturbations, not small ones.\"}",
"{\"title\": \"Mode is not the mean\", \"comment\": \"If you referred to a parametric distribution by the \\\"more powerful distribution\\\", we learn the parameters to specify the distribution as we described earlier. Once we reach such a distribution, we are supposed to either sample from it or use the mode to generate an adversarial example following Steps 1--4 in the paper. The mean of a Gaussian coincides with its mode.\"}",
"{\"title\": \"you don't want to model the distribution of adversarials\", \"comment\": \"You don't want to model the distribution of adversarial examples. Say you use a more powerful distribution that is really capable of capturing the distribution of adversarial examples. This distribution would comprise the whole input space for which the model decision is different from the label of the original image, right? You would then take the mean of that distribution (at least that's what you do right now). But that point would certainly be far away from the minimum adversarial perturbation that you seek.\"}",
"{\"title\": \"Understanding our approach\", \"comment\": \"Regarding the \\\"whole\\\" wording, we are fine to remove the \\\"whole\\\" because\\n\\\"the whole population of adversarial examples per image\\\" \\nis actually equivalent to \\n\\\"the population of adversarial examples per image\\\". \\nGiven an image, all its adversarial examples comprise the population. We use a Gaussian distribution to model this population in this work. In the future, other multi-variate continuous distributions, like GMM or uniform, may be found a better fit to the population. Additionally, one may also consider to capture the population by non-parametric distributions. No matter which one --- including the Gaussian, what it models is the population per image and not the local region. \\n\\nThanks to the above, we formalize our problem as minimizing the expected loss under the Gaussian distribution. This problem formulation is different from any problem formulations of the existing white-box attack methods. In contrast, BPDA and QL are built upon PGD (and CW for BPDA). PGD is the basic framework for them and they only (and yet non-trivially) replace the true gradients by the estimated ones. In this sense, we said we did not employ any white-box attack methods. We also agree with the reviewer that there is no sharp contrast between BIM and MIM because both are based on the similar principle of solving the following optimization problem: $min_{perturbation} Loss$. In contrast, ours is $min_{Gaussian mean} Expectation Loss$.\"}",
"{\"title\": \"Regarding the experiments\", \"comment\": \"We have tuned the hyper-parameters of the competing methods (BPDA, QL, ZOO, D-based) in order to achieve the best performances they could have. The ES part is to approximate an expectation by a sample mean, so we believe it is fair to fix the sample size for QL and our algorithm --- we did tune the other hyper-parameters in QL such as the learning rate and number of iterations. (QL actually doubles the sample size by reversing the signs of the samples.)\\n\\nFor all of the competing methods but the decision-based, we used the code released by the original authors in the experiments. We did not find the official implementation of the decision-based method (D-based) due to the deadline rush; instead, this implementation (https://github.com/greentfrapp/boundary-attack) was employed. Thanks to the reviewer's question, we tested this implementation using the evaluation metric reported in the original publication and only found it failed to re-produce the reported results. Upon a second search, we found the \\\"foolbox\\\" implementation (https://github.com/bethgelab/foolbox) of D-based. With it, we re-produced the reported results and obtained 66% success rate on attacking INPUT-TRANS (ours: 100%, BPDA: 100%, QL: 66.5%, and ZOO: 38.3%). Regarding the D-based experiments for generating the $\\\\ell_infty$ bounded adversarial examples, we re-wrote the norm function in the two implementations and did not observe any good results. We agree with the reviewer that, since D-based was particularly tailored for the $\\\\ell_2$ bounded adversarial examples, that simple change of norm is not good enough to improve D-based for handling the $\\\\ell_infty$ metric. More careful work has to be done to modify the D-based method to fit the $\\\\ell_infty$ context; for example, the projection to the $\\\\ell_2$ sphere has to be updated by the projection to the $\\\\ell_infty$ polygon. We have updated the PDF.\"}",
"{\"title\": \"The \\\"sharp contrast\\\" you are trying to construct does not exist\", \"comment\": \"Dear authors,\\nfirst of all let me say that I do appreciate the effort you put into revising the manuscript and the rebuttal. I have, however, a couple of question marks behind your responses:\\n\\n1) Bad performance of decision- and score-based attacks\\n\\nFirst, [2] is an L2-based attack but you are applying it in an L-infinity scenario, which doesn't make sense. Second, the decision-based attack [2] and other score-based black-box attacks do perform very well on e.g. Madry et al. (MNIST) or the analysis by synthesis model [3], see results in [3]. In fact, on Madry et al. [2] performs much better than e.g. gradient-based BIM. I hence strongly doubt the results related to [2].\\n\\n2) \\\"We run QL using the same hyper-parameters as N ATTACK for the ES part\\\"\\n\\nThat's not a fair comparison because the optimal parameters for QL are likely different (e.g. because of the clipping) than for the N Attack. Please compare the attacks with hyper-parameters optimised for each attack.\\n\\n3) On the contrary, we do not employ any white-box attack methods at all in developing our algorithm.\\n\\nI think you are confusing what is meant by \\\"white-box\\\": white-box refers to whether or not you are using the backpropagated gradient (which requires you to have access to the internal structure and weights of the model). PGD is white-box if you use the exact gradient and score-based if you use estimated gradients. Similarly, your method is performing a gradient descent using an estimated gradient. I fail to see how that fundamentally differs from QL.\\n\\n4) \\\"this work is the first to capture the whole population of adversarial examples per image\\\"\\n\\nYou are not capturing the whole distribution, you are capturing a local Gaussian region. I really do not like the whole wording around \\\"populations\\\" and find that confusing and misleading. 
Your motivation in the end is exactly equivalent to ES, you are just using the gradient in a different way.\\n\\n5) In the updated manuscript you write: \\\"In sharp contrast, we do not employ any white-box attack methods at all and, instead, provide a novel perspective to the adversarial attack by modeling the whole population of adversarial examples for every single image. This change alleviates the dependence on the gradients and leads to big differences in terms of the attack results.\\\"\\n\\nIn line with what I wrote above I find this part extremely misleading. Of course your method is based on a \\\"white-box attack method\\\" - it's called gradient descent. There is no sharp contrast, just as there is no sharp contrast between BIM and MIM; both are based on a similar principle. I think it would be much better if you would tune your paper to say that you have developed a more effective ES-based attack and show that a PGD-based ES attack doesn't work as well on defended networks.\"}",
"{\"title\": \"Summary of changes in the new PDF\", \"comment\": \"Let QL denote (Ilyas et al., 2018)\\u2019s approach. We have added to the revised PDF\\n+ new results on attacking Adv-Train (Madry et al., 2018) (Table 1), \\n+ a new paragraph to draw readers\\u2019 attention to QL upfront (cf. the highlighted text in the introduction), \\n+ new results of QL on attacking the 10 defense methods (Table 1), \\n+ a new section (Section 3.1.3) to carefully investigate the factors that contribute to the inferior performance of the QL algorithm. The results reveal that, in order to improve its attack success rates, it is vital to get rid of PGD (projection and the sign of the gradients), which is the foundation upon which QL is built, and meanwhile to couple the $\\ell_\\infty$ clip with the tanh transformation. \\n\\nThanks to the careful experimental investigation, we make the following conclusions. \\n\\n1) QL hinges on the white-box PGD attack --- in terms of methodology, it is actually closer to [1] than to ours because both QL and [1] essentially approximate the gradients for PGD. As a result, the quality of the estimated gradients in QL matters a great deal. Unfortunately, ES does not give rise to stable gradients due to the sampling step and PGD\\u2019s projection and sign functions, especially when the gradients are \\\"obfuscated\\\". On the contrary, we do not employ any white-box attack methods at all in developing our algorithm. The Gaussian mean is more important than the gradients in our approach. Whereas ES is a natural choice to search for the Gaussian mean, some derivative-free methods [2] are also good alternatives. \\n\\n2) The seemingly subtle algorithmic distinction between QL and ours actually leads to significantly different attack success rates. In order to improve QL\\u2019s performance, it is vital to remove PGD, the foundation upon which QL is built. \\n\\n[1] Anish Athalye, Nicholas Carlini, and David Wagner. 
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018. \\n[2] Luis Miguel Rios and Nikolaos V Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247\\u20131293, 2013.\"}",
"{\"title\": \"Results\", \"comment\": \"Here are the success rates on attacking the vanilla PGD training (Madry et al., 2018) on CIFAR10:\", \"bpda\": \"46.9%\", \"ours\": \"47.9%\\n\\nThe classification accuracy of the PGD-defended network is 87.3% on CIFAR10.\", \"conclusion_1\": \"The vanilla PGD training is strong.\", \"conclusion_2\": \"The vanilla PGD training sacrifices the performance to a certain degree on the original classification task, as do some other defense techniques (cf. Table 1 in the PDF).\"}",
"{\"title\": \"PDF updated & responses to questions\", \"comment\": \"Q: the attack introduced here is actually equivalent to (Ilyas et al., 2018).\", \"a\": \"We have to point out that the distribution over adversarial examples per image is different from the distribution over the transformations for a physical adversarial example in the real world. In order to photograph a real-world adversarial example, it is natural to consider all the conditions (location, background, lighting, etc.) as a distribution of transformations. In contrast, it is not so obvious to model the adversarial examples for every single image by a distribution. To the best of our knowledge, this work is the first to capture the whole population of adversarial examples per image.\", \"q\": \"The concept of adversarial distributions is not new\"}",
"{\"title\": \"PDF revised with extensive discussion on [1]; Curves updated\", \"comment\": \"Q: The paper would have to change significantly in order to relate it properly to (Ilyas et al., 2018).\", \"a\": \"Sorry for the confusion about Figure 1. First of all, we did not include all the defense methods in Figure 1 due to the heavy run time on ImageNet. Besides, for each attack method, we had removed all the examples for which it failed to change the labels. Our intention was to compare the relative convergences when their last steps are aligned. Upon reading your comments, however, we think this alignment is actually unnecessary and should be removed. In the revised PDF submission, you can see that some of the attack methods fail to reach 100% success rate.\\n\\nWe will add some example adversarial images in the appendix, but adversarial examples at $\\ell_\\infty = 0.031$ are hardly distinguishable from the benign ones.\", \"q\": \"Example adversarial examples to baseline the figure:\"}",
"{\"title\": \"Appreciated; Have re-named the curve\", \"comment\": \"Thank you for the encouraging comments! Regarding the name of the curve, we have removed \\u201cROC\\u201d and now simply call it the curve of success rate vs. number of evolution iterations. We will continue to polish the text.\"}",
"{\"title\": \"original\", \"review\": \"In this paper, authors propose a \\\"universal\\\" Gaussian black-box adversarial attack.\\nOriginal, well-written (although there are a few grammar mistakes that would require some revision) and well structured. Having followed the comments and discussion I am convinced that the proposed method is state of the art and interesting enough for ICLR.\\nTo the best of my knowledge, the study is technically sound.\\nIt fairly accounts for recent literature in the field.\\nExperiments are convincing.\\nOne thing I am not so convinced about is the naming of the evaluation curve as \\\"a new ROC curve\\\". I understand the appeal of pairing the proposed evaluation curve with the ROC curve but, beyond an arguable resemblance, they do not have much in common, really.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Got it\", \"comment\": \"Got it. Thanks for the pointers!\"}",
"{\"comment\": \"FYI, Madry et al. released publicly available pre-trained weights: https://github.com/MadryLab/cifar10_challenge\", \"title\": \"Public code\"}",
"{\"title\": \"Will do\", \"comment\": \"Oops, I misunderstood your earlier question. We are running experiments against the vanilla PGD-defended CNN. As Athalye et al. (2018) did not release this very strong model, we had to train it ourselves. Actually, we will ask them for that model by email now. Stay tuned please.\"}",
"{\"comment\": \"You don't have to use the Thermometer encoding anymore. I mean the pure vanilla PGD training which Athalye et al. (2018) were unable to defeat.\", \"title\": \"standard adversarial training\"}",
"{\"title\": \"Response to \\\"Vanilla PGD training\\\"\", \"comment\": \"Since Therm discretizes the input, it prevents one from simply applying the gradient projection in PGD, not to mention that the gradients are estimated through other methods like BPDA or DGA (Buckman et al., 2018). Did you mean that DGA is a more faithful application of (Madry et al., 2018)'s defense to Therm than LS-PGA? However, DGA has been shown to be a weak attack, so it likely cannot lead to a strong defense (e.g., called Therm-Adv-DGA) either.\"}",
"{\"comment\": \"Thanks for the answer. I still have a question. Vanilla PGD training is more robust than Therm-Adv.\\nFor (2), why do you not use vanilla PGD training instead of Therm-Adv?\", \"title\": \"Vanilla PGD training\"}",
"{\"title\": \"Answers to (1) & (2)\", \"comment\": \"Regarding (2), thank @Nicholas for the catch! We will cite both (Buckman et al., 2018) and (Madry et al., 2018) in the revised paper. It is worth noting that vanilla PGD does not apply to Therm. As a result, the LS-PGA enhanced Therm-Adv is probably one of the best one can do in order to apply Madry et al. (2018)'s defense principle to Therm.\\n\\nRegarding (1), you may consider the Gaussian distribution along with the steps 1--4 in the paper as the threat model. We do not use any substitute network in our approach. The black-box setting: We query a black-box network by an input and obtain its output probability vector. This setting is as standard as many existing works'.\"}",
"{\"comment\": \"Thanks for your comment!\\n\\nLet me state my concern more clearly. I was trying to say that THERM-ADV of Buckman et al. (2018) should not be cited as Madry et al. (2018) for evaluation, since standard adversarial training, Madry et al. (2018), is more robust than THERM-ADV.\", \"title\": \"Clarification\"}",
"{\"comment\": \"The statement as it is written is technically correct. Thermometer encoding by itself is no more robust than a standard neural network. Adding adversarial training to thermometer encoding confers some amount of robustness, but less than standard adversarial training.\\n\\nSo whether or not adversarial training can \\\"significantly improve the defense strength of THERM\\\" depends I guess on your definition of \\\"significantly\\\". In Athalye et al. (2018) we find this difference to be ~20% at eps=8 and ~40% at eps=4.\", \"title\": \"Regarding thermometer encoding\"}",
"{\"comment\": \"Hi, it looks very interesting.\\n\\nHowever, I have a few questions.\\n\\n(1) Could you specify the threat model? For example, I could not find what substitute models are used to generate adversarial examples. What black-box setting did you use?\\n\\n(2) I think you don't actually evaluate your attack on Madry et al. (2018). THERM-ADV did not technically use PGD adversarial examples described in Madry et al. (2018), but LS-PGA examples described in Buckman et al. (2018). In addition, Athalye et al. (2018) argued that THERM-ADV is significantly weaker than Madry et al. (2018) since it is trained against the LS-PGA attacks. Therefore, the argument in your paper, \\\"Athalye et al. (2018) find that the adversarial robust training (Madry et al., 2018) can significantly improve the defense strength of THERM.\\\" may be wrong.\", \"title\": \"Questions\"}",
"{\"title\": \"Clarification\", \"comment\": \"Thanks for asking. We will clarify our previous responses (mainly the paragraph below) by answering three of your questions.\\n\\n----------------------------------------\\nOn the algorithmic aspect, both ours and Ilyas et al. (2018)\\u2019s employ NES as the optimization algorithm. However, we arrive at it via different routes and for different purposes. We assume a probabilistic generation process of the adversarial examples (Steps 1\\u20134, Section 2), which finds an adversarial example by a one-step addition to the input. In contrast, Ilyas et al. (2018)\\u2019s modeling assumption is that an adversarial example can be found by PGD, which iteratively updates the original input with a small learning rate until it becomes adversarial. To this end, we use NES to estimate the parameters of the distribution, while Ilyas et al. (2018) use NES to replace the true (stochastic) gradients in PGD. We contend that, due to the non-differentiable clip and projection operations and the fairly large Gaussian covariance, NES is *not* an efficient (and possibly a biased) estimator of the true gradients \\u2014 we are running experiments to empirically verify if this is true or not. \\n--------------------------------------------------------------\\n\\n== Q1: specify exactly what the difference between NES and your attack is? ==\\n\\nUsing our notation, the pseudo code below sketches our algorithm and Ilyas et al. (2018)\\u2019s.\\n\\nOurs, which searches for the Gaussian from which more than one adversarial examples can be generated.\", \"iterate_until_convergence\": \"1. Draw a sample {e} from the normal distribution\\n2. Transform it to a sample of zero-mean Gaussian by {z=0 + \\\\sigma * \\\\epsilon}\\n3. Generate current adversarial examples by {x + z} and {x - z}\\n4. Compute the losses {J(z)}\\n5. Compute the search gradients {g} by equation (5)\\n6. 
x = Projection(x - r * sign(g))\\nReturn x\\n\\n\\nThe differences start from the second line, where we transform the normal sample to a sample of the Gaussian N(\\theta, \\sigma^2) while Ilyas et al. (2018) transform it following a zero-mean Gaussian N(0, \\sigma^2).\", \"line_3\": \"The difference is in how to generate the adversarial examples.\", \"line_4\": \"Slightly different loss functions are used in the two methods. This is not vital.\\n\\nLine 5 is the same for the two methods.\", \"line_6\": \"While we update the Gaussian mean by a gradient descent step, Ilyas et al. (2018) update the adversarial example by PGD.\\n\\n== Q2: a smaller standard deviation for sampling ==\\nBy using the same setting for the NES component of our algorithm and Ilyas et al. (2018)\\u2019s, including the same sample size and standard deviation, we obtain the comparison results below. Ours still performs better. We will complete the experiments with all the defense methods studied in our paper.\", \"table_1\": \"Success rate on attacking Randomization (ImageNet)\\n# of iterations 30 90 150 210 270 300 360 400\\nours 21.54 78.58 90.02 95.41 95.5 95.5 95.5 95.5\\nIlyas et al. (2018)'s 20.5 46.37 53.33 53.33 53.33 53.33 53.33 53.33 \\n\\n\\n== Q3: you don't perform clipping ==\\nWe did perform clipping in steps 1--4 of the paper, where we generate adversarial examples from a Gaussian distribution. In contrast, Ilyas et al. (2018)\\u2019s performs the clipping of gradients due to its employment of the PGD attack.\"}",
"{\"title\": \"Please specify the exact differences\", \"comment\": \"I appreciate the additional experiments, thanks! Could you specify exactly what the difference between NES and your attack is? If I understand you correctly, then the difference is (1) you are using a smaller standard deviation for sampling and (2) you don't perform clipping. However, the standard deviation is merely a hyperparameter of NES and should be tuned for optimal attack efficiency. Second, the clipping is necessary in any real world scenario where you don't have full access to the model but can only query it with images, right?\"}",
"{\"title\": \"NATTACK: A STRONG AND UNIVERSAL GAUSSIAN BLACK-BOX ADVERSARIAL ATTACK\", \"review\": \"Summary: In this paper the authors discuss a black-box method to learn\\nadversarial inputs to DNNs which are \\\"close\\\" to some nominal example\\nbut nevertheless get misclassified. The algorithm essentially tries to\\nlearn the mean of a joint Gaussian distribution over image\\nperturbations so that the perturbed image has high likelihood of being\\nmisclassified. The method takes the form of zero-th order gradient\\nupdates on an objective measuring to what degree the perturbed example\\nis misclassified. The authors test their method against 10 recent DNN\\ndefense mechanisms, which showed higher attack-success rates than\\nother methods. Additionally the authors looked at transferability of\\nthe learned adversarial examples.\", \"feedback\": \"As noted before, this paper shares many similarities with\\n\\n[1] \\\"Black-box Adversarial Attacks with Limited Queries and Information\\\" (https://arxiv.org/abs/1804.08598)\\n\\nand the authors have responded to those similarities in two follow-ups. I have reviewed these results and their \\nmethod does appear to improve over [1]. However, I am still reluctant to admit these additions to the original submission, \\nmainly because dropping [1] in the original submission seems to be a fairly major omission of one of the most relevant competitors out there. In its current form, the apparent redundancies distract significantly from the paper, and to remedy this, the paper would have to change significantly in order to relate it properly to [1]. I'd be curious about the AC's thoughts on this. \\n\\nI appreciate the authors' claim that their method can breach many of the popular defense methods out there, but we \\nalso see that many of the percentages in Figure 1 converge to 1. 
On the one hand this suggests that all defense methods \\nare in some sense equally bad, but on the other, it could also just reflect the fact that the thresholds are chosen \\n\\\"too large\\\". I understand that many of the thresholds were inherited from previous work, but it would nevertheless help if the authors showed some example adversarial images to help baseline this Figure.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Experimental comparison with \\\"Black-box Adversarial Attacks with Limited Queries and Information\\\"\", \"comment\": \"With the open-source code released by Ilyas et al. (2018), we have evaluated their method on attacking three defense methods: SAP and Therm for CIFAR10 and Randomization for ImageNet. The results (success rate vs. number of optimization iterations) are shown in the tables below. We have also tested larger sample size and higher number of iterations for NES, and yet the results remain about the same.\\n\\nIlyas A, Engstrom L, Athalye A, Lin J. Black-box Adversarial Attacks with Limited Queries and Information. arXiv preprint arXiv:1804.08598. 2018 Apr 23.\\n\\nThe inferior attacking results of (Ilyas et al., 2018) verify our conjecture above, i.e., due to the non-differentiable clip and projection operations and the fairly large Gaussian covariance, NES is *not* an efficient (and possibly a biased) estimator of the true gradients of PGD. As a result, NES is not able to approach PGD\\u2019s strong attack performance.\", \"table_1\": \"Success rate on attacking SAP (CIFAR10)\\n# of iterations 30 90 150 210 270 300 360 400\\nOurs 45.13 96.21 99.00 99.54 99.81 100 100 100\\nIlyas et al. (2018)'s 33.36 34.51 36.03 37.36 37.36 37.36 37.36 37.36\", \"table_2\": \"Success rate on attacking Therm (CIFAR10)\\n# of iterations 30 90 150 210 270 300 360 400\\nOurs 67.38 96.38 98.92 99.53 99.74 99.89 100 100\\nIlyas et al. (2018)'s 59.22 83.32 83.82 84.32 85.33 85.33 85.33 85.33\", \"table_3\": \"Success rate on attacking Randomization (ImageNet)\\n# of iterations 30 90 150 210 270 300 360 400\\nOurs 21.54 78.58 90.02 95.41 95.5 95.5 95.5 95.5\\nIlyas et al. (2018)'s 3.33 4.56 6.77 8.5 8.5 8.5 8.5 8.5\"}",
"{\"title\": \"Good evaluation but important prior work was missed which substantially reduces novelty and makes a major rewrite necessary\", \"review\": [\"In this work the authors use a score-based adversarial attack (based on the natural evolution strategy (NES)) to successfully attack a multitude of defended networks, with success rates rivalling the best gradient-based attacks.\", \"As confirmed by the authors in a detailed and very open response to a question of mine, the attack introduced here is actually equivalent to [1]. While the attack itself is not novel (which will require a major revision of the manuscript), the authors point out the following contributions over [1]:\", \"Attack experiments here go way beyond Ilyas et al. in terms of Lp metrics, different defense models, different datasets and transferability.\", \"Different motivation/derivation of NES.\", \"Concept of adversarial distributions.\", \"Regression network for good initialization.\", \"Introduction of accuracy-iterations plots.\"], \"my_main_concerns_are_as_follows\": \"* The review of the prior literature, in particular on score-based and decision-based defences (the latter of which are not even mentioned), is very limited and is framed wrongly. In particular, the statement \\u201cHowever, existing black-box attacks are weaker than their white-box counterparts\\u201d is simply not true: as an example, the most prominent decision-based attack [2] rivals white-box attacks on vanilla DNNs as well as defended networks [3].\\n* The concept of adversarial distributions is not new but is common in the literature of real-world adversarials that are robust to transformations and perturbations (like gaussian noise), check for example [4]. 
In [4] the concept of _Expectation Over Transformation (EOT)_ is introduced, which is basically the generalised concept of the expectation over Gaussian perturbations introduced in this work.\\n* While I like the idea of accuracy-iterations plots, the idea is not new, see e.g. the accuracy-iterations plot in [2] (sample-based, Figure 6), the loss-iterations plot in [5] or the accuracy-distortion plots in [3]. However, I agree that this type of visualisation or metric is not as widespread as it should be.\\n\\nHence, in summary the main contribution of the paper is the application of NES against different defence models, datasets and Lp metrics as well as the use of a regression network for initialisation. Along this second point it would be great if the authors would be able to demonstrate substantial gains in the accuracy-query metric. In any case, in the light of previous literature a major revision of the manuscript will be necessary.\\n\\n[1] Ilyas et al. (2018) \\u201cBlack-box Adversarial Attacks with Limited Queries and Information\\u201d (https://arxiv.org/abs/1804.08598) \\n[2] Brendel et al. (2018) \\u201cDecision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models\\u201d (https://arxiv.org/abs/1712.04248)\\n[3] Schott et al. (2018) \\u201cTowards the first adversarially robust neural network model on MNIST\\u201d (https://arxiv.org/abs/1805.09190)\\n[4] Athalye et al. (2017) \\u201cSynthesizing Robust Adversarial Examples\\u201d (https://arxiv.org/pdf/1707.07397.pdf)\\n[5] Madry et al. (2017) \\u201cTowards Deep Learning Models Resistant to Adversarial Attacks\\u201d (https://arxiv.org/pdf/1706.06083.pdf)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Differences from \\\"Black-box Adversarial Attacks with Limited Queries and Information\\\"\", \"comment\": \"That\\u2019s a great catch. Thank you very much! We should have read the paper before\\u2026 It is intriguing (and yet disappointing for us) to see that a similar approach has been proposed (Ilyas et al. 2018) by also resorting to the natural evolution strategy (NES), but it is not surprising. After all, derivative-free methods, such as NES, REINFORCE, and the zero-th order algorithms, are a natural choice for the blackbox attack.\\n\\nWhile we mainly attack up to 10 recently published defense methods by the proposed approach, Ilyas et al. (2018) focus on attacking a vanilla neural network under the constraints of limited queries and information (e.g., top k entries as opposed to the full output vector). \\n\\nOn the algorithmic aspect, both ours and Ilyas et al. (2018)\\u2019s employ NES as the optimization algorithm. However, we arrive at it via different routes and for different purposes. We assume a probabilistic generation process of the adversarial examples (Steps 1\\u20134, Section 2), which finds an adversarial example by a one-step addition to the input. In contrast, Ilyas et al. (2018)\\u2019s modeling assumption is that an adversarial example can be found by PGD, which iteratively updates the original input with a small learning rate until it becomes adversarial. To this end, we use NES to estimate the parameters of the distribution, while Ilyas et al. (2018) use NES to replace the true (stochastic) gradients in PGD. We contend that, due to the non-differentiable clip and projection operations and the fairly large Gaussian covariance, NES is *not* an efficient (and possibly a biased) estimator of the true gradients \\u2014 we are running experiments to empirically verify if this is true or not. 
\\n\\nIt is a conceptual change from the traditional attack methods (e.g., PGD) to the way of modeling the adversarial examples by a distribution. This change may enable some exciting future work. For instance, we can draw samples from the distribution to characterize the adversarial boundaries, efficiently do adversarial training, etc.\\n\\nAnother notable difference from (Ilyas et al. 2018)\\u2019s is that we train a regression neural network to find a good initialization for NES. Experiments verify the benefit of this regression network. \\n\\nOn the experimental aspect, we attack the recently proposed defense methods following the protocols set up in the original papers. As a result, we experiment with both CIFAR10 and ImageNet, both the $\\ell_2$ and $\\ell_\\infty$ distances, and different types of defenses (e.g., input randomization and discretization, ensembling, denoising, etc.). In contrast, Ilyas et al. (2018) experiment with ImageNet with an $\\ell_\\infty$ distance. In addition, we examine the adversarial examples\\u2019 transferability across different defense methods. Unlike the findings about the transferability across vanilla neural networks, our results indicate several unique characteristics of the transferability of our adversarial examples for the defended neural networks (cf. Section 3.3). Finally, we plot the curves of the attack success rates versus the iteration numbers, a new evaluation scheme which is complementary to the final attack success rates.\"}",
"{\"title\": \"Comparison with state-of-the-art\", \"comment\": \"Could the authors elaborate as to how this attack differs from [1]? As far as I can see this work uses the same gradient estimate with Gaussian bases.\\n\\n[1] \\\"Black-box Adversarial Attacks with Limited Queries and Information\\\" (https://arxiv.org/abs/1804.08598)\"}",
"{\"title\": \"Difference to state of the art\", \"comment\": \"Could the authors elaborate as to how this attack differs from [1]? As far as I can see this work uses the same gradient estimate with Gaussian bases.\\n\\n[1] \\\"Black-box Adversarial Attacks with Limited Queries and Information\\\" (https://arxiv.org/abs/1804.08598)\"}",
"{\"comment\": \"Code released: https://github.com/gaussian-attack/Nattack\", \"title\": \"Code released: https://github.com/gaussian-attack/Nattack\"}"
]
} |
|
Hyxsl2AqKm | ON THE EFFECTIVENESS OF TASK GRANULARITY FOR TRANSFER LEARNING | [
"Farzaneh Mahdisoltani",
"Guillaume Berger",
"Waseem Gharbieh",
"David Fleet",
"Roland Memisevic"
] | We describe a DNN for video classification and captioning, trained end-to-end,
with shared features, to solve tasks at different levels of granularity, exploring the
link between granularity in a source task and the quality of learned features for
transfer learning. For solving the new task domain in transfer learning, we freeze
the trained encoder and fine-tune an MLP on the target domain. We train on the
Something-Something dataset with over 220,000 videos, and multiple levels of
target granularity, including 50 action groups, 174 fine-grained action categories
and captions. Classification and captioning with Something-Something are challenging
because of the subtle differences between actions, applied to thousands
of different object classes, and the diversity of captions penned by crowd actors.
Our model performs better than existing classification baselines for Something-Something,
with impressive fine-grained results. And it yields a strong baseline on
the new Something-Something captioning task. Experiments reveal that training
with more fine-grained tasks tends to produce better features for transfer learning. | [
"Transfer Learning",
"Video Understanding",
"Fine-grained Video Classification",
"Video Captioning",
"Common Sense",
"Something-Something Dataset."
] | https://openreview.net/pdf?id=Hyxsl2AqKm | https://openreview.net/forum?id=Hyxsl2AqKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hyx9kE0gg4",
"rkef2JUcnQ",
"B1xz8jHq2X",
"SklNaNgcnQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770529561,
1541197738109,
1541196618101,
1541174460466
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1105/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1105/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1105/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1105/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents the empirical relation between the task granularity and transfer learning, when applied between video classification and video captioning. The key takeaway message is that more fine-grained tasks support better transfer in the case of classification---captioning transfer on the 20BN-something-something dataset.\", \"pros\": \"The paper presents a new empirical study on transfer learning between video classification and video captioning performed on the recent 20BN-something-something dataset (220,000 videos concentrating on 174 action categories). The paper presents a lot of experimental results, albeit focused primarily on the 20BN dataset.\", \"cons\": \"The investigation presented by this paper on the effect of the task granularity is rather application-specific and empirical. As a result, it is unclear what generalizable knowledge or insights we gain for a broad range of other applications. The methodology used in the paper is relatively standard and not novel. Also, according to the 20BN-something-something leaderboard (https://20bn.com/datasets/something-something), the performance reported in the paper does not seem competitive compared to the current state of the art. There were some clarification questions raised by the reviewers but the authors did not respond.\", \"verdict\": \"Reject. The study presented by the paper is a bit too application-specific with relatively narrow impact for ICLR. Relatively weak novelty and empirical results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Relatively weak novelty and empirical results.\"}",
"{\"title\": \"Nice paper but somewhat limited novelty outside of the specific video classification & captioning domain considered\", \"review\": \"This paper describes a multi-task video classification and captioning model applied to a fine-grained object relationship video dataset, for a range of different classification and captioning tasks at different levels of granularity. This paper also creates a new video action dataset around kitchen objects and actions. Finally, the paper includes an empirical study on both the multi-task performance and transfer learning performance between the two datasets considered.\", \"pros\": [\"This paper is clearly written and includes a thorough and well-laid out empirical component\", \"The contribution to the video action classification and captioning space seems like a worthwhile one\"], \"cons\": [\"The novelty of this paper mainly seems to be with respect to video classification and captioning; other methodological aspects and empirical themes are interesting but fairly standard more generally. The lack of experiments outside of one video action classification & captioning dataset (and one additional one for a transfer learning study) limits the empirical generality of the findings.\"], \"overall_take\": \"This paper's contributions seem of interest to the video classification and captioning community, but less so to a broader or more methodologically-focused one such as ICLR.\", \"notes\": [\"The comments on insufficiency of existing video classification tasks in Sec. 3 are interesting, but seem pretty restricted to that specific domain\", \"The model used is a fairly standard CNN + LSTM video encoder, plus a basic MTL network approach with hard parameter sharing between tasks, as is commonly used today. 
Similarly, the transfer learning approach---pre-training on one task, then freezing layers and fine-tuning---is a standard approach.\", \"The empirical findings are interesting---for example, that training on fine-grained tasks improves coarse-grained accuracy, that MTL training is helpful, etc---but (a) seem in general like known themes, and (b) have limited generality either way beyond the specific types of tasks considered in the dataset examined.\", \"In general, much of the paper is focused on details specific to this application domain, rather than to general methods or themes potentially interesting to the broader ICLR community\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting observation and dataset, but more analysis would be necessary\", \"review\": \"Summary\\nThis paper studied the video classification and video caption generation problems.\\nIn particular, the paper tried a few baseline architectures, with features pretrained on the recently proposed Something-something-V2 dataset and another newly proposed dataset.\\nThis paper argues that fine-grained annotation helps learn good features and enhances transfer learning results.\\n\\nStrength\\nThere are some interesting observations in terms of transfer learning.\\nIn particular, the comparison of fine-grained and coarse-grained datasets for transfer learning (Table 2), and the effect of using captions and the newly collected dataset for transfer learning (Figure 5), are interesting and the results are worth sharing with the community.\\nIn addition, a new dataset that is carefully collected for transfer learning might be useful to make progress on video classification and captioning.\\n\\nWeakness\\nToo many tables with different neural network parameter settings look distracting and do not provide much information. Instead, focusing more on the effect of the dataset for transfer learning and providing more analysis on this aspect would make the main argument of this paper stronger.\\nFor example, the effect of transfer could be studied on different datasets. If transfer learning with the proposed dataset containing fine-grained annotation / captions is useful, it might help boost performance on other video recognition datasets as well.\\nProviding analysis on understanding the effect of the fine-grained / captioning dataset for feature learning might help understanding as well.\\n\\nOverall rating\\nThis paper suggests interesting observations and a useful dataset, but provides relatively little analysis of these observations.
I believe providing more analysis on the dataset and effect of transfer would make the main argument of this paper stronger.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Too many results without proper analysis.\", \"review\": [\"Paper Summary - This paper presents an approach for fine-grained action recognition and video captioning. The authors train a model using both classification and captioning tasks and show that this improves performance on transfer learning tasks. The method is evaluated on the Something-Something v2 dataset as well as a new dataset (proposed in this paper). The authors also evaluate the benefit of using fine-grained action categories vs. coarse-grained action categories on transfer learning.\", \"Paper Strengths\", \"Comparing fine-grained vs. coarse-grained action categories for transfer learning is well motivated. Evaluating just this aspect in the context of video classification is helpful (Section 5.1). Establishing the baseline using linear classifiers for feature transfer makes the feature transfer result more robust. The authors have also done a good job of evaluating their method in the coarse-grained and fine-grained settings (Table 1, 2).\", \"The architectural and experimental design in this paper is well illustrated.\", \"The 20bn kitchen dataset has interesting categories about intention - pretending to use, using, and using & failing.\", \"The ablation in Table 1 is helpful in understanding the contribution of 3D vs. 2D convolutions.\", \"Paper Weaknesses\", \"I believe this paper tries to do too much and as a result fails to show results convincingly. There are too many results and not much focus on analyzing them. In my opinion, the experimental setup in the paper is too weak to fully support the authors' claims.\", \"I now analyze the main contributions of this paper as outlined by the authors in Section 1.\", \"Label granularity and feature quality: To me this is the most interesting part of this paper and most related to its title. However, this is also the most under-analyzed aspect. The only result that the authors show is in Sec 5.1 and Fig 5.
Apart from using the provided fine-grained vs. coarse-grained labels for evaluation, the authors do not perform many experiments in this domain and neither do they analyze these results. For example, the gain using fine-grained labels is not significant in Figure 5 (2Channel - CG vs. 2Channel - FG). The authors do not explain this aspect. Another missing baseline from Figure 5 is \\\"2Channel - Captions & CG actions\\\". This baseline is needed to understand the contribution of FG vs CG actions when also using captioning as additional supervision.\", \"Baselines for captioning: The authors do not provide any details for this task. If the intent is to establish baselines there needs to be more effort on analyzing design decisions - e.g. decoding, layers in LSTM. Captioning metrics such as CIDEr and SPICE are missing.\", \"Captions as a source for transfer learning: This is poorly analyzed in this paper. 1) Can the captions be converted to \\\"tags\\\" and then used for supervision? What is the benefit of producing the full sequential text description over this simple approach? 2) Captions for transfer learning are only analyzed in Figure 5 without much explanation. It is hard to claim that captioning is the reason for performance gains without really analyzing it completely.\", \"20bn-kitchenware dataset - This dataset is explained in just one paragraph in Section 6. What is the motivation behind collecting this dataset as opposed to showing transfer learning on some other dataset?\", \"Missing references\", \"There has been work in understanding the effect of fine-grained categories in ImageNet transfer learning - What makes ImageNet good for transfer learning? Huh et al. What is the insight provided over this work?\", \"Minor comments\", \"Section 1: Figure 4 is referenced in points 1 & 3.
I think you mean Figure 5.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJNceh0cFX | A RECURRENT NEURAL CASCADE-BASED MODEL FOR CONTINUOUS-TIME DIFFUSION PROCESS | [
"Sylvain Lamprier"
] | Many works have been proposed in the literature to capture the dynamics of diffusion in networks. While some of them define graphical markovian models to extract temporal relationships between node infections in networks, others consider diffusion episodes as sequences of infections via recurrent neural models. In this paper we propose a model at the crossroads of these two extremes, which embeds the history of diffusion in infected nodes as hidden continuous states. Depending on the trajectory followed by the content before reaching a given node, the distribution of influence probabilities may vary. However, content trajectories are usually hidden in the data, which induces challenging learning problems. We propose a topological recurrent neural model which exhibits good experimental performances for diffusion modelling and prediction. | [
"Information Diffusion",
"Recurrent Neural Network",
"Black Box Inference"
] | https://openreview.net/pdf?id=SJNceh0cFX | https://openreview.net/forum?id=SJNceh0cFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkxnuDNxlV",
"HJl6vZ-EJV",
"B1xiMN_dAm",
"Skx3-KvOC7",
"B1gDcrvdCm",
"HJgQO5qAnQ",
"SylvLhvph7",
"SkeyBm0VnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544730484116,
1543930212644,
1543173138906,
1543170307594,
1543169422557,
1541479018620,
1541401679171,
1540838199079
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1104/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1104/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1104/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1104/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1104/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1104/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1104/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1104/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduces a recurrent neural network approach for learning diffusion dynamics in networks. The main advantage is that it embeds the history of diffusion and incorporates the structure of independent cascades for diffusion modeling and prediction. This is an important problem, and the proposed approach is novel and provides some empirical improvements.\\nHowever, there is a lack of theoretical analysis, and in particular modeling choices and consequences of these choices should be emphasized more clearly. While there wasn't a consensus, a majority of the reviewers believe the paper is not ready for publication.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea, but paper not ready for publication\"}",
"{\"title\": \"Summary of the revision\", \"comment\": [\"To summarize, the main changes in the revision are the following:\", \"A new evaluation measure on artificial datasets which considers the rate of correct choices of infectors. This highlights the good ability of our method to discover the paths of diffusion;\", \"New experiments on an additional artificial dataset with important hub nodes. In this setting, considering the history of diffusion is crucial to predict the future, which is clearly exhibited in the results (we obtain much better results than CTIC in this setting);\", \"An additional baseline in the experiments which corresponds to a recent method using attention without RNN (but whose results remain below those of our method);\", \"A further derivation of the ELBO, as suggested by reviewer 2, which enables an easier analysis of what is optimized (while remaining equivalent to the former gradient formulation);\", \"A global clarification of the model, with notably an improved presentation of the CTIC model that we build on, a simplification of some notations, and a full proofreading of the paper;\", \"More details about evaluation metrics and the tuning of the baselines;\", \"A fully re-written experimental analysis section;\", \"An additional section in the appendix to discuss the conditioning of the models according to observed starts of episodes.\", \"Thanks again to all reviewers for their very relevant comments that enabled us to greatly improve our paper.\"]}",
"{\"title\": \"Review answer\", \"comment\": \"Thanks for your valuable comments and feedback. We attempted to much improve the clarity of our paper by adding further explanations and correcting typos in many places. New experiments have also been added.\", \"r\": \"\\\"Why are methods \\u2026\\\"\", \"a\": \"Wang2017b is actually used in our experiments. It is CYAN, for which we considered two versions. DeepCas and Wang2017a use a graph of relations as input, which we do not have in our setting. Assuming that the explicit relations of the network are not always available or not representative of the true communication channels of the network, the task is to discover diffusion relationships from scratch. In our task, the compared models do not use any graph as input. The models could have been run with the complete graph, but results would be very poor since no reinforcement mechanisms of relations is designed in these models. Also, similarly to Wang2018, these models do not output the infection time of the infected users, which is required to be fairly compared to the approaches considered in the paper. However, to complete the evaluation, we added experiments with an extension of Wang2018 where we added a time prediction mechanism similar to the temporal head of CYAN.\"}",
"{\"title\": \"Review answer\", \"comment\": \"Thanks for your valuable comments and feedback. We attempted to much improve the clarity of our paper by adding further explanations and correcting typos in many places. New experiments have also been added. Please find below our answers to your specific remarks.\", \"r\": \"\\\"The authors explain how they trained their own model but there is no mention on how they trained benchmark models\\\"\", \"a\": \"You are totally right, it is missing. Baseline models are trained on the same training set as our model following the methods proposed in their original paper. Our model and the baselines were tuned by a grid search process on a validation set for each dataset (whose size is given in the description of the datasets), although the best hyper-parameters obtained for Arti1 remained near optimal for the other ones. For every model with an embedding space (i.e., all except CTIC), we set its dimension to $d=50$ (larger dimensions induce a more difficult convergence of the learning without significant gain in accuracy). We added this explanation in the new version of the paper.\"}",
"{\"title\": \"Review answer\", \"comment\": \"Thanks for your valuable comments and feedback.\", \"r\": \"\\\"The paper can benefit from a proofreading.\\\"\", \"a\": \"Thanks, we indeed corrected several typos like this in the new version of the paper.\"}",
"{\"title\": \"This paper proposes a neural network architecture with locally dense and globally sparse connections. Using dense units a population-based evolutionary algorithm is used to find the sparse connections between modules.\", \"review\": \"The problem that the paper tackles is very important and the approach to tackle it is appealing. The idea of regarding the history as a tree looks very promising. However, it\\u2019s noteworthy that embedding to a vector could be useful too if the embedding space is representative of the entire history and the timing of the events.\\n\\nUsing a neural network is an interesting choice for capturing the influence probability and its timing.\\n\\nThe authors need to be clear about their contribution. Is the paper only about replacing the traditional parametric functions of influence and probability with deep neural networks? \\nThe experimental sections look rather mechanical. I would have put some results on the learned embedding. Or some demonstration of the embedded history or probability to intuitively convey the idea and how it works. This could have made the paper much stronger.\\n\\nIt was nice that the paper iterated and reviewed the possible inference and learning ways. There is one more way. Similar to [1] one can use MCMC with importance sampling on auxiliary variables to infer the hidden diffusion given the observed cascades in the continuous-time independent cascade model.\\n\\nThe paper can benefit from a proofreading. There are a few typos throughout the paper such as:\\nReference is missing in section 2.1\", \"page_2_paragraph_1\": \"\\u201can neural attention mechanism\\u201d\\n\\n[1] Back to the Past: Source Identification in Diffusion Networks from Partially Observed Cascades, AISTATS 2015\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Poor presentation\", \"review\": \"The authors of this paper are proposing a neural network approach for learning diffusion dynamics in networks. The authors argue that the main advantage of their framework is that it incorporates the structure of independent cascades into the model which predicts the diffusion process.\\n\\nThe primary difficulty in reviewing this paper is the poor presentation of the paper. There are many typos and mistakes (e.g., the last paragraph of the paper starts with a sentence that does not make any sense), missing references (e.g., there is an empty parenthesis at the end of the second paragraph on the second page) and in at least two cases, there are references to a formula that is not in the manuscript (e.g., reference to formula 15 on line 3 of page 5). These issues make reviewing this paper very difficult.\\n\\nIn the modeling section, the authors use p(I|D) as q^D(.) in Eq. 12, where p(I|D) is the conditional probability that a particular node infected an observed infected node first. Plugging p(I|D) in Eq. 12 and using the decomposition of p(D, I) used in Eq. 10, we arrive at a formulation which drops all p(I|D) terms. This results in an objective function which only involves infected nodes (and no term associated with the parent node), weighted by the likelihood of each node j infecting the node at step i. This should make the training simpler than what is discussed in Algorithm 2. Beyond this simplification, I am not clear if that is actually intended by the authors.\\n\\nThe experiments demonstrate a superior performance of the proposed model compared to alternative benchmarks. The authors explain how they trained their own model but there is no mention on how they trained benchmark models.
However, given that the datasets used in the experiments were not used in the associated benchmark papers, it is necessary for the authors to explain how they trained competing models.\\n\\nDue to several shortcomings of the paper, the most important of which is its presentation, this manuscript requires a significant revision by the authors to reach the necessary standards for publication; moreover, it would be helpful to clarify the modeling choices and the consequences of these choices more clearly.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Incremental, writing could be much improved\", \"review\": \"The paper proposes a generative infection cascade model based on latent vector representations. The main idea is to use all possible paths of infections in the model.\", \"comments\": [\"The paper's clarity could be much improved. It is not easy to follow, is overflowing with notation, and lengthy. Sec. 2.1 for example can easily be made much more concise. Secs. 3.1 and 3.2 are especially confusing. In the first equation in Sec. 3, what is \\\\phi with and without sub/superscript? In Eq. (2), what is k - a probability, or an index? And what is the formal definition of \\\"infection\\\" and \\\"future\\\" in the description of k stating that it is \\\"the probability that u infects v in the future\\\"?\", \"The authors mention that the actual infectors in a diffusion process are rarely observed. While this might be true, many types of data include infection attempts. This should be worthwhile to model - there are many works on reconstructing cascades from partial data.\", \"The authors note (rightly) that Eq. (9) is hard to solve, and propose a simple lower bound based on (what I think is) a decomposition assumption. Unless I misunderstood, this undermines the contribution of the structure of past infections. Could the authors please clarify?\", \"The results mention 5 (tables?), but only 4 are available, of which one appears floating on the last page.\", \"Why are methods discussed in the introduction (e.g., DeepCas, Wang 2017a,b 2018) not used as baselines?\"], \"minor\": [\"Wang 2017a and Wang 2017b are not the same Wang\", \"Several occurrences of empty parentheses - ()\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyGcghRct7 | Random mesh projectors for inverse problems | [
"Konik Kothari*",
"Sidharth Gupta*",
"Maarten v. de Hoop",
"Ivan Dokmanic"
] | We propose a new learning-based approach to solve ill-posed inverse problems in imaging. We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements. This setting is common in geophysical imaging and remote sensing. We show that in this case the common approach to directly learn the mapping from the measured data to the reconstruction becomes unstable. Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces. We then combine the projections to form a final reconstruction by solving a deconvolution-like problem. We show experimentally that the proposed method is more robust to measurement noise and corruptions not seen during training than a directly learned inverse. | [
"imaging",
"inverse problems",
"subspace projections",
"random Delaunay triangulations",
"CNN",
"geophysics",
"regularization"
] | https://openreview.net/pdf?id=HyGcghRct7 | https://openreview.net/forum?id=HyGcghRct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJeSM7WblV",
"HyxbtBh0kE",
"B1xEMHn0kV",
"B1xxSs2nyE",
"HJx2u7-s14",
"H1gDZDAWkN",
"BkgMpADPCQ",
"Hkgez0wvAQ",
"HJxAoTvwCX",
"HJx7u3wPRm",
"HJe46svvRX",
"rkxR0Zh-0X",
"BkeFsHCiam",
"HygKVSRjTX",
"S1gHWHAjpm",
"HyxbyBRi6m",
"r1lmQNCjTX",
"rklYC70opQ",
"Hke127Csa7",
"SJg0kX0j6m",
"SklYOP5e67",
"BygsLNF62Q",
"Sygyv3zq3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544782605152,
1544631673433,
1544631564207,
1544502072207,
1544389492182,
1543788287311,
1543106233613,
1543106055952,
1543105958269,
1543105643497,
1543105468484,
1542730198427,
1542346145101,
1542346032636,
1542345981368,
1542345945417,
1542345754569,
1542345681080,
1542345639458,
1542345446286,
1541609329034,
1541407826834,
1541184599311
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1103/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1103/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1103/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1103/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1103/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a novel method of solving inverse problems that avoids direct inversion by first reconstructing various piecewise-constant projections of the unknown image (using a different CNN to learn each) and then combining them via optimization to solve the final inversion.\\nTwo of the reviewers requested more intuitions into why this two stage process would fight the inherent ambiguity. \\nAt the end of the discussion, two of the three reviewers are convinced by the derivations and empirical justification of the paper.\\nThe authors also have significantly improved the clarity of the manuscript throughout the discussion period.\\nIt would be interesting to see if there are any connections between such inversion via optimization and deep component analysis methods, e.g. \\u201cDeep Component Analysis via Alternating Direction Neural Networks\\u201d of Murdock et al., that train neural architectures to effectively carry out the second step of optimization, as opposed to learning a feedforward mapping.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"interesting direction for inverse problems\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for taking the time to read through all our responses. We are glad that you like our work.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for taking the time to read through our responses and for the positive assessment. We definitely intend to add the suggested information to the final version. We were perhaps a bit conservative trying to avoid a \\u201csignificant\\u201d change.\"}",
"{\"title\": \"Updated rating following the paper revision.\", \"comment\": \"I'd like to thank the authors for the revised version of the manuscript. I agree with the response that tackling linear inversion is of more general interest than my initial review indicates, and is a good setting to study given the possibility of theoretical analysis. I also agree with the response to the other reviewer's concern that non-linearity is required for the inversion function, and am also more positive about the presentation, as the approach is presented much more clearly in the revised version.\\n\\nI am updating my rating primarily based upon the additional visualizations presented in the response regarding the performance of a simple ensemble method, and qualitative results showing the proposed method does better empirically. However, I do not think these results and a corresponding discussion are currently in the revised manuscript, and the comparison to the simple ensemble method is purely qualitative - I strongly encourage the authors to incorporate these results/discussion in the final version, and also add quantitative comparison to average predictions obtained via an ensemble.\"}",
"{\"title\": \"post rebuttal\", \"comment\": \"I read the reply to my review, the other reviews and the extended discussion on this paper. I am glad the OpenReview system is working well for this paper.\\n\\nMy current vote is to accept this paper, given that there is clearly extensive work put into it and it contains several interesting novel ideas.\"}",
"{\"title\": \"Nonlinearity is (still) required\", \"comment\": \">> \\u201cThank you for your detailed response. Yes, I have read every part of your multi-part responses.\\n\\n> That is certainly true, and something one could use when x lives in a known subspace. But our x does not live in a subspace (let alone a known one).\\n\\nI don't know if we're talking about that same thing. You say you are trying to estimate the projection of x onto a known subspace S, z = P_S x. z is in the subspace S, whether or not x is, and S is known (selected randomly in my strawman model).\\u201d\", \"response\": \"Per above, the model for x is nonlinear, A is underdetermined, which makes the inverse map nonlinear. Even composed with a projection it remains nonlinear.\"}",
"{\"title\": \"Summary of our detailed responses below\", \"comment\": \"TL;DR: Your proposed method is indeed equivalent to a linear oblique projection which we described in Appendix A. The oblique projection into a subspace can become arbitrarily bad when the subspace which we want to project into is not aligned well with the range of A^* (adjoint of A). In this response we explain why this is the case both mathematically and via numerical experiments for which we share the code.\\n\\n(A side remark: we hope that the reviewer was able to read our responses to all their other previous comments in Parts 2, 3 and 4 of our first response. Due to the way OpenReview displays comments, it may have been unclear that those were parts of our response. )\"}",
"{\"title\": \"Nonlinearity is required (Part 1)\", \"comment\": \">> \\u201cThank you for the clarification on how the method works; this cleared up some things. However, it's still not clear to me why this should work. I agree with the other reviewer that the ensemble hypothesis is one potential explanation, but the paper would be strengthened by more depth in this regard.\\u201d\", \"response\": \"Indeed, your proposed \\\\hat z_k is the oblique projection which is denoted P_{S}^{oblique} x in Appendix A and Figure 8. Further, your reconstruction (where we squared the norm):\\n\\n\\\\min_x \\\\sum_k \\\\|P_{S_k} x - \\\\hat z_k \\\\|_2^2\\n\\nis the same as our (2), without the regularizer and constraints. To see this equivalence, assume without loss of generality that the columns of U_k are orthonormal so that P_{S_k} = U_k U_k^T. Since \\\\| . \\\\|_2 is unitarily invariant, left multiplication by orthonormal U_k does not change it so we can write the minimization as \\\\min_x \\\\sum_k \\\\|U_k^T x - \\\\hat v_k \\\\|_2^2. Noting that \\\\hat v_k is our q_\\\\lambda and U_k our B_\\\\lambda, and stacking the terms in the sum, we get the data term in (2).\\n\\nAnd yes, \\\\hat z_k can be arbitrarily bad. Let us try again to explain why this is the case, both mathematically and with numerical examples (see https://tinyurl.com/obliqueandprojnetfigure ). In what follows N(A) will denote the nullspace of matrix A, R(A^*) the range of matrix A^*, where ^* denotes the adjoint (which is the transpose for real matrices). Superscript ^\\\\perp denotes the orthogonal complement.\\n\\nFirst, we note that in underdetermined inverse problems y = Ax + n, the role of any regularizer is to provide information about x in the nullspace of A. The unknown vector x has a component along the nullspace of A and along its orthogonal complement, N(A)^\\\\perp = R(A^*).
The component along the orthogonal complement of N(A) is simply pinv(A)*y which is the orthogonal projection of x into R(A^*). \\n\\nThe only situation where linear methods can provide this nullspace information is when x is constrained to a *known* subspace. In this case the reconstruction is given by the oblique projection U pinv(A U) y (where the columns of U span the subspace) and there is no need for random projections. But this is not useful for us, because our x does not live in any subspace, let alone a known one. It is well known that most interesting signal classes (natural images, biomedical images, seismic images, anything with singularities such as edges) are not efficiently modeled by subspaces. That is why modern methods rely on sparse models, low-rank models, manifold models, and other non-linear models.\"}",
"{\"title\": \"Nonlinearity is required (Part 2)\", \"comment\": \"Let us now analyze your proposed reconstruction. We first look at the formula: \\\\hat v_k = pinv(A U_k)y, which corresponds to the expansion coefficients of the oblique projection in Appendix A. In general, how well the oblique projection \\\\hat z_k = U_k \\\\hat v_k approximates P_{S_k} x depends on the smallest principal angle between the subspaces R(A^*) and S_k. In the interesting case where this angle is close to pi/2 (i.e., where we are getting information about x in R(A^*)^\\\\perp = N(A)), the linear method fails spectacularly because the pseudoinverse explodes (please see further discussion below and numerical experiments).\\n\\n-- If S_k happens to lie completely within the nullspace of A, then the product A U_k is a zero matrix and pinv(A U_k) is also a zero matrix. Thus even if x in general has an arbitrarily large component in S_k, your estimate of this component will be zero.\\n-- A more common case: If N(A) intersects S_k only trivially (only at origin), but the smallest principal angle between the two subspaces is small (i.e., the smallest singular value sigma_min of A U_k is small), then pinv(A U_k) will be very large (in any norm) since 1 / sigma_min is large, and the point (P_S^oblique) x will diverge to infinity. To see this geometrically, imagine that S_\\\\lambda in Figure 8 is being rotated so that the angle between R(A^*) and S_\\\\lambda approaches pi/2. The oblique projection point will travel to infinity because the projection always takes place along the line orthogonal to R(A^*) (along the nullspace of A).\\n\\nA naive proposal to fix this by choosing subspaces so that R(A^*) and S_k are close is not useful because those subspaces give the same information as pinv(A). \\u201cUseful\\u201d subspaces reveal information about x in N(A) and those are precisely the ones that cause trouble. We want to choose the subspaces independently of A.
\\n\\nAnother proposal could be to regularize the pinv by strategies such as Tikhonov regularization, but these methods will not reinstate the nullspace components because x does not live in any subspace, and the overall reconstruction would again be forced to be in a certain subspace, as explained in more detail in what follows.\\n\\nLet us see how this shows up in your suggested minimization \\min x \\sum_k \\|P_{S_k} x - \\hat z_k \\|_2^2 (with squared norm). Any solution to this convex problem satisfies (by setting the gradient to zero):\\n\\n\\sum_k P_{S_k}^* (P_{S_k} \\hat x - \\hat z_k) = 0\\n\\nUsing the fact that P_{S_k} is an orthogonal projection, hence self-adjoint (P_{S_k} = P_{S_k}^*) and idempotent (P_{S_k}^2 = P_{S_k}), and that \\hat z_k already lives in S_k, we can write this as \\sum_k (P_{S_k} \\hat x - \\hat z_k) = 0, or (dividing both sides by the total number of subspaces so that we can think in terms of averages):\\n\\n(1/K) ( \\sum_k P_{S_k} ) \\hat x = (1/K) \\sum_k \\hat z_k \\n\\nFor a large enough number of random subspaces K, the matrix R = (1/K) ( \\sum_k P_{S_k} ) on the left-hand side becomes full rank. Since \\hat z_k = U_k pinv(A U_k) A x (up to noise), the right-hand side can be written\\n\\n(1/K) \\sum_k U_k pinv(A U_k) A x = {[ (1/K) \\sum_k U_k pinv(A U_k) ] A} x.\\n\\nThe row space of the matrix G = [ (1/K) \\sum_k U_k pinv(A U_k) ] A multiplying x on the rhs is the same as the row space of A, so G is low-rank (it is an oblique projection matrix on some subspace). This gives\\n\\n\\hat x = inv(R) G x,\\n\\nwhich can only be a good estimate if x is already in the range of inv(R) G (a subspace). But again, x is not constrained to any particular subspace. 
\\n\\nAny linear reconstruction, no matter how regularized, can only produce results in a fixed subspace with dimension at most the number of rows of A (for any matrix B, rank(BA) is at most the number of rows in A, so its column space is a fixed low-dimensional subspace). The nullspace of A dictates what can and what cannot be recovered. On the other hand, our method can easily provide information in the nullspace because it explores non-linear correlations between the nullspace and range space components of x (via a manifold model).\\n\\nTo empirically support the mathematical fact that oblique projections and linear reconstruction can be arbitrarily bad, we simulate your proposed approach ( https://tinyurl.com/obliqueandprojnetfigure ). The code can be found at https://tinyurl.com/obliqueandprojnetcode . Note that the subspaces we use are the same that we used in the experiments in the manuscript.\"}",
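The divergence of the oblique estimate as the smallest principal angle between R(A^*) and S approaches pi/2 can be reproduced in a few lines. This is a NumPy toy with arbitrary dimensions, not the configuration from the linked experiments; it tilts a subspace toward N(A) and watches the linear estimate of P_S x blow up:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, k = 20, 100, 5
A = rng.standard_normal((M, N))
x = rng.standard_normal(N)
y = A @ x

_, _, Vt = np.linalg.svd(A)
top_right = Vt[:k].T          # k directions inside R(A^*)
null_dirs = Vt[M:M + k].T     # k directions inside N(A)

norms = []
for eps in (1.0, 1e-2, 1e-4):
    # subspace S mostly inside N(A), leaking into R(A^*) by an amount eps
    U = np.linalg.qr(null_dirs + eps * top_right)[0]
    z_hat = U @ (np.linalg.pinv(A @ U) @ y)   # linear estimate of P_S x
    z_true = U @ (U.T @ x)                    # true orthogonal projection
    err = np.linalg.norm(z_hat - z_true)
    norms.append(err)
    print(f"eps={eps:g}  estimation error = {err:.2e}")

# the error grows roughly like 1/eps as the principal angle nears pi/2
assert norms[2] > norms[1] and norms[2] > 10 * norms[0]
```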
"{\"title\": \"Nonlinearity is required (Part 3)\", \"comment\": \">> \\u201cIf I observe only a subset of entries in a vector that lies in a known subspace, under some conditions I can identify the original location in the subspace.\\u201d\", \"response\": \"In the light of the above discussion, we respectfully disagree. Low-rank matrix completion relies on the fact that we can identify the right low-dimensional subspace spanned by rank-1 matrices given sufficient measurements. If this subspace is already known then we agree with the reviewer\\u2019s example, but this is not the gist of low-rank matrix completion: it would only allow reconstructing special low-rank matrices that are linear combinations of some fixed rank-1 matrices.\\n\\nGenerally, in low-rank matrix completion, we do not know the low-rank matrix basis, which makes the problem nonlinear (analogously, we do not know the sparse support in sparse models). Identifying the basis of rank-1 matrices is analogous to support recovery with sparse priors. The algorithms for low-rank matrix recovery are therefore not linear: they use regularizers such as the nuclear norm optimized by nonlinear schemes such as iterative singular value thresholding. \\n\\nFurther, because of the particular structure of the measurement operator A here (\\u201creturn some entries\\u201d), we need conditions on x (the matrix to recover) related to the above example of subspaces with many zero entries. In particular, if the matrix is at once low-rank and sparse, it will be problematic for entrywise observations which will return those zeros with significant positive probability. That is why the guarantees in low-rank matrix completion from a few entries assume that the matrix is not simultaneously sparse and low rank (see, e.g., Section 1.1.1. and the paragraph before Theorem 1.3 in [1] where this is formulated as \\u201cincoherence of row and column spaces with the standard basis\\u201d). 
Clearly, even with many near-zero entries in the matrix the recovery is unstable.\\n\\n[1] Cand\\u00e8s, E.J. and Recht, B., 2009. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6), p.717.\"}",
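The point that nuclear-norm recovery algorithms are nonlinear can be made concrete with singular value thresholding, the proximal step mentioned above. A small generic NumPy sketch (an illustration of the standard operator, not code from the paper):

```python
import numpy as np

def svt(Z, tau):
    """Singular-value soft-thresholding: the nonlinear proximal step at
    the core of nuclear-norm minimization for matrix completion."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# thresholding annihilates matrices with small singular values entirely ...
assert np.allclose(svt(0.1 * np.eye(4), 1.0), 0.0)

# ... and it is not additive, hence not a linear operator
rng = np.random.default_rng(2)
X, Y = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
gap = np.linalg.norm(svt(X + Y, 1.0) - (svt(X, 1.0) + svt(Y, 1.0)))
print(gap)   # generically nonzero
```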
"{\"title\": \"Nonlinearity is required (Part 4)\", \"comment\": \">> \\u201c2. What is wrong with my approach? Is there an example where it would fail spectacularly but your method would work? Why? How does it compare empirically to the proposed approach? In other words, within this general framework, what is the benefit of nonlinear estimates of the z_k's?\\u201d\", \"response\": \"Unfortunately, as we argue in this response (and empirical results provided), the linear method provides little insight into what the non-linear method is doing (beyond pointing to the need for nonlinearity). Since it cannot exploit interesting signal models, everything is dictated by the nullspace of A.\\n\\nWe agree with the reviewer that our approach opens up questions and research opportunities beyond the current manuscript and that various parts merit deeper study. We are excited by this research and intend to write about it in due time. But we also feel that the manuscript proposes a new, useful approach to regularization, that the discussions therein (strengthened by the previous and this round of reviewer\\u2019s comments) motivate the method well and provide mathematical intuitions for why the method does work. The 8-page manuscript and the appendix are tightly packed with problem description, mathematical motivations, intuitions, and numerical examples which show that the method outperforms strong baselines. It now has additional discussions about oblique vs orthogonal projections and the need for nonlinearity, and some additional numerical results in the appendices motivated by the reviewer\\u2019s concerns. Everything will be backed up by reproducible code (it already is but we cannot publish it due to anonymity). It would thus be very challenging to add significant new material to the current draft. We hope that the reviewer finds our explanations and this statement reasonable.\\n\\n>> \\u201c4. What is the role of TV regularization in the final estimation of x? 
I thought that the different subspace projections were providing a form of regularization, so I was surprised that additional regularization was required.\\u201d\\n\\nAs we discuss in the manuscript, the TV-norm regularization is not essential. In fact, for our SubNet (single network that estimates all subspace projections) reconstructions we do not use any regularization, as we already state in the experimental section of the manuscript (Section 4.1.1 Paragraph 3). We make this more explicit by adding a sentence when discussing Equation 2 in Section 3.2. Please note that in the original problem TV regularization did not give workable reconstructions (see Introduction, Figure 1 bottom row and Part 2 of the response to your initial review), so it is an example of how the reformulated inverse problem is better behaved. We use TV regularization for ProjNet reconstructions because we have coefficients for fewer subspaces (130 vs 350) than for SubNet, which makes the problem slightly underdetermined. It is not essential for the method to work and even without it, it outperforms the baseline as evidenced by SubNet reconstructions, but it does point to the possibility of using more sophisticated strategies in Stage 2, as noted by Reviewer 1.\"}",
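The Stage-2 objective discussed here, min_x sum_k ||P_k x - \hat z_k||^2 with an optional TV term, can be sketched in a few lines. This is a toy dense-matrix gradient-descent solver for intuition only, not the authors' implementation; the smoothing constant `eps_s` and the step-size rule are our assumptions:

```python
import numpy as np

def tv_smooth(img, eps_s=1e-6):
    """Smoothed anisotropic total variation of a 2-D image."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return np.sqrt(dx**2 + eps_s).sum() + np.sqrt(dy**2 + eps_s).sum()

def tv_grad(img, eps_s=1e-6):
    """Gradient of the smoothed TV penalty with respect to the image."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    gx = dx / np.sqrt(dx**2 + eps_s)
    gy = dy / np.sqrt(dy**2 + eps_s)
    g = np.zeros_like(img)
    g[:, 1:] += gx
    g[:, :-1] -= gx
    g[1:, :] += gy
    g[:-1, :] -= gy
    return g

def stage2(projs, z_hats, shape, lam=0.0, n_iter=1000):
    """min_x sum_k ||P_k x - z_k||^2 + lam * TV(x), by gradient descent.
    With lam = 0 this is the regularization-free variant."""
    lr = 1.0 / max(1, len(projs))   # safe step for a sum of projectors
    x = np.zeros(int(np.prod(shape)))
    for _ in range(n_iter):
        grad = sum(P @ (P @ x - z) for P, z in zip(projs, z_hats))
        if lam > 0:
            grad += lam * tv_grad(x.reshape(shape)).ravel()
        x -= lr * grad
    return x.reshape(shape)

# toy demo: recover a 4x4 "image" from projections onto 14 random subspaces
rng = np.random.default_rng(0)
x_true = rng.standard_normal(16)
projs = []
for _ in range(14):
    Uk = np.linalg.qr(rng.standard_normal((16, 3)))[0]
    projs.append(Uk @ Uk.T)
z_hats = [P @ x_true for P in projs]
x_rec = stage2(projs, z_hats, (4, 4))
print(np.linalg.norm(x_rec.ravel() - x_true))  # small: recovery succeeds
```

With `lam=0` this reduces to the regularization-free variant; a small positive `lam` adds the optional TV penalty when the projections alone underdetermine x.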
"{\"title\": \"clarification was helpful\", \"comment\": \"Thank you for the clarification on how the method works; this cleared up some things. However, it's still not clear to me why this should work. I agree with the other reviewer that the ensemble hypothesis is one potential explanation, but the paper would be strengthened by more depth in this regard.\\n\\nIt would also help to see some concreteness to some of the explanations. I find Appendix A and Figure 8 difficult to follow. Is \\cal R(A*) the range of the adjoint of A? I can't find this defined anywhere. Likewise, I can't find a concrete definition of P_S^oblique.\\n\\nConsider this comparison point. Let S_k be a random subspace and U_k be a basis spanning it. Then z_k := P_{S_k} x = U_k v_k for some coefficient vector v_k. Thus one estimator of z_k is simply \\hat z_k = U_k \\hat v_k where \\hat v_k = pinv(AU_k)y. From these \\hat z_k's I could then estimate x via\\n\\min x \\sum_k \\|P_{S_k} x - \\hat z_k \\|_F\\n\\nI think the above approach is consistent with the spirit of your method, but based on linear estimators of z_k instead of CNNs. But this raises several questions:\\n\\n1. Is the \\hat z_k above (which I think might correspond to your P_S^oblique) consistent with your appendix A claim that the oblique projection can be arbitrarily bad? I find this difficult to interpret. If I observe only a subset of entries in a vector that lies in a known subspace, under some conditions I can identify the original location in the subspace. This fact is at the heart of low-rank matrix completion and it seems to contradict your claim about how difficult it can be to compute these projections. How do I interpret your claim in this setting? \\n\\n2. What is wrong with my approach? Is there an example where it would fail spectacularly but your method would work? Why? How does it compare empirically to the proposed approach? 
In other words, within this general framework, what is the benefit of nonlinear estimates of the z_k's?\\n\\n3. In my (admittedly possibly suboptimal) linear approach, do we have any insight into the role of the different orthogonal projections and how performance scales with the number of projections? Perhaps this could provide insight into how the nonlinear version works. \\n\\n4. What is the role of TV regularization in the final estimation of x? I thought that the different subspace projections were providing a form of regularization, so I was surprised that additional regularization was required.\"}",
"{\"title\": \"Clarification of our method (Part 4)\", \"comment\": \">> \\u201cThe proposed method isn't bad, and the idea is interesting. But I can't help but wonder whether it works just because what we're doing is denoising the least squares reconstruction, and regression on many random projections might be pretty good for that. Unfortunately, the experiments don't help with developing a deeper understanding.\\u201d\", \"response\": \"As we stress in the manuscript (Paragraph 3 of Introduction and Figure 1) we are precisely addressing the regime where the denoising or artifact removal paradigm fails. In Figure 1, we show that standard methods that would indeed correspond to denoising the least squares reconstruction, such as the TV-regularized least squares or non-negative least squares do not give a reasonable solution to our problem.\\n\\nWe feel the reviewer\\u2019s impression is based on their interpretation that we project x0 into random subspaces, but as we try to emphasize in our response, we are doing something very different. Estimating *orthogonal* projections of x (as opposed to x0) from few measurements cannot be interpreted as denoising, but rather as discovering different stable pieces of information about the conditional distribution of x which is supported on some a priori unknown low-dimensional structure, $\\\\mathcal{X}$, and the part of learning is to discover this structure (or rather, its projections into a set of random subspaces which is a simpler problem). We updated the manuscript to further emphasize this aspect in Section 3.1 and added Appendix A.\\n\\n[1] Jin, K.H., McCann, M.T., Froustey, E. and Unser, M., 2017. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9), pp.4509-4522.\\n[2] Rivenson, Y., Zhang, Y., G\\u00fcnayd\\u0131n, H., Teng, D. and Ozcan, A., 2018. Phase recovery and holographic image reconstruction using deep learning in neural networks. 
Light: Science & Applications, 7(2), p.17141.\\n[3] Sinha, A., Lee, J., Li, S. and Barbastathis, G., 2017. Lensless computational imaging through deep learning. Optica, 4(9), pp.1117-1125.\\n[4] Li, S., Deng, M., Lee, J., Sinha, A. and Barbastathis, G., 2018. Imaging through glass diffusers using densely connected convolutional networks. Optica, 5(7), pp.803-813.\"}",
"{\"title\": \"Clarification of our method (Part 3)\", \"comment\": \">> \\u201cAt the heart of this paper is the idea that for an L-Lipschitz function f : R^k \\u2192 R the sample complexity is O(L^k), so the authors want to use the random projections to essentially reduce L. However, the Cooper sample complexity bound scales with k like k^{1+k/2}, so the focus on the Lipschitz constant seems misguided. This isn't damning, but it seems like the piecewise-constant estimators are a sort of regularizer, and that's where we really get the benefits.\\u201d\", \"response\": \"We agree that it is favorable to train fewer networks. However, we already do propose the SubNet (motivated exactly by this concern) which requires training only a single network (see Section 4.1.1 Paragraph 5), and which performs on par with the collection of ProjNets and better than the baseline. Note that we are using the same number of samples to train the SubNet and the direct baseline, and only half of those samples to train *all* the ProjNets. We now mention the number of samples explicitly in Section 4 under Robustness to Corruption.\\n\\nWe are not quite certain that we understand the comment about equal quality triangulations. The experiments on different datasets showcase that we can train on arbitrary image datasets and obtain comparable reconstructions. We reiterate that our networks are not computing triangulations, only projections into these triangular subspaces. All triangulations are generated at random, independently of the datasets and the networks. \\n\\nThe reviewer\\u2019s idea of regression on 130 randomly-initialized convolutional networks is interesting and a possible avenue for further research. However, each network would approximate the same unstable, high variance map (see, for example, the response to Reviewer 2, and examples https://tinyurl.com/direct-new-seeds ). 
One important aspect of our randomization via random triangulations is that it gives interpretable, local measurements, equivalent to a new forward operator B with favorable properties (see the discussion in Section 3.2 and 3.3). It is not immediately clear how one would interpret the outputs of randomly initialized convolutional networks.\"}",
"{\"title\": \"Clarification of our method (Part 2)\", \"comment\": \">> \\u201cThe learning part of this algorithm is in step 2, where m different convolutional neural networks are used to learn m good projections. The projections correspond to computing a random Delaunay triangulation over the image domain and then computing pixel averages within each triangle. It's not clear exactly what the learning part is doing, i.e. what makes a \\\"good\\\" triangulation, why a CNN might accurately represent one, and what the shortcomings of truly random triangulations might be.\\u201d\", \"response\": \"While we agree with the reviewer that random projections are a known idea, as far as we know and as noted by Reviewer 2, this is the first work that attempts to regress the orthogonal projections of the target signal x into random subspaces. We believe that this contribution sets it apart from previous work, especially because computing these projections from measurements is a truly nonlinear problem unlike the more common fixed linear projections. The reason to regress P_S x instead of x is that it is a more stable task, and a \\u201cclever\\u201d way to achieve randomization while at the same time controlling stability and hardness of learning. The role of the network is to approximate this nonlinear operator that maps y to projections of x, rather than to speed up a simple linear projection of x0.\\n\\nWe also respectfully disagree that much of the inverseness is removed by taking the pseudoinverse. In fact, this is one of our main contributions: we state in several places in the manuscript (for example Paragraph 3 of Introduction), that we work in a highly undersampled regime where the pseudoinverse (or any other simple regularizer for that matter) cannot do a reasonable job and the role of learning cannot be seen as denoising or artifact removal (see for example Figure 1 bottom row). 
This is also illustrated in Section 4 with the non-negative least squares reconstructions shown in Figures 6 and 7.\"}",
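The subspace construction quoted above ("a random Delaunay triangulation over the image domain and then computing pixel averages within each triangle") can be sketched with SciPy. This is an illustrative reimplementation for intuition; the authors' construction may differ in detail (vertex distribution, boundary handling):

```python
import numpy as np
from scipy.spatial import Delaunay

def random_triangle_projection(img, n_points, rng):
    """Orthogonal projection of an image onto the subspace of functions
    that are constant on the triangles of a random Delaunay triangulation
    (i.e., average the pixels within each triangle)."""
    h, w = img.shape
    # random interior vertices plus the four corners, so the triangulation
    # covers the whole image domain
    pts = np.vstack([rng.uniform(0, 1, (n_points, 2)) * [w - 1, h - 1],
                     [[0, 0], [0, h - 1], [w - 1, 0], [w - 1, h - 1]]])
    tri = Delaunay(pts)
    yy, xx = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xx.ravel(), yy.ravel()])
    labels = tri.find_simplex(pix, tol=1e-8)
    flat, out = img.ravel(), img.ravel().copy()
    for t in np.unique(labels):
        if t >= 0:                   # -1 would mean "outside the hull"
            mask = labels == t
            out[mask] = flat[mask].mean()
    return out.reshape(h, w)

img = np.random.default_rng(42).standard_normal((16, 16))
proj1 = random_triangle_projection(img, 12, np.random.default_rng(7))
# projecting twice with the same triangulation changes nothing (idempotency)
proj2 = random_triangle_projection(proj1, 12, np.random.default_rng(7))
assert np.allclose(proj1, proj2)
```

Because the per-triangle average is the orthogonal projection onto piecewise-constant functions, the operation is idempotent and preserves the image mean.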
"{\"title\": \"Clarification of our method (Part 1)\", \"comment\": \"The reviewer summarizes our method as\\n\\n>> \\u201cThis paper describes a novel method for solving inverse problems in imaging. The basic idea of this approach is to use the following steps:\\n1. initialize with nonnegative least squares solution to inverse problem (x0)\\n2. compute m different projections of x0\\n3. estimate x from the m different projections by solving \\\"reformulated\\\" inverse problem using TV regularization.\\u201d\", \"response\": \"We have to respectfully disagree with this summary, especially because it informs the remainder of the reviewer\\u2019s comments. There seems to be a misunderstanding about Step 2 and many later comments appear to stem from it. Since this step is the crux of our proposed method, we begin by summarizing it here, with references to the relevant parts of the manuscript.\\n\\nInstead of computing m different projections of x0 as the reviewer suggests, we regress subspace projections of x, the true image (see Section 3.1.1, Paragraphs 3 and 4). To do so, we must train a nonlinear regressor, in our case a convolutional neural network. (The need for nonlinearity is explained below.) To make this point clearer in the manuscript, we updated Figure 2 to explicitly show that x0 is not fed into linear subspace projectors of itself, but rather used as data from which we estimate projections of x. Indeed, projecting x0 would not be very interesting since it would simply imply various linear ways of looking at x0 and the networks would not be doing any actual inversion or data modeling. \\n\\nAgain, what we actually do is that we compute *orthogonal* projections P_S x from y = Ax (or x0 = pinv(A)y or something similar) into a collection of subspaces {S_\\lambda}_{\\lambda=1}^{\\Lambda} (see Section 3.1.1, Paragraph 3). While projecting x0 is a simple linear operation, regressing projections of an unknown x from the measurement data y is not. 
To explain why we need nonlinear regressors, we added a new figure and a short discussion to the manuscript (please see the new Appendix A). For the reviewer\\u2019s convenience, we summarize the discussion here (although it might be easier to read in the typeset pdf version):\\n\\nSuppose that there exists a linear operator F \\\\in R^{N \\\\times M} which maps y (or pinv(A)y) to P_S x. The simplest requirement on such an F is consistency: if x already lives in the subspace S, then we would like to have F A x = x. Another way to write this is that for any x, not necessarily in S, we require FA FA x = FA x, which implies that FA = (FA)^2 is an idempotent operator. However, because range(F) = S \\\\neq range(A^*), it will in general not hold that (FA)^* = FA. This implies that FA is not an orthogonal projection, but rather an oblique one.\\n\\nAs we show in the new Figure 8 (Appendix A), this oblique projection can be an arbitrarily poor approximation of the actual orthogonal projection that we seek. The nullspace of this projection is precisely N(A) = range^\\\\perp(A^*). Similar conclusions can be drawn for any other (ad hoc) linear operator, which would not even be a projection.\\n\\nThere are various assumptions one can make to guarantee that the map from Ax to P_S x exists. We assume that the models live on a low-dimensional manifold (please see updated Section 3.1; this low-dimensional structure assumption has previously been a footnote), and that the measurements are in general position with respect to this manifold. Our future work involves making quantitative statements about this aspect of the method.\"}",
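The consistency argument above (FA is idempotent but not self-adjoint, hence an oblique projection whose nullspace contains N(A)) can be verified numerically. A small NumPy sketch with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, k = 20, 100, 5
A = rng.standard_normal((M, N))
U = np.linalg.qr(rng.standard_normal((N, k)))[0]   # orthonormal basis of S

F = U @ np.linalg.pinv(A @ U)   # consistent linear estimator of P_S x from y
FA = F @ A

# FA is idempotent (consistency: FA x = x for x already in S) ...
assert np.allclose(FA @ FA, FA, atol=1e-8)
# ... but not self-adjoint, so it is an oblique, not orthogonal, projection
assert not np.allclose(FA, FA.T, atol=1e-8)
# and its nullspace contains all of N(A): those directions are invisible
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]               # a unit vector in N(A)
assert np.allclose(FA @ null_vec, 0.0, atol=1e-8)
```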
"{\"title\": \"Thank you for your insights on extending our work\", \"comment\": \"We are glad that the reviewer enjoyed the paper. Indeed one of the main ideas put forward is the separation into information that can be stably (but nonlinearly) extracted from the measurements in this very ill-posed, no ground truth regime, and information that requires a stronger regularizing idea which kicks in at stage 2. We find it encouraging that the reviewer\\u2019s comments on improving stage 2 are quite similar to our ideas on extending this work (we now mention this in the concluding remarks). Further, we now provide an additional discussion of why the method can work and why nonlinear regressors are necessary in Appendix A and an updated Section 3.1, as an effort to address the comments of other reviewers.\"}",
"{\"title\": \"Explanation for why it works and motivation for \\\"linear\\\" inverse problems (Part 2)\", \"comment\": \">> \\u201cRegarding why it works: While learning a single projection maybe more sample efficient, learning all of them s.t. the obtained x is accurate may not be. Given this, I'm not entirely sure why the proposed approach is supposed to work. One hypothesis is that the different learned CNNs that each predict a piecewise projection are implicitly yielding an ensembling effect, and therefore a more fair baseline to compare would be a 'direct-ensemble' where many different (number = number of projections) direct CNNs (with different seeds etc.) are trained, and their predictions ensembled.\\u201d\", \"response\": \"Recall that we are in a regime where we do not have access to a large ground-truth training dataset and the measurements are very sparse. For this reason, we cannot hope to get a method that reconstructs all the details of x. This is the motivation to split the problem in two stages: in the first stage we only estimate \\u201cstable\\u201d information, by learning a collection of nonlinear, but stable maps from y (or pinv(A)*y, or its non-negative least squares reconstruction) to projections of x. As shown experimentally, this strategy outperforms the baseline which uses the exact same number of measurements and training samples. In fact, all ProjNets are trained using half the number of samples as the baseline (we now make this more explicit in the manuscript).\\n\\nIn the second stage of computing x from the projections, in order to get a very accurate, detailed estimate, one would need to use more training samples, and those samples should correspond to ground truth images which we do not have. Furthermore, as Reviewer 1 suggests, this might involve new and better regularizers. 
\\n\\nWe agree with the reviewer\\u2019s hypothesis that the different learned CNNs are implicitly yielding an ensembling effect\\u2014that is a nice interpretation of the proposed method. However, because the direct inverse map from y to x is highly unstable, we design a randomization mechanism which is better behaved than just training neural networks with different seeds. The instability of the full inverse map y -> x (or x0 -> x) will result in large systematic errors that will not average out. To illustrate this, per reviewer\\u2019s suggestion, we trained ten new direct networks and repeated the erasure experiments (Figures 5b, 12, 13) for the case when p=1/8. If, for example, we consider the image in Figure 5b, we find that 9/10 direct network reconstructions look almost the same as the poor reconstruction shown in the manuscript (see: https://tinyurl.com/direct-new-seeds ), while one reconstruction looks a bit closer to the true x, but still quite wrong (much more so than the reconstructions from the ProjNets). Our randomization scheme operates by providing random, low-dimensional targets that are stable and have low variance so that the resulting estimates are close to their true values and the subsequent ensembling mechanism is deterministic (in the sense that it does not rely on \\u201cnoise\\u201d). We stress again that the total number of training samples used to train all ProjNets, or the single SubNet is the same or smaller than that used to train the direct baseline.\\n\\nMoreover, we point out that we train two different architectures\\u2014one that requires a different network for each subspace (ProjNet) and one that works for any subspace (SubNet). The success of SubNet and the fact that it outperforms the direct baseline suggests that the important idea is indeed that of estimating low-dimensional projections.\\n\\nAnother important aspect of our choice of randomization is that it leads to interpretable, local measurements. 
These correspond to a new, equivalent forward operator B with favorable properties (see Section 3.2, 3.3 and Proposition 1). It would be hard to interpret the output of randomly initialized direct networks in a similar way (for example, it is not clear what we should expect the output distribution to be).\\n\\n[1] Stefanov, P. and Uhlmann, G., 2009. Linearizing non-linear inverse problems and an application to inverse backscattering. Journal of Functional Analysis, 256(9), pp.2842-2866.\"}",
"{\"title\": \"Explanation for why it works and motivation for \\\"linear\\\" inverse problems (Part 1)\", \"comment\": [\">> \\u201cPros:\", \"The proposed approach is interesting and novel - I've not previously seen the idea of predicting different piecewise constant projections instead of directly predicting the desired output (although using random projections has been explored)\", \"The presented results are quantitatively and qualitatively better compared to a direct prediction baseline\", \"The paper is generally well written, and interesting to read\\u201d\"], \"response\": \"While we agree with the reviewer that the central idea is more widely applicable, we wish to emphasize that what the reviewer calls a \\u201cparticular case of linear inversion\\u201d covers a very large variety of practically relevant problems. The list includes super-resolution, deconvolution, computed tomography, inverse scattering, synthetic aperture radar, seismic tomography, radio-interferometric astronomy, and many other problems.\\n\\nImportantly, the fact that the forward problem is linear (which is why the corresponding inverse problems are unfortunately called linear) does not at all imply that the sought inverse map which we are trying to learn (the solution operator) is linear. The inverse map of interest will not be linear for anything but the simplest Tikhonov regularized solution (and variations thereof). For instance, if x is modeled as sparse in a dictionary, the inverse map is nonlinear even though the vast majority of inverse problems regularized by sparsity are \\u201clinear\\u201d. The entire field of compressive sensing is concerned with linear inverse problems. With general manifold models for x, such as the one assumed in the paper, we depart further from linear inverse maps. We now state this more explicitly in Section 3.1 and a new Appendix A. The ability to adapt to such nonlinear prior models is part of the reason why CNNs perform well on related problems. 
Additionally, these nonlinear inverses may be arbitrarily ill-posed, which calls for ever more sophisticated regularizers. In this sense, we are looking at a very large class of hard, practically relevant problems, whose solution operators are nonlinear.\\n\\nWhile nothing prevents practical application of our proposed method to problems such as single-image depth estimation, one benefit of studying linear inverse problems is that as soon as we are in finite dimensions (e.g., a low-dimensional manifold in R^N and a finite number of measurements), and the forward operator is injective, Lipschitz stability is guaranteed (see the added citation [1]). Injectivity can be generically achieved with a sufficient number of measurements that depends only on the manifold dimension.\\n\\nIn applications such as depth estimation from a single image it is less straightforward to obtain similar guarantees. Namely, injectivity fails as one can easily construct cases where the same 2D image corresponds to multiple depth maps. So, while in practice our method might give good results, the justification would require additional work.\"}",
"{\"title\": \"Summary of responses to reviewers\", \"comment\": \"We thank the reviewers for taking the time to read the paper and prepare their comments. All are informative and they made us aware of the parts of presentation that might have been confusing; we hope that our updates make the manuscript clearer.\\n\\nWith some of the comments, though, we have to respectfully disagree. We explain this in the responses to individual reviewers. Here we only summarize a few main points, before addressing the individual reviewers\\u2019 comments in detail.\\n\\n-- In our method we solve a linear inverse problem y = Ax + n which is very ill posed, without having access to ground truth training data. To do so, we train a non-linear regressor (a neural net) which maps y to orthogonal projections of x into random subspaces with an arbitrarily chosen training dataset. To simplify network structure, we precompute x0 which can be an application of a pseudoinverse of A to y, a non-negative least squares solution or some other simple estimator. Importantly, because the measurements are few and the problem is very ill posed, x0 is a very bad estimate of x.\\n\\n-- We do not project x0 into random subspaces as Reviewer 3 suggests\\u2014this is achieved by a simple linear operator and would be of limited interest. We rather compute *orthogonal* projections of x from x0. As we elaborate in the updated manuscript (see Appendix A) and in the response to Reviewer 3, this cannot be achieved by a linear operator and it requires training a nonlinear regressor (in our case, a neural network).\\n\\n-- The term \\u201clinear inverse problems\\u201d only implies that the forward operators are linear. In most interesting applications, the inverse operators are arbitrarily nonlinear. This is the case already with standard sparsity-based methods. In our case, since we do not know where x lives, the nonlinear modeling is achieved by learning. 
Many, if not most practical imaging problems have (approximately) linear forward operators: examples are synthetic aperture radar, seismic tomography, radio-interferometric astronomy, MRI, CT, etc. While certainly many are only approximately linear (or fully nonlinear), linearization techniques are at the core of both practical algorithms and theoretical analysis. The latter is true even for questions of uniqueness and stability as discussed beautifully in [1]. In this sense we are looking at a very important and large class of nonlinear operators to be learned, and we do not see our discussion of linear inverse problems as a harsh limitation. That said, our method could be applied to other problems such as depth sensing, as suggested by Reviewer 2, but the justification would require additional work. For example, the Lipschitz stability (which we have per [1]) would not be guaranteed. The fact that an inverse exists for the imaging tasks we consider is given by injectivity on \\\\mathcal{X}, which is a low-dimensional structure (a manifold) embedded in R^N. In the original manuscript this assumption was in a footnote which is now expanded into a short discussion in Section 3.1. We elaborate this further in the response to Reviewer 2.\\n\\n-- Our method can be interpreted as a randomization or an ensembling method. But unlike strategies such as randomizing the seed when training many neural networks to directly estimate x, which will be hampered by the instability of the problem and the fact that we do not have ground truth data, we use a particular randomization scheme where we randomize the learning target. That way we a) have a clear model for randomization which tells us exactly how to use the individual projection estimates, and b) make each individual member of the problem ensemble stable.\\n\\n[1] Stefanov, P. and Uhlmann, G., 2009. Linearizing non-linear inverse problems and an application to inverse backscattering. 
Journal of Functional Analysis, 256(9), pp.2842-2866.\"}",
"{\"title\": \"Interesting method, but limited demonstrations and unclear reason for working\", \"review\": \"Summary:\\nGiven an inverse problem, we want to infer (x) s.t. Ax = y, but in situations where the number of observations are very sparse, and do not enable direct inversion. The paper tackles scenarios where 'x' is of the form of an image. The proposed approach is a learning based one which trains CNNs to infer x given y (actually an initial least square solution x_init is used instead of y).\\n\\nThe key insight is that instead of training to directly predict x, the paper proposes to predict different piecewise constant projections of x from x_init , with one CNN trained for each projection, each projection space defined from a random delaunay triangulation, with the hope that learning prediction for each projection is more sample efficient. The desired x is then optimized for given the predicted predicted projections.\", \"pros\": [\"The proposed approach is interesting and novel - I've not previously seen the idea of predicting different picewise constant projections instead of directly predicting the desired output (although using random projections has been explored)\", \"The presented results are quantitatively and qualitatively better compared to a direct prediction baseline\", \"The paper is generally well written, and interesting to read\"], \"cons\": \"While the method is interesting, it is apriori unclear why this works, and why this has been only explored in context of linear inverse problems if it really does work.\\n\\n- Regarding limited demonstration: The central idea presented here is is generally applicable to any per-pixel regression task. Given this, I am not sure why this paper only explores it in the particular case of linear inversion and not other general tasks (e.g. depth prediction from a single image). Is there some limitation which would prevent such applications? If yes, a discussion would help. 
If not, it would be convincing to see such applications.\\n\\n- Regarding why it works: While learning a single projection may be more sample efficient, learning all of them s.t. the obtained x is accurate may not be. Given this, I'm not entirely sure why the proposed approach is supposed to work. One hypothesis is that the different learned CNNs that each predict a piecewise projection are implicitly yielding an ensembling effect, and therefore a fairer baseline to compare would be a 'direct-ensemble' where many different (number = number of projections) direct CNNs (with different seeds etc.) are trained, and their predictions ensembled.\\n\\n\\nOverall, while the paper is interesting to read and shows some nice results in a particular domain, it is unclear why the proposed approach should work in general and whether it is simply implicitly similar to an ensemble of predictors.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"novel method for inverse problems\", \"review\": \"This paper proposes a novel method of solving ill-posed inverse problems and specifically focuses on geophysical imaging and remote sensing where high-res samples are rare and expensive.\\nThe motivation is that previous inversion methods are often not stable since the problem is highly under-determined. To alleviate these problems, this paper proposes a novel idea: \\ninstead of fully reconstructing in the original space, the authors create reconstructions in projected spaces. \\nThe projected spaces they use have very low dimensions so the corresponding Lipschitz constant is small. \\nThe specific low-dimensional reconstructions they obtain are piecewise constant images on random Delaunay trinagulations. This is theoretically motivated by classical work (Omohundro'89) and has the further advantage that the low-res reconstructions are interpretable. One can visually see how closely they capture the large shapes of the unknown image. \\n\\nThese low-dimensional reconstructions are subsequently combined in the second stage of the proposed algorithm, to get a high-resolution reconstruction. The important aspect is that the piecewise linear reconstructions are now treated as measurments which however are local in the pixel-space and hence lead to more stable reconstructions. \\n\\nThe problem of reconstruction from these piecewise constant projections is of independent interest. Improving this second stage of their algorithm, the authors would get a better result overall. For example I would recommend using Deep Image prior as an alternative technique of reconstructing a high-res image from multiple piecewise constant images, but this can be future work. \\n\\nOverall I like this paper. It contains a truly novel idea for an architecture in solving inverse problems. 
The two steps can be individually improved but the idea of separation is quite interesting and novel.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Unclear why this should work\", \"review\": \"This paper describes a novel method for solving inverse problems in imaging.\", \"the_basic_idea_of_this_approach_is_use_the_following_steps\": \"1. initialize with nonnegative least squares solution to inverse problem (x0)\\n2. compute m different projections of x0\\n3. estimate x from the m different projections by solving \\\"reformuated\\\" inverse problem using TV regularization.\\n\\nThe learning part of this algorithm is in step 2, where m different convolutional neural networks are used to learn m good projections. The projections correspond to computing a random Delaunay triangulation over the image domain and then computing pixel averages within each triangle. It's not clear exactly what the learning part is doing, i.e. what makes a \\\"good\\\" triangulation, why a CNN might accurately represent one, and what the shortcomings of truly random triangulations might be.\\n\\nMore specifically, for each projection the authors start with a random set of points in the image domain and compute a Delaunay triangulation. They average x0 in each of the Delaunay triangles. Then since the projection is constant on each triangle, the projection into the lower-dimensional space is given by the magnitude of the function over each of the triangular regions. Next they train a convolutional neural network to approximate the above projection. The do this m times. It's not clear why the neural network approximation is necessary or helpful. \\n\\nEmpirically, this method outperforms a straightforward use of a convolutional U-Net to invert the problem.\\n\\nThe core novelty of this paper is the portion that uses a neural network to calculate a projection onto a random Delaunay triangulation. 
The idea of reconstructing images using random projections is not especially new, and much of the \"inverse-ness\" of the problem here is removed by first taking the pseudoinverse of the forward operator and applying it to the observations. Then the core idea at the heart of the paper is to speed up this reconstruction using a neural network by viewing the projection onto the mesh space as a set of special filter banks which can be learned.\", \"at_the_heart_of_this_paper_is_the_idea_that_for_an_l_lipschitz_function_f\": \"R^k \\u2192 R the sample complexity\\nis O(L^k), so the authors want to use the random projections to essentially reduce L. However, the Cooper sample complexity bound scales with k like k^{1+k/2}, so the focus on the Lipschitz constant seems misguided.\\nThis isn't damning, but it seems like the piecewise-constant estimators are a sort of regularizer, and that's where we\\nreally get the benefits.\\n\\nThe authors only compare to another U-Net, and it's not entirely clear how they even trained that U-Net. It'd be nice to see if you get any benefit here from their method relative to other approaches in the literature, or if this is just better than inversion using a U-Net. It would also be nice to see how well a pseudoinverse or TV-regularized least squares does.\\n\\nPractically I'm quite concerned about their method requiring training 130 separate convolutional neural\\nnets. The fact that all the different datasets give equal quality triangulations seems a bit odd, too. Is\\nit possible that any network at all would be okay? Can we just reconstruct the image from regression\\non 130 randomly-initialized convolutional networks? \\n\\nThe proposed method isn't bad, and the idea is interesting. But I can't help but wonder whether it works just because what we're doing is denoising the least squares reconstruction, and regression on many random projections might be pretty good for that. 
Unfortunately, the experiments don't help with developing a deeper understanding.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1z9ehAqYX | Shrinkage-based Bias-Variance Trade-off for Deep Reinforcement Learning | [
"Yihao Feng",
"Hao Liu",
"Jian Peng",
"Qiang Liu"
] | Deep reinforcement learning has achieved remarkable successes in solving various challenging artificial intelligence tasks. A variety of different algorithms have been introduced and improved towards human-level performance. Although technical advances have been developed for each individual algorithm, there has been strong evidence showing that further substantial improvements can be achieved by properly combining multiple approaches with different biases and variances. In this work, we propose to use the James-Stein (JS) shrinkage estimator to combine on-policy policy gradient estimators which have low bias but high variance, with low-variance high-bias gradient estimates such as those constructed based on model-based methods or temporally smoothed averaging of historical gradients. Empirical results show that our simple shrinkage approach is very effective in practice and substantially improves the sample efficiency of the state-of-the-art on-policy methods on various continuous control tasks.
| [
"bias-variance trade-off",
"James-stein estimator",
"reinforcement learning"
] | https://openreview.net/pdf?id=S1z9ehAqYX | https://openreview.net/forum?id=S1z9ehAqYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxrcWIWeN",
"S1xJU7v5A7",
"ryes-mv50X",
"BkeuBMPcCm",
"SylGrdqKC7",
"SklWpVwT2m",
"rylMEf6qn7",
"ryevzrzchm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544802700541,
1543299910970,
1543299842964,
1543299648310,
1543247930127,
1541399737119,
1541227050241,
1541182735067
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1102/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1102/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1102/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1102/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1102/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1102/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1102/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1102/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper introduces the use of J-S shrinkage estimator in policy optimization, which is new and promising. The results also show the potential. That said, reviewers are not fully convinced that in its current stage the paper is ready for publication. The approach taken here is essentially a combination existing techniques. While it is useful, more work is probably needed to strengthen the contribution. A few directions have been suggested by reviewers, including theoretical guarantees and stronger empirical support.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Nice work with potential, but contributions need to be strengthened\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"We would like to thank the reviewer's valuable comments.\\nWe believe our work is the first one to utilize shrinkage estimators to improve deep RL algorithms. We will add more solid empirical experiments and more discussion about both the details and the theoretical properties in the RL settings in the next version.\"}",
"{\"title\": \"Reponses to AnonReviewer1\", \"comment\": \"Thank you for the valuable review and suggestions for improving the paper. Followings are the detail responses to the questions.\\n\\n*--Is 450k steps enough for Mujoco Tasks\\nWe only evaluate the Reacher with 450k steps since the environment is very simple and 450k steps are sufficient to learn a good policy. We use 4500k or more samples for all the other tasks according to their difficulty. \\nWe find the current steps is enough to demonstrate the advantages of our method, but will further add steps to further confirm the results. \\n\\n*--Performance for PPO-MBS\\nIn Figure 1, we demonstrated that our proposed method learns faster and can reduce variance during the training process. It is natural that the variance gap between our proposed method and baseline becomes smaller during the training since $\\\\alpha$ increases on walker2d and the combined gradient estimator is more similar to the unbiased gradient. As for PPO-MBS vs constant $\\\\alpha$, the adaptive $\\\\alpha$ is close to $\\\\alpha=0.5$ during the training process. This also shows that we can adaptively get better performance than fixed coefficient without hyper-tuning. \\n\\n\\n*--Why does $\\\\alpha$ increase on Walker2d and Hopper, but decrease on Swimmer and Reacher? \\n\\nThis might be because that the trajectory lengths of Walker2d and Hopper increase during the training process, which increases the sample complexity gradually; this makes it difficult for the dynamic model to learn the real transition process and hence yields an increasing $\\\\alpha$. This may also indicate that our adaptive strategy can check the performance of the learned dynamic model automatically. \\n\\n*--Comparing and Combining PPO-MBS and PPO-STS\\nWe will add results on comparing and combining PPO-MBS and PPO-STS in the revised version. 
The combination of these two approaches can be done straightforwardly with multiple shrinkage methods, which we will also investigate. \\n\\nFinally, we would like to thank the reviewer for pointing out potential issues and we will improve them in our next version.\"}",
"{\"title\": \"Reponses to AnonReviewer2\", \"comment\": \"We would like to thank reviewer's valuable comments. As for theoretical gurantees, JS estimator was originally motivated for normal distributions for which nice inequalities can be established to guarantee the decrease of MSE. However, it is easily applicable to more general cases and provides a simple yet powerful strategy for addressing the {very challenge of bias-variance trade-off} that is crucial in many components of RL. The goal of this work is to fill this gap between RL and JS literature. Note that the decrease of MSE in our experiments is already a direct evidence of the effectiveness of JS in RL. In the future we will try to use recent extended efficient shrinkage estimators in parametric models (hansen, 2016), which relaxes the normal distribution to general asymptotic distributions.\\n\\nHansen, Bruce E. Efficient shrinkage in parametric models.Journal of Econometrics, 190(1):115\\u2013132, 2016.\"}",
"{\"title\": \"I agree with the concerns raised by the other reviewers\", \"comment\": \"I still think this is a very interesting, novel and relevant idea that desires attention. However, on the same time, I agree with the points raised by the other two reviewers which are all well-motivated and relevant concerns. I am fine with either decision on this paper and I am not willing to champion the paper further.\"}",
"{\"title\": \"Direct Application of a Well Established Statistical Method without Theoretical Gurantees and Very Limited Emprical Support\", \"review\": \"The paper claims that a combination of policy gradients calculate by different RL algorithms would provide better objective values. Main focus of the work is to devise and adaptive combination scheme for policy gradient estimators. Authors claim that by using the statistical shrinkage estimators combining different gradients that have different bias-variance trade-off would provide better mean-square error than each of those individual gradient estimators. The key observations made by the authors are that gradients computed by on-policy methods would provide nearly unbiased estimators with very high variance while the gradients obtained by the off-policy methods in particular model based approaches would provide highly biased estimators with low variance. Proposed statistical tool to combine gradients is James-Steim shrinkage estimator. JS estimator provides strong theoretical guarantees for Gaussian cases but some practical heuristics tor more complex non-Gaussian cases. Authors do not discuss whether the JS estimator actually suitable for this task given the fact that strong assumptions of the underlying statistical approach is violated. They also do not go into any discussion about theoretical guarantees nor they provide any exposures or intuitions about that. The scope of the experiments is very limited. 
Given the fact that there is no theory behind the claims and the lack of strong evidence, I believe this paper does not meet the requirements for publication.\\n\\nTo improve, please add significantly more empirical evidence, provide more discussion about the theoretical groundwork, and discuss the suitability of the JS estimator when its required assumptions are not satisfied.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Promising, yet inconclusive evaluations\", \"review\": [\"This paper presents two algorithms that improve on PPO by using James-Stein (JS) shrinkage estimator. The first algorithm, PPO-MBS, combines low bias of on-policy methods with low variance of model-based RL algorithms. The second, PPO-STS, uses JS to create a statistical momentum and reduce variance of the PPO algorithm. Both algorithms are evaluated on Mujoco environments aiming to show improvements in average cumulative reward, reduced bias and variance.\", \"The paper\\u2019s topic is highly relevant, as the authors point out, the current state of the art in RL is largely divided between model-based and model-free methods. This paper aims at bridging the gap, and taking advantage of both sides. The writing is clear and concise, with all the math properly introduced. The proposed methods are interesting and novel. As far as I am aware, this is solid and unexplored approach with potentially significant impact.\", \"There are two concerns with this paper. First, the evaluation results are promising, yet not fully convincing. It appears that 5 million steps is not sufficient for the policy convergence for any of the tasks (average reward keeps increasing). At the same time the variance gap (Figure 1) is reducing.\", \"As the training continues, does the variance for PPO-MBS become larger?\", \"Similar question for the average reward - the advantage of the PPO-MBS vs. comparison methods seems to be reducing. What happens with the additional training?\", \"How do the trajectories and behavior of the Walker2D (and other) look with PPO-MBS vs. 
others - is the higher average reward indicative of the qualitatively better behavior?\", \"Why do Walker2D and Hopper increase \\\\alpha over time, while Swimmer and Reacher lean more towards mode model based policy over time?\", \"Is there something significant about the structure of the problems?\", \"Similar questions arise from the evaluation of the PPO-STS - it appears that the training is even less complete in this case, rendering the conclusions about the quality of the learned policy at convergence invalid.\", \"Why are Humanoid and Ant not evaluated on PPO-MBS? The authors should extend the training to convergence on all problems, and present the results including movies of example trajectories of all environments for both algorithms in the supplementary material.\", \"Second, the paper introduces two competing methods. While, the authors compare them directly, they do not discuss how the two methods relate to each other. In the Reacher and Hopper task seems like all three methods perform about the same, and in the Hopper PPO-STS even performs worse than the baseline. This makes it difficult to assess the significance of the exposition. When should one be used, and when the other one is better?\", \"The authors should consider either a) splitting the paper into two focusing each paper on a single algorithm with more in-depth evaluations and discussions, or b) combining the two algorithms into ones. In any case, further analysis that illuminates when the methods should be used, and how they improve the training are needed.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well motivated use of James-Stein estimator to RL problems\", \"review\": \"The paper suggest a shrinkage-based estimator (James-Stein estimator) to compute policy gradients in reinforcement learning to reduce the variance by trading some bias. Two versions are suggested: The on-policy gradients is shrinked either towards (i) model based gradient, or towards (ii) a delayed average of previous on-policy gradients. Empirically, both methods have better performance than the baseline.\\n\\nThe paper is clearly written and well motivated. Some details are lacking that would be of interest to the reader and to make the results reproducible. For example how is \\\\hat Q estimated? The trick that is referred to in the end of page about only simulating short horizon trajectories deserves more detail. I would suggest providing more details, in the text and/or in the two algorithms.\\n\\nThe authors claim that JS estimator for gradient estimation in RL has not been used before. I am also not aware of any other work, but have also not been looking after that line of work. The paper seems to be a good contribution to the ever increasing literature of how to improve deep RL.\", \"minors\": \"\\\\hat \\\\theta on RHS in eq (7) should be \\\\bar \\\\theta ? Otherwise, what is \\\\hat \\\\theta?\\nsection 4.2 Ww -> We\\n\\n======= After revision =========\\n\\nI still think this is a very interesting, novel and relevant idea that desires attention. However, on the same time, I agree with the points raised by the other two reviewers which are all well-motivated and relevant concerns. Therefore, I join the view that the paper is not yet ready for publication but I do encourage the authors to improve their work and resubmit to another venue.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1xcx3C5FX | A Statistical Approach to Assessing Neural Network Robustness | [
"Stefan Webb",
"Tom Rainforth",
"Yee Whye Teh",
"M. Pawan Kumar"
] | We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability. | [
"neural network verification",
"multi-level splitting",
"formal verification"
] | https://openreview.net/pdf?id=S1xcx3C5FX | https://openreview.net/forum?id=S1xcx3C5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkxoGgQzlN",
"HJeKazQjC7",
"HJgESA-i0m",
"r1lDreZjAX",
"BylSNxbjAm",
"S1gKOXHc0Q",
"SkxO_sApa7",
"rkgd_qCpTQ",
"S1lNxt0aTQ",
"B1xZ6uCaT7",
"Hke5mvR6p7",
"HkeQCAq6hQ",
"rJxoeNq92X",
"HkxcI2vu37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544855570906,
1543348928918,
1543343675520,
1543340095496,
1543340077294,
1543291760820,
1542478704461,
1542478448446,
1542478060100,
1542478009099,
1542477602343,
1541414602934,
1541215219466,
1541074001655
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1101/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1101/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1101/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1101/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1101/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1101/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": [\"Strengths\"], \"the_paper_addresses_an_important_topic\": \"how to bound the probability that a given \\u201cbad\\u201d event occurs for a neural network under some distribution of inputs. This could be relevant, for instance, in autonomous robotics settings where there is some environment model and we would like to bound the probability of an adverse outcome (e.g. for an autonomous aircraft, the time to crash under a given turbulence model). The desired failure probabilities are often low enough that direct Monte Carlo simulation is too expensive. The present work provides some preliminary but meaningful progress towards better methods of estimating such low-probability events, and provides some evidence that the methods can scale up to larger networks. It is well-written and of high technical quality.\\n\\n* Weaknesses\\n\\nIn the initial submission, one reviewer was concerned that the term \\u201cverification\\u201d was misleading, as the methods had no formal guarantees that the estimated probability was correct. The authors proposed to revise the paper to remove reference to verification in the title and the text, and afterwards all reviewers agreed the work should be accepted. The paper also may slightly overstate the generality of the method. For instance, the claim that this can be used to show that adversarial examples do not exist is probably wrong---adversarial examples often occupy a negligibly small portion of the input space. There was also concern that most comparisons were limited to naive Monte Carlo.\\n\\n* Discussion\\n\\nWhile there was initial disagreement among reviewers, after the discussion all reviewers agree the paper should be accepted. However, we remind the authors to implement the changes promised during the discussion period.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"well-written paper addressing timely question\"}",
"{\"title\": \"Keep my current evaluation\", \"comment\": \"I thank the authors for diligently addressing the reviewers' comments and for revising the paper. My concerns have been sufficiently addressed and I would like to maintain my current positive evaluation of this paper.\"}",
"{\"title\": \"Thanks for the update. Most of my concerns are resolved.\", \"comment\": \"Dear Paper 1101 Authors,\\n\\nThanks for the clarification. The new abstract and title look much better than before, and the updated discussion section will greatly help readers understand the paper without misconstruction. It resolves most of my concerns.\\n\\nI will update the rating of this paper. Make sure to prepare the final revision of the paper based on the updates proposed above.\\n\\nThanks,\\nPaper1101 AnonReviewer2\"}",
"{\"title\": \"Updated discussion section\", \"comment\": \"\\u201cWe have introduced a new measure for the intrinsic robustness of a neural network, and have validated its utility on several datasets from the formal verification and deep learning literatures. Our approach was able to exactly emulate formal verification approaches for satisfiable properties and provide high confidence, accurate predictions for properties which were not. The two key advantages it provides over previous approaches are: a) providing an explicit and intuitive measure for how robust networks are to satisfiable properties; and b) providing improved scaling over classical approaches for identifying unsatisfiable properties.\\n\\t\\t\\t\\t\\t\\nDespite providing a more informative measure of how robust a neural network is, our approach may not be appropriate in all circumstances. In situations where there is an explicit and effective adversary, instead of inputs being generated by chance, we may care more about how far away the single closest counterexample is to the input, rather than the general prevalence of counterexamples. Here our method may fail to find counterexamples because they reside on a subset with probability less than Pmin; the counterexamples may even reside on a subset of the input space with measure zero with respect to the input distribution. On the other hand, there are many practical scenarios, such as those discussed in the introduction, where either it is unrealistic for there to be no counterexamples close to the input, the network (or input space) is too large to realistically permit formal verification, or where potential counterexamples are generated by chance rather than by an adversary. 
We believe that for these scenarios our approach offers significant advantages over formal verification approaches.\\n\\t\\t\\t\\t\\t\\nGoing forward, one way the efficiency of our approach could be improved further is by using a more efficient base MCMC kernel in our AMLS estimator, that is, replacing line 12 in Algorithm 1 with a more efficient base inference scheme. The current MH scheme was chosen on the basis of simplicity and the fact it already gave effective empirical performance. However, using more advanced inference approaches, such as gradient-based approaches like Langevin Monte Carlo (LMC) (Rossky et al., 1978) and Hamiltonian Monte Carlo (Neal, 2011), could provide significant speedups by improving the mixing of the Markov chains, thereby reducing the number of required MCMC transitions.\\u201d\"}",
"{\"title\": \"Updates to ensure that the paper is not misconstrued\", \"comment\": \"Thank you for your follow up comments, praise, and suggestions. We are happy to take on board your constructive criticism and make edits to ensure that the paper is not misconstrued in any way. Unfortunately, the revision period for the paper ended last night so we are not able to update the submission itself, but we detail the changes we have already made locally to the paper in response to your suggestions below. We hope these address your concerns and look forward to hearing your thoughts.\\n\\n1) We have changed the title to \\\"A Statistical Approach to Assessing Neural Network Robustness\\\" and removed any references to \\\"statistical verification\\\" throughout the paper. We have further made small edits throughout to ensure it is crystal clear we are not proposing a new approach for formal verification.\\n\\n2) We have updated the abstract to the below to ensure there is no potential confusion.\\n\\n\\\"We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. 
Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.\\\"\\n\\n3) We wholeheartedly agree that using gradient-based sampling methods in the place of the MH sampler could give noticeable empirical improvements -- this is something we were already looking into as a component of some follow-up work. The MH sampler was chosen on the basis of simplicity and the fact it already gave sufficiently effective empirical performance. We have added a paragraph to the discussion to highlight this point (see the updated discussion at the end of this reply).\\n\\n4) We agree that further discussion on the relative advantages/disadvantages of the approach compared to formal verification, and, in particular, the respective scenarios where each is preferable, would strengthen the paper. To this end, we have added a paragraph to the discussion on this point (see below).\"}",
"{\"title\": \"Thank you for the response. I like this paper but unfortunately find its current representation misleading.\", \"comment\": \"Thanks for clarifying that the paper is on finding \\\"the probability of the event that the property is violated\\\". In that interpretation, the results of this paper make more sense. I also appreciate your effort on adding new experiments on more datasets. However, my major concerns still remain.\\n\\nAdversarial examples are about the worst case scenario, rather than the average case that can be represented by sampling. Many networks are pretty robust to very large Gaussian perturbations, but not robust to very tiny adversarial noise that is crafted using gradient ascent.\\n\\nMore precisely, adversarial examples can live in a subspace which has measure 0. For example, for a certain network with a 10-dimensional input (x_1, ..., x_10), all of its adversarial examples can be found only when x_1 = 0 (we can see x_1 as a \\\"kill switch\\\" of the network; when x_1 =/= 0 it behaves normally, and when x_1 == 0 it behaves badly). There are still infinitely many adversarial examples in the hyperplane of x_1 = 0, and they can be arbitrarily close to normal examples (when x_1 has a small value). But using a sampling based approach, we can find these violations with a probability of 0 (as they have measure 0).\\n\\nThus, even if we cannot find any violations using a sampling based approach, we can hardly argue that the network illustrated above is robust. Especially, adversarial examples may lie in a low dimensional subspace, but the entire sampling space can be very high dimensional, so it is very inefficient to find violations in this way. Using AMLS might alleviate this issue, but cannot completely solve the problem.\\n\\nOn the other hand, one benefit of sampling based methods is that they can possibly scale to larger models/datasets. 
Current formal verification methods for neural networks only work on small networks.\\n\\nHowever, if we have to resort to sampling to find violations rather than using formal verification, there might be better and simpler ways that are worth investigating more in this paper. For example, we can run simple Monte Carlo to sample K points, and run N steps of PGD on each point to find violations. Gradient based methods can possibly have a better chance of finding violations lying in a low dimensional subspace. So using an MC based approach without gradient knowledge (or some form of Hamiltonian) might not be the most efficient way here.\\n\\nIn my opinion, when using sampling based approaches for the application of verification, we should be very careful. For example, (Weng et al. 2018b) used sampling to estimate the robustness of neural networks; it is not even claimed to be a \\\"verification\\\" method, rather just an \\\"estimation\\\" of robustness. Even so, (Goodfellow, 2018) attacked it by showing that this sampling based method may fail silently (which is expected as this method does not have a guarantee), and advocated not using this method.\\n\\nDespite all my concerns above, I actually like this paper because it is well motivated, includes extensive experimental results, and has a nice application of the AMLS method. If the paper were advertised differently (for example, as a method to \\\"estimate\\\" network robustness), I would recommend accepting this paper. But in its current form, I find it misleading because \\\"statistical verification\\\" sounds like a new approach to formal verification, and gives people a strong feeling that it is capable of replacing other formal verification methods. Thus, I had to keep my original rating for now. I encourage the authors to rephrase the paper in a more conservative manner, investigate more on the HMC based sampling approach, and discuss the potential limitations and drawbacks when using sampling. 
This will become a very good paper.\"}",
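The reviewer's suggested baseline — sample K starting points and run N steps of PGD from each — can be sketched as follows. This is our illustrative sketch on a toy differentiable violation score (the property "f(x) <= 0 on an L_inf ball"), not code from any of the cited papers; the toy property and all names are assumptions for illustration.

```python
import numpy as np

def pgd_search(score_grad, x0, eps, n_restarts=10, n_steps=50, step=None, seed=0):
    """Projected gradient ascent on a violation score, restarted from random
    points inside the L_inf ball of radius eps around x0."""
    rng = np.random.default_rng(seed)
    if step is None:
        step = eps / 10
    best_x, best_val = None, -np.inf
    for _ in range(n_restarts):
        x = x0 + rng.uniform(-eps, eps, size=x0.shape)
        for _ in range(n_steps):
            _, g = score_grad(x)
            x = x + step * np.sign(g)            # ascend the violation score
            x = np.clip(x, x0 - eps, x0 + eps)   # project back into the ball
        val, _ = score_grad(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# toy property: f(x) = x[0] + x[1] - 0.15 <= 0 on the ball ||x||_inf <= 0.1;
# the true maximum is f([0.1, 0.1]) = 0.05 > 0, so a violation exists
score_grad = lambda x: (x[0] + x[1] - 0.15, np.ones(2))
x_adv, best_val = pgd_search(score_grad, x0=np.zeros(2), eps=0.1)
```

A positive `best_val` is a found counterexample (a SAT certificate); as the reviewer notes, a negative one proves nothing, in contrast to formal verification.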
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We are glad you found our work interesting and novel, and thank you for your helpful suggestions for improving the writing. We have taken them on board in the revised paper, making a number of edits.\\n\\n1. \\\"In the introduction, \\\"the classical approach\\\" is mentioned but to me the latter is \\ninsufficiently covered. Some more detail would be welcome.\\\"\\n\\nWe have added a reference to what we mean by the classical approach in the related works section.\\n\\n2. \\\"page 2, \\\"predict the probability\\\": rather employ \\\"estimate\\\" in such context?\\\"\\n\\nWe have changed \\u201cpredict\\u201d to \\u201cestimate\\u201d.\\n\\n3. \\\"'linear piecewise': 'piecewise linear'?\\\"\\n\\nThis was a typo and we have corrected this phrase.\\n\\n4. \\\"What is 'an exact upper bound'?\\\"\\n\\nWe mean that it is a true upper bound instead of just being a stochastic estimate of an upper bound (while, on the other hand, Weng et al\\u2019s approach is a stochastic estimate of a lower bound). However, we agree that the \\u201cexact\\u201d is superfluous and have removed it.\\n\\n5. \\\"I am not an expert but to me 'the density of adversarial examples' calls for further \\nexplanation.\\\"\\n\\nWe think perhaps \\u201cthe prevalence of adversarial examples\\u201d would be a better phrase and have corrected this. We mean that the input model density is integrated over for our metric to calculate the volume of counterexamples in a subset of the input domain, relative to the overall volume of that input domain.\\n\\n6. \\\"From page 3 onwards: I was truly confused by the use of [x] throughout the text \\n(e.g. in Equation (4)). x is already present within the indicator, no need to add yet \\nanother instance of it.\\\"\\n\\nIn retrospect, we agree that this was confusing and have removed the [x] notation from the indicator function.\\n\\n7. 
\\\"In related work, no reference to previous work on \\\"statistical\\\" approaches to NN \\nverification. Is it actually the case that this angle has never been explored so far?\\\"\\n\\nAs far as we are aware, this is correct: we have not been able to find any prior work which aims to estimate the statistical prevalence of counterexamples.\\n\\n8. \\\"In page 6, what is meant by 'more perceptually similar to the datapoint'?\\\"\\n\\nWe mean that the minimal adversarial distortion for models on CIFAR-10 is known to typically be much smaller than for MNIST. The result of this is that an adversarial example on MNIST will often have visual salt-and-pepper noise, whereas an adversarial example for CIFAR-10 typically is indistinguishable to the naked eye from its unperturbed datapoint.\\n\\n9. \\\"In the appendix: the MH acronym should better be introduced, as should the notation \\ng(x,|x') if not done elsewhere (in which case a cross-reference would be welcome). \\nBesides this, writing \\\"the last samples\\\" requires disambiguation (using \\\"respective\\\"?).\\\"\\n\\nWe have added to this description so that it is less terse and more carefully introduces the notation, including changing \\u201clast samples\\u201d to \\u201cfinal samples\\u201d and adding in a reference for further reading.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank you for your useful feedback and suggestions for additional experiments, and are glad you found the connection we draw between verification and rare event estimation to be an interesting idea.\\n\\n1. \\\"How does the performance of the proposed method scale wrt the input dimension? It will be useful to do an ablation study, i.e. keep the input model fixed and slowly increase the dimension.\\\"\\n\\nThis is a great question and something we have been looking into. As a first step, we have run a new experiment at a higher scale with the CIFAR-100 dataset and a far larger DenseNet-40/40 architecture as discussed in the response to Reviewer 1. We see our approach still performs very effectively on this larger problem, for which most existing verification approaches would struggle due to memory requirements (see also our new comparisons in Section 6.4). We are now working on doing an ablation study on the dimension of the input x, but it is unlikely we will be finished with this before the end of the rebuttal period due to the fact that it will require a very large number of runs to generate. \\n\\n2. \\\"Did you experiment with other MH proposals beyond a random walk proposal?\\\"\\n\\nThat\\u2019s an excellent idea and a topic for future research. We didn\\u2019t experiment with an MH proposal beyond a random walk because this was the simplest thing to try and it already worked well in practice. As well as different proposals, we have also been thinking about the possibility of instead using a more advanced Langevin Monte Carlo approach to replace the MH, which we expect to mix more quickly as the chains are guided by the gradient information.\\n\\n3. 
\\\"What is the performance of the proposed method against 'universal adversarial examples'?\\\"\\n\\n\\u201cUniversal adversarial examples\\u201d refers to a method for constructing adversarial perturbations that generalize across data points for a given model, often generalizing across models too. Our method does not give a measure of robustness with respect to a particular attack method - it is attack agnostic. It measures in a sense the \\u201cvolume\\u201d of adversarial examples around a given input, and so if this is negligible then the network is robust to any attack for that subset of the input space, whether by a universal adversarial example or another method. All the same, investigating the use of our approach in a more explicitly adversarial example setting presents an interesting opportunity for future work.\\n\\n4. \\\"The most interesting question is whether this method gives reasonable robustness estimates even for large networks such as AlexNet?\\\"\\n\\nThis is an important point to address. As previously mentioned, we have extended the experiment of section 6.3 to use the much larger DenseNet-40/40 architecture on CIFAR-100 and we see that our method still performs admirably. See the updated paper and our response to Reviewer 1 above.\\n\\n5. \\\"Please provide some intuition for this line in Figure 3: 'while the robustness to perturbations of size epsilon=0.3 actually starts to decrease after around 20 epochs.'\\\"\\n\\nThe epsilon used during the training method of Wong and Kolter (ICML 2018) is annealed from 0.01 at epoch 0 to 0.1 at epoch 50. It\\u2019s interesting from Figure 5 that the network is made robust to epsilon = 0.1 and 0.2 by training to be robust using a much smaller epsilon. The network appears to become less robust for epsilon = 0.3 as the training epsilon reaches 0.1. So this is a counterintuitive result that training using a smaller epsilon may be better for overall robustness. 
One hypothesis for this is that the convex outer adversarial polytope is insufficiently tight for larger epsilon. Another hypothesis may be that training with a lower epsilon has a greater effect on the adversarial gradient at an input, as the training happens on a perturbation closer to that input.\\n\\n6. \\\"A number of attack and defense strategies have been proposed in the literature. Isn't it possible to use the proposed method to quantify the increase in the robustness towards an attack model using a particular defense strategy? If it is possible to show that the results of the proposed method match the conclusions from these papers, then this will be an important contribution.\\\"\\n\\nIt is possible to quantify the increase in robustness using a particular defense strategy, as we do in section 6.4 for the robust training method of Wong and Kolter (ICML 2018). We find that our method is in agreement with theirs. To quantify the increase in \\u201crobustness\\u201d with respect to a particular attack method, you can simply record the success of the attack method over samples from the test set as the training proceeds. This will not, however, be a reliable measure of robustness as the network can be trained to be resistant to the attack method in question while not being resistant to attack methods yet-to-be devised (the adversarial \\u201carms race\\u201d). We believe that what we really desire is an attack agnostic robustness measure, such as the method in our work.\"}",
"{\"title\": \"Response to Reviewer #2 (1/2)\", \"comment\": \"We thank the reviewer for their critical appraisal and helpful suggestions.\\n\\n1. \\\"Instead of finding a formal proof for a property that gives a True/False answer, this\\npaper proposes to take a sufficiently large number of samples around the input\\npoint and estimate the probability that a violation can be found. \\\"\\n\\nWe would like to make it clear that our method is less about finding a probability that a violation can be found and more about trying to provide more information than just this true/false answer. In particular, establishing how prevalent violations are, rather than just the usual binary information that a single violation exists. As such, our motivation is not to provide an approximation of classical formal verification methods, but to go beyond them and establish additional important information. Note the important distinction here between \\u201cthe probability of the event that the property is violated\\u201d (as per our abstract), which is to do with the proportion of samples which are violations, compared with \\u201cthe probability that a violation can be found\\u201d, which is to do with the probability that any of the samples are violations.\\n\\n2. \\\"I have doubts on applying the proposed method to higher dimensional inputs. In\\nsection 6.3, the authors show an experiment in this case, but only on a dense\\nReLU network with 2 hidden layers, and it is unknown if it works in general.\\nHow does the number of required samples increase when the dimension of the input\\n(x) increases?\\\"\\n\\nWe agree this is an important point to address, and have extended section 6.3 to include an experiment with a DenseNet-40/40 architecture (with approx. 2 million parameters) for the CIFAR-100 dataset, producing a plot similar to Figure 2. 
The values agree with the naive (unbiased) Monte Carlo estimates where they can be feasibly calculated, similar to the existing results, thereby establishing that the estimates still have low bias for this more difficult problem. Furthermore, the variability in the results was very low (so much that it is not perceptible on the plot), thereby showing that the approach gives very low variance. Together we believe this provides strong evidence that our approach is able to scale to large architectures.\\n\\n3. \\\"Formally, if there exists a violation (counter-example) for a certain property,\\nand given a failure probability p, what is the upper bound on the number of samples\\n(in terms of input dimension, and other factors) required so that the\\nprobability that we cannot detect this violation is less than p?\\nWithout such a guarantee, the proposed method is not very useful because we\\nhave no idea how confident the sampling based result is. Verification needs\\nsomething that is either deterministic, or a probabilistic result with a small\\nand bounded failure rate, otherwise it is not really a verification method.\\\"\\n\\nWe want to stress that we are not claiming to perform formal verification or even an approximation of it. Namely, as alluded to before, we are not predicting a failure probability, but the prevalence of violations. We believe this is an advantage of the method as by relaxing the assumptions of formal verification, we are able to give not just a binary answer as to whether a neural network is robust or not to a property, but a more informative quantitative measure of how robust it is. We show empirically that the bias/variance of our estimate is low in the experimental section. All the same, it is interesting to note that it should be possible in principle to derive the type of bounds that you speak of for UNSAT properties, by using appropriate learning theory techniques (e.g. https://arxiv.org/abs/1810.08240). 
We think that this forms a very interesting direction for future work, but that it is beyond the scope of the current paper.\"}",
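For reference, the naive (unbiased) Monte Carlo baseline mentioned in the response can be sketched in a few lines. The toy input model and property below are our own illustration, not the paper's setup: the true violation proportion is exactly 0.03125, so the estimator can be checked directly.

```python
import numpy as np

def naive_mc_violation_rate(property_holds, sample_input, n=100_000, seed=0):
    """Unbiased Monte Carlo estimate of the proportion of sampled inputs
    that violate the property."""
    rng = np.random.default_rng(seed)
    violations = sum(not property_holds(sample_input(rng)) for _ in range(n))
    return violations / n

# toy input model: uniform on the L_inf ball [-0.1, 0.1]^2;
# property: x[0] + x[1] <= 0.15. The violating region is a triangle of
# area 0.5 * 0.05^2 = 0.00125, so the true rate is 0.00125 / 0.04 = 0.03125.
rate = naive_mc_violation_rate(
    property_holds=lambda x: x[0] + x[1] <= 0.15,
    sample_input=lambda rng: rng.uniform(-0.1, 0.1, size=2),
)
```

At n = 100,000 the standard error here is about 0.0006; for the rare events in the paper (rates of 10^-5 and far below) this estimator needs infeasibly many samples, which is exactly the regime where AMLS is claimed to help.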
"{\"title\": \"Response to Reviewer #2 (2/2)\", \"comment\": \"4. \\\"The experiments of this paper lack comparisons to certified verification\\nmethods. There are some scalable property verification methods that can give a\\nlower bound on the input perturbation (see [1][2][3]). These methods can\\nguarantee that when epsilon is smaller than a threshold, no violations can be\\nfound. On the other hand, adversarial attacks give an upper bound of input\\nperturbation by providing a counter-example (violation). The authors should\\ncompare the sampling based method with these lower and upper bounds. For\\nexample, what is log(I) for epsilon larger than upper bound?\\\"\\n\\nThe three references and the follow-up work that you cite give different methods for obtaining a certificate-of-guarantee that a datapoint is robust in a fixed epsilon l_\\\\infty ball, with varying levels of scalability/generality/ease-of-implementation. For those datapoints where they can produce such a certificate, the minimal adversarial distortion is lower-bounded by that fixed epsilon.\\n\\nThis is important work to be sure, but we view it as predominantly orthogonal to ours, for which we define robustness differently, as the \\u201cvolume\\u201d of adversarial examples rather than the distance to a single adversarial example. We actively argue that the minimal adversarial distortion is not a reliable measure of neural network robustness in many scenarios, as it is dictated by the position of a single violation, and conveys nothing about the amount of violations present.\\n\\nDespite these being two different definitions of robustness, to try and demonstrate some comparisons between the two, we extended experiment 6.4 (already using Wong and Kolter (ICML 2018) [3]) and compared the fraction of samples for which I = P_min to the fraction that could be certified by Wong and Kolter for epsilon in {0.1, 0.2, 0.3}. 
We found that it wasn\\u2019t possible to calculate the certificate of Wong and Kolter for epsilon = 0.2/0.3 for all epochs, or epsilon = 0.1 before a certain epoch, due to its exorbitant memory usage. This significant memory saving thus indicates that our approach may still have advantages when used as a method for approximately doing more classical verification, even though this was not our aim. Please see the updated paper for full details.\\n\\n5. \\\"Additionally, in section 6.4, the results in Figure 2 also do not look very\\npositive - it is unlikely to be true that an undefended network is predominantly\\nrobust to perturbation of size epsilon = 0.1. Without any adversarial training,\\nadversarial examples (or counter-examples for property verification) with L_inf\\ndistortion less than 0.1 (at least on some images) should be possible to find.\\\"\\n\\nYou are correct that without any robustness training it is possible to find adversarial examples with distortion less than 0.1 for some inputs. This is indeed what our results show in Figure 5 in the appendices, illustrating our metric for individual samples. You can see for several samples that were not initially robust to eps=0.1 perturbations (log(I) > log(P_min)), the value of log(I) decreases steadily as the robust training procedure is applied.\\n\\nIt does appear, however, that the network is predominantly robust to perturbations smaller than 0.1 before robustness training. The curves in Figure 3 plot the values of our measure log(I) between the 25th and 75th percentile for a number of samples. This shows that the network is already robust to perturbations of size eps=0.1 for more than about 75% of samples before the training procedure of Kolter and Wong is applied.\\n\\nAll the same, we agree that the original Figure 3 was confusing in this respect, and have rerun this experiment with a lower minimum threshold for log(I) to make the point clearer in the graph. 
With this lower value of log(P_min), we see the 75th percentile of log(I) over the samples quickly decrease as robustness training proceeds for eps=0.2. Notably, however, log(I) is incredibly small before any of this training for eps=0.1, demonstrating how it is important to not only think in terms of whether any violations are present, but also how many: here the proportion of violating samples is less than 10^-100 at eps=0.1 for most of the datapoints.\"}",
"{\"title\": \"To Our Reviewers\", \"comment\": \"We would like to thank our reviewers for taking the time to read and evaluate our work and were glad to receive your detailed feedback, which we believe will improve the paper. To these ends, we have uploaded a revised version of the paper, with two additional experiments and a number of edits to address the concerns raised. In particular, we have added a new experiment with a substantially larger architecture to demonstrate the scaling of the approach, and adapted our final experiment to better demonstrate the behavior of our approach and highlight the links and differences with classical verification approaches.\\n\\nPlease see our replies to each reviewer for our responses to individual points.\"}",
"{\"title\": \"Ok to accept after discussion\", \"review\": \"Verifying the properties of neural networks can be very difficult. Instead of\\nfinding a formal proof for a property that gives a True/False answer, this\\npaper proposes to take a sufficiently large number of samples around the input\\npoint and estimate the probability that a violation can be found. Naive\\nMonte-Carlo (MC) sampling is not effective especially when the dimension is\\nhigh, so the authors propose to use adaptive multi-level splitting (AMLS) as a\\nsampling scheme. This is a good application of the AMLS method.\\n\\nExperiments show that AMLS can make a good estimate (similar quality as naive\\nMC with a large number of samples) while using far fewer samples than MC, on\\nboth small and relatively larger models. Additionally, the authors conduct\\nsensitivity analysis and run the proposed algorithm with many different\\nparameters (M, N, rho, etc.), which is good to see.\\n\\nI have some concerns on this paper:\\n\\nI have doubts on applying the proposed method to higher dimensional inputs. In\\nsection 6.3, the authors show an experiment in this case, but only on a dense\\nReLU network with 2 hidden layers, and it is unknown if it works in general.\\nHow does the number of required samples increase when the dimension of the input\\n(x) increases? \\n\\nFormally, if there exists a violation (counter-example) for a certain property,\\nand given a failure probability p, what is the upper bound on the number of samples\\n(in terms of input dimension, and other factors) required so that the\\nprobability that we cannot detect this violation is less than p?\\nWithout such a guarantee, the proposed method is not very useful because we\\nhave no idea how confident the sampling based result is. 
Verification needs\\nsomething that is either deterministic, or a probabilistic result with a small\\nand bounded failure rate, otherwise it is not really a verification method.\\n\\nThe experiments of this paper lack comparisons to certified verification\\nmethods. There are some scalable property verification methods that can give a\\nlower bound on the input perturbation (see [1][2][3]). These methods can\\nguarantee that when epsilon is smaller than a threshold, no violations can be\\nfound. On the other hand, adversarial attacks give an upper bound on the input\\nperturbation by providing a counter-example (violation). The authors should\\ncompare the sampling based method with these lower and upper bounds. For\\nexample, what is log(I) for epsilon larger than the upper bound?\\n\\nAdditionally, in section 6.4, the results in Figure 2 also do not look very\\npositive - it is unlikely to be true that an undefended network is predominantly\\nrobust to perturbation of size epsilon = 0.1. Without any adversarial training,\\nadversarial examples (or counter-examples for property verification) with L_inf\\ndistortion less than 0.1 (at least on some images) should be possible to find. It\\nis better to conduct strong adversarial attacks after each epoch and see what\\nthe epsilons of the adversarial examples are.\\n\\nIdeas on further improvement:\\n\\nThe proposed method can become more useful if it is not a point-wise method.\\nIf given a point, current formal verification methods can tell if a property\\nholds or not. However, most formal verification methods cannot deal with an input\\ndrawn randomly from a distribution (for example, an unseen test example). This\\nis the place where we really need a probabilistic verification method. 
The\\nsetting in the current paper is not ideal because a probabilistic estimate of\\nviolation of a single point is not very useful, especially without a guarantee\\nof failure rates.\\n\\nFor finding counter-examples for a property, using gradient based methods might\\nbe a better way. The authors can consider adding Hamiltonian Monte Carlo to\\nthis framework (See [4]).\\n\\nReferences:\\n\\nThere are some papers from the same group of authors, and I merged them into one.\\nSome of these papers are very recent, and should be helpful for the authors\\nto further improve their work.\\n\\n[1] \\\"AI2: Safety and Robustness Certification of Neural Networks with Abstract\\nInterpretation\\\", IEEE S&P 2018 by Timon Gehr, Matthew Mirman, Dana\\nDrachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev \\n\\n(see also \\\"Differentiable Abstract Interpretation for Provably Robust Neural\\nNetworks\\\", ICML 2018. by Matthew Mirman, Timon Gehr, Martin Vechev. They also\\nhave a new NIPS 2018 paper \\\"Fast and Effective Robustness Certification\\\" but it is\\nnot on arxiv yet)\\n\\n[2] \\\"Efficient Neural Network Robustness Certification with General Activation\\nFunctions\\\", NIPS 2018. by Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui\\nHsieh, Luca Daniel. \\n\\n(see also \\\"Towards Fast Computation of Certified Robustness for ReLU Networks\\\",\\nICML 2018 by Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh,\\nDuane Boning, Inderjit S. Dhillon, Luca Daniel.)\\n\\n[3] Provable defenses against adversarial examples via the convex outer\\nadversarial polytope, NIPS 2018. by Eric Wong, J. Zico Kolter.\\n\\n(see also \\\"Scaling provable adversarial defenses\\\", NIPS 2018 by the same authors)\\n\\n[4] \\\"Stochastic gradient hamiltonian monte carlo.\\\" ICML 2014. 
by Tianqi Chen,\\nEmily Fox, and Carlos Guestrin.\\n\\n============================================\\n\\nAfter discussions with the authors, they agreed to revise the paper according to our discussions and my primary concerns about this paper have been resolved. Thus I increased my rating.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea for quantitatively estimating the robustness of a network. Would like to see more comprehensive large-scale experiments.\", \"review\": [\"Given a network and input model for generating adversarial examples, this paper presents an idea to quantitatively evaluate the robustness of the network to these adversarial perturbations. Although the idea is interesting, I would like to see more experimental results showing the scalability of the proposed method and for evaluating defense strategies against different types of adversarial attacks. Detailed review below:\", \"How does the performance of the proposed method scale wrt the input dimension? It will be useful to do an ablation study, i.e. keep the input model fixed and slowly increase the dimension.\", \"Did you experiment with other MH proposals beyond a random walk proposal? Is it possible to measure the diversity of the samples using techniques such as the effective sample size (ESS) from the SMC literature?\", \"What is the performance of the proposed method against \\\"universal adversarial examples\\\"?\", \"The most interesting question is whether this method gives reasonable robustness estimates even for large networks such as AlexNet?\", \"Please provide some intuition for this line in Figure 3: \\\"while the robustness to perturbations of size epsilon = 0.3 actually starts to decrease after around 20 epochs.\\\"\", \"A number of attack and defense strategies have been proposed in the literature. Isn't it possible to use the proposed method to quantify the increase in the robustness towards an attack model using a particular defense strategy? If it is possible to show that the results of the proposed method match the conclusions from these papers, then this will be an important contribution.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very interesting paper with a nice methodological transfer between rare event estimation and NN verification\", \"review\": \"This is a paper on the verification of neural networks, i.e. checking their robustness,\\nand the main contribution here is to tackle it as a statistical problem addressed with a \\nmulti-level splitting Monte Carlo approach. I found the paper well motivated and original, \\nresulting in a publishable piece of research up to a few necessary adjustments. These \\nconcern principally notation issues and some potential improvements in the writing. \\nLet me list below some main remarks along the text, including also some typos. \\n\\n* In the introduction, \\\"the classical approach\\\" is mentioned but to me the latter is \\ninsufficiently covered. Some more detail would be welcome. \\n\\n* page 2, \\\"predict the probability\\\": rather employ \\\"estimate\\\" in such context? \\n\\n* \\\"linear piecewise\\\": \\\"piecewise linear\\\"? \\n\\n* what is \\\"an exact upper bound\\\"? \\n\\n* In related work, no reference to previous work on \\\"statistical\\\" approaches to NN \\nverification. Is it actually the case that this angle has never been explored so far?\\n\\n* I am not an expert but to me \\\"the density of adversarial examples\\\" calls for further \\nexplanation. \\n\\n* From page 3 onwards: I was truly confused by the use of [x] throughout the text \\n(e.g. in Equation (4)). x is already present within the indicator, no need to add yet \\nanother instance of it. Here and later I suffered from what seems like an awkward \\nattempt to stress dependency on variables that already appear or should otherwise \\nappear in a less convoluted way. \\n\\n* In Section 4, it took me some time to understand that the considered metrics do not \\nrequire actual observations but rather concern coherence properties of the NN per se. \\nWhile this follows from the current framework, the paper might benefit from some more \\nexplanation in words regarding this important aspect. \\n\\n* On page 6, what is meant by \\\"more perceptually similar to the datapoint\\\"? \\n\\n* In the discussion: is it really \\\"a new measure\\\" that is introduced here? \\n\\n* In the appendix: the MH acronym should better be introduced, as should the notation \\ng(x|x') if not done elsewhere (in which case a cross-reference would be welcome). \\nBesides this, writing \\\"the last samples\\\" requires disambiguation (using \\\"respective\\\"?).\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Hye9lnCct7 | Learning Actionable Representations with Goal Conditioned Policies | [
"Dibya Ghosh",
"Abhishek Gupta",
"Sergey Levine"
] | Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all the underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making -- that are "actionable". These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, eliminating the need for explicit reconstruction. We show how these learned representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning. | [
"Representation Learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=Hye9lnCct7 | https://openreview.net/forum?id=Hye9lnCct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1loIzEmxE",
"r1lwxQpc1N",
"S1lyiF78AX",
"HJeOi_mLRQ",
"H1ghTFkVR7",
"HkexM-C7RQ",
"Bke-uQ1WR7",
"rylLCj0e0Q",
"S1gtlIA6TQ",
"r1x_0rCT6m",
"HJlHiBC6pm",
"B1gKDS0TpQ",
"HkxTyBRTaX",
"ByeyV10-TX",
"r1gAKUmA3Q",
"r1lO0VLPhm",
"SyewvpG7sX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1544925779399,
1544372974858,
1543022998524,
1543022752050,
1542875588125,
1542869256179,
1542677352979,
1542675405969,
1542477297166,
1542477264422,
1542477212656,
1542477152953,
1542477028955,
1541689126930,
1541449350228,
1541002447708,
1539677534952
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1099/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1099/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1099/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1099/AnonReviewer1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1099/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1099/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"To borrow the succinct summary from R1, \\\"the paper suggests a method for generating representations that are linked to goals in reinforcement learning. More precisely, it wishes to learn a representation so that two states are similar if the\\npolicies leading to them are similar.\\\" The reviewers and AC agree that this is a novel and worthy idea.\\n\\nConcerns about the paper are primarily about the following.\\n(i) the method already requires good solutions as input, i.e., in the form of goal-conditioned policies (GCPs),\\nand the paper claims that these are easy to learn in any case.\\nAs R3 notes, this then begs the question as to why the actionable representations are needed.\\n(ii) reviewers had questions regarding the evaluations, i.e., fairness of baselines, additional comparisons, and \\nadditional detail. \\n\\nAfter much discussion, there is now a fair degree of consensus. While R1 (the low score) still has a remaining issue with evaluation, particularly hyperparameter evaluation, they are also ok with acceptance. The AC is of the opinion that hyperparameter tuning is of course an important issue, but does not see it as the key issue for this particular paper. \\nThe AC is of the opinion that the key issue is issue (i), raised by R3. In the discussion, the authors reconcile the inherent contradiction in (i) based on the need for additional downstream tasks that can then benefit from the actionable representation, as demonstrated in a number of the evaluation examples (at least in the revised version). The AC believes in this logic, but believes that this should be stated more clearly in the final paper. It should also be explained\\nto what extent training for auxiliary tasks implicitly solves this problem in any case.\\n\\nThe AC also suggests nominating R3 for a best-reviewer award.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"good idea; general consensus\"}",
"{\"title\": \"Extent of concerns addressed\", \"comment\": \"The authors have addressed my main concerns, though there is still the problem (quite general in deep RL papers) of the experimental validation not being truly unbiased. There is no attempt to make a train/test separation of environments, for example. I wouldn't mind it being accepted, but it does propagate a general trend, which I would like to see diminished.\\n\\nAt the very least, the authors should emphasize that their experimental results are only exploratory and should not be taken to represent a clear benefit of one method over another, even though the direction itself seems promising. The paper also doesn't address the practicality of using this approach in unknown environments. An unbiased analysis where the hyperparameters (including perhaps possible goal-directed policies) are obtained from different simulated environments to the ones the approach is tested on would have been really nice in this regard.\"}",
"{\"title\": \"Response to Reviewer 3: Regarding the Goal-Conditioned Policy\", \"comment\": \"Thank you for your response and helpful suggestions! The question you raise about the necessity of something beyond a goal-conditioned policy is a valuable one, and we answer it below. We have updated the discussion in Section 4 to reflect the same. We have also added an additional comparison to directly using goal conditioned policies in Section 6.6.\\n\\nAlthough GCPs trained to reach one state from another are feasible to learn, they possess some fundamental limitations (added discussion in Section 4): they do not generalize very well to new states, and they are limited to solving tasks expressible as just reaching a particular goal. The unifying explanation for why ARCs are useful over just a GCP is that the learned representation generalizes better than the GCP - to new tasks and to new regions of the environment. In our experimental evaluation, we show that ARCs can help solve tasks that cannot be expressed as goal reaching (Section 6.6, 6.7) and they enable learning policies on larger regions to which GCPs do not generalize (Section 6.5).\\n\\nAs the GCP is trained with a sparse reaching reward, it is unaware of possible reward structures in the environment, making it hard to adapt to tasks which are not simple goal reaching, such as the \\u201creach-while-avoid\\u201d task in Section 6.6. For this task, following the GCP directly would cause the ant to walk through the red region and incur a large negative reward; a comparison we now explicitly add to Section 6.6. Tasks which cannot be expressed as simply reaching a goal are abundant in real life scenarios such as navigation with preferences or manipulation with costs on quality of motion, and fast learning on such tasks (as ARC does) is quite beneficial. We have explicitly emphasized this discussion in Section 4.1, and made the limitations of simple goal reaching clear at the start of Section 4. 
\\n\\nIn the tasks for Section 6.5, the GCP trained on the 2m region (in green) does not achieve high performance on the larger region of 8m, even when finetuned on the environment using the provided reward (Fig 8). However, shaping the reward function using ARCs enables learning beyond the the original GCP, showing that ARCs generalize better to this new region, and potentially can lead to learning progressively harder GCPs via bootstrapping. \\n\\nWe agree that the discussion would greatly benefit from an introductory paragraph putting things into context, we have added this discussion at the beginning of Section 4. Please let us know if this resolves the issues you brought up. If not, we\\u2019re happy to address any other concerns you might have.\"}",
"{\"title\": \"Author Response: Adding Clarifications on Hyperparameters\", \"comment\": \"Thank you for your response! We are not sure we fully understand your concern about hyperparameter tuning, and were hoping for some additional clarifications regarding this. We have added additional details to the paper regarding hyperparameter tuning in Appendix D. We do not have many hyperparameters to tune for ARCs - the only free parameter is the size of the latent dimension, and for the downstream tasks, we tune the weight of the shaping term for reward shaping and the number of clusters for HRL for each comparison method on each task.\\n\\nThe size of the latent dimension is selected by performing a sweep on the downstream reward-shaping task for each domain and method. For the reward-shaping task, for each domain and comparison method, the parameter controlling the relative scaling of the shaped reward is selected according to a coarse hyperparameter sweep. The number of clusters for k-means for the hierarchy experiments is similarly selected for each domain and comparison method, although we found that all tasks and methods worked well with the same number of clusters. As you note, this is standard in deep reinforcement learning research, and we are simply following standard practice. Importantly, we give all methods a fair chance by tuning each comparison method separately. While we could certainly adjust hyperparameters differently, we did not find overall that hyperparameters were a major issue for our method. We would appreciate if you could clarify whether you are concerned about this issue in particular, and what a reasonably fair alternative might be?\\n\\nIf you believe that the issues in the paper have been addressed, we would appreciate it if you would revise your original review, or else point out what remaining issues you see with the paper or experimental evaluation.\"}",
"{\"title\": \"Serious improvement, but some points are remaining\", \"comment\": \"The authors have done a large effort in addressing a lot of our concerns, particularly regarding experimental details, how to learn goal conditioned policies (GCPs), and the related work section has been improved. The paper is now better and I will increase my score accordingly when appropriate.\\n\\nHowever, the fact that GCPs need to be learned in advance before the ARC representation can be learned still raises a major concern that needs further clarification, probably with some impact on the introduction and the positioning of the paper.\", \"the_question_is_the_following\": \"if GCPs are learned and authors considers it is rather \\\"easy\\\" to do so, why do we need something more? If you take the title of Sections 4.1 and 6.6, why \\\"leveraging actionable representation as feature for learning policies\\\" if you already learned policies ? The last paragraph suggests that policies learned in ARC space will generalize \\\"beyond GCPs\\\". Since GCPs limitations have not been made clear, this point is still vague.\\n\\nIn Section 4, the authors suggest three other valuable answers to this question: reward shaping, doing HRL, or clustering in ARC space (by the way, the latter could be used to help the former). 
My feeling is that treating those three points in addition to the one above is somewhat dispersive; the paper is trying to make too many points, and at least a unifying perspective is missing.\\n\\nTo me, the paper lacks, between Section 4 and Section 4.1, an introductory text which should contain the last paragraph of Section 3 and would motivate the work more clearly with respect to the above issue.\\n\\nIf the paper is not finally accepted at ICLR, I would suggest the authors reconsider their positioning with respect to the perspective above and put forward a clearer message about what actionable representations really bring in a context where you already have \\\"good enough\\\" GCPs.\\n\\nFinally, the perspective mentioned in the discussion of interleaving ARC learning and GCP learning would of course change the picture about the above issue; I appreciate that the authors kept that for their last sentence.\"}",
"{\"title\": \"Some clarifications about the experiment design\", \"comment\": \"This looks much better in terms of the details. I think that there's a minor weakness remaining, quite common in many deep RL papers: it seems that you are tuning the hyperparameters of the algorithms in the same environments in which you are testing them (you do not specify exactly how the tuning is done). While this is OK for preliminary results, it does have a biasing effect when trying to compare different methods.\\n\\nSo, for the moment this appears to be weak evidence in favour of this representation, but it is not entirely convincing.\"}",
"{\"title\": \"Author Response: Added Clarifications about Fair Comparisons\", \"comment\": \"Thank you for your interest in our paper and for your insightful comments.\\n\\nYou are correct that the ARC representation requires us to assume that we can train a goal-conditioned policy in the first place. For our experiments, the GCP was trained with TRPO using a sparse reward (see Section 6.2 and Appendix A.1) -- obtaining such a policy is not especially difficult, and existing methods are quite capable of doing so [1,2,3]. We therefore believe that this assumption is reasonable. \\n\\nTo ensure the comparisons are fair, every representation learning method that we compare to is trained using the same data (Section 6.2, 6.3). All representations are trained on a dataset of trajectories collected from the goal-conditioned policy, and we have updated the paper with full details of the training scheme (Section 6.3, Appendix A.2, B).\\n\\nWe also ensure that our experiments fairly account for the data required to train the GCP.\\n- In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail -- a new comparison we have added to Figure 7. \\n- In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller, so ARC incurs no additional sample cost. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even with substantially more samples.\\n- In the experiment for learning non goal-reaching tasks (Section 6.6), the ARC representation can be re-used across many different tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of 100 tasks to demonstrate this amortization, and will update the paper with results. \\n\\n[1] Nair, Pong, Dalal, Bahl, Lin, and Levine. Visual reinforcement learning with imagined goals. NIPS 2018\\n[2] Pong, Gu, Dalal, and Levine. Temporal difference models: Model-free deep RL for model-based control. ICLR 2018\\n[3] Andrychowicz, Wolski, Ray, Schneider, Fong, Welinder, McGrew, Tobin, Abbeel, and Zaremba. Hindsight experience replay. NIPS 2017\"}",
"{\"title\": \"detailed author responses provided; revised version posted; reviewers: please advise further\", \"comment\": \"Thanks to all for the detailed reviews and review responses.\", \"i_could_summarize_the_reviews_as\": \"interesting ideas; needs evaluations that take into account original construction of the goal-directed policies; more details. The authors have provided detailed responses.\\nA revised version is available; see the \\\"show revisions\\\" link, for either the revised PDF, or a comparison that highlights the revisions (I can recommend this).\\n\\nReviewers (and anonymous commenter), your further thoughts would be most appreciated.\\n-- area chair\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your insightful comments and suggestions! We have made many changes based on the comments provided by reviewers, which are summarized below. We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to revise their score or request additional changes that would alleviate their concerns.\", \"new_comparisons\": \"We have added two more comparisons as suggested - with model based RL methods ([5] Nagabandi et al) and learning representations via inverse dynamics models ([4] Burda et al). These have been described in Section 6.3 and added to plots in Fig 7, 8, 10. We have also added a new comparison to learning from scratch for the reward shaping experiment (Section 6.5, Fig 7).\", \"lack_of_details\": \"We apologize for the lack of clarity in the submission! We have updated the main text and added an appendix with additional details of the ARC representation and the experimental setup: how a goal-conditioned policy is trained (Sec 6.2, Appendix A.1), how the ARC representation is learned (Sec 6.2, Appendix A.2) , and how the methods are evaluated on downstream applications (Sec 6.5-7, Appendix A.3-6). We increased analysis of the performance of ARC and comparison methods for all the downstream applications (Sec 6.5-6.7), and added a discussion of how all methods are trained (Sec 6.3, Appendix A.2, B)\", \"requirement_for_goal_conditioned_policy\": [\"The ARC representation is extracted from a goal-conditioned policy (GCP), requiring us to assume that we can train such a GCP. This assumption was explicit in our submission, but we have emphasized it more now by editing Section 1 and Section 3. 
For our experiments, the GCP was trained with existing RL methods using a sparse task-agnostic reward (Section 6.2, Appendix A.1) -- obtaining such a policy is not especially difficult, and existing methods are quite capable of doing so [1,2,3]. We therefore believe that this assumption is reasonable. We also ensure that our experiments fairly account for the data required to train the GCP.\", \"In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7.\", \"In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even when provided with substantially more samples.\", \"In learning non goal-reaching tasks (Section 6.6), ARC representation can be re-used across many tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of tasks to demonstrate this, and will update the paper.\"]}",
"{\"title\": \"Response to Reviewer 3 (Continued)\", \"comment\": \"Find responses to particular comments below:\", \"related_work\": \"-> We cite and discuss all the papers mentioned in the related work section (Section 5). We additionally added comparisons (Fig 7, 8, 10) to using inverse dynamics models and model-based RL methods, as discussed above. \\n\\n\\u201cShouldn't finally D_{act} be a distance between goals rather than between states?\\u201d\\n> D_{act} is indeed the actionable distance between goals, but given that the goal and the state space are the same, the learned representation can be effectively used as a state representation, as seen in Section 6.6.\\n\\n\\u201cin Fig. 5, in the four room environment, ARC gets 4 separated clusters. How can the system know that transitions between these clusters are possible?\\u201d\\n-> We have added a discussion in Section 6.6 to clarify this. We use model-free RL to train the high-level policy, which directly outputs clusters as described in Section 4.4. This high-level policy does not need to explicitly model the transitions between clusters; that is handled by the low-level goal-reaching policy, and the high-level policy is trained model-free. \\n\\n\\u201cIndeed, the end-effector must be correctly positioned so that the block can move. Does ARC capture this important constraint?\\u201d\\n-> ARC does not completely ignore the end-effector position; this is evidenced by the fact that the blue region in Fig 6 is not a point but an entire area. What ARC captures is that moving the block induces a greater difference in actions than moving the arm. Moving the block to different positions requires the arm to move to touch the block and push it to the goal, while moving the arm to different positions can be done by directly moving it to the desired position. While both things are captured, the block is emphasized over the end-effector.\\n\\n\\u201cIn Section 6.4, Fig.7 a, ARC happens to do better than the oracle. why?\\u201d\\n-> The oracle comparison is a hand-specified reward shaping - we have updated Section 6.5 and Figure 7 to make this point clear. It is likely that the ARC representation is able to find an even better reward shaping, although the difference is fairly small. \\n\\n\\u201cfrom Fig.7, VIME is not among the best performing methods. Why insist on this one?\\u201d\\n-> We intended to emphasize that ARC is able to outperform a method that is purely designed for better exploration, not just other methods for representation learning. The discussion in Section 6.5 has been appropriately altered.\\n\\n[1] Nair, Pong, Dalal, Bahl, Lin, and Levine. Visual reinforcement learning with imagined goals. NIPS 2018\\n[2] Pong, Gu, Dalal, and Levine. Temporal difference models: Model-free deep RL for model-based control. ICLR 2018\\n[3] Andrychowicz, Wolski, Ray, Schneider, Fong, Welinder, McGrew, Tobin, Abbeel, and Zaremba. Hindsight experience replay. NIPS 2017\\n[4] Burda, Edwards, Pathak, Storkey, Darrell, and Efros. Large-scale study of curiosity-driven learning. arXiv preprint\\n[5] Nagabandi, Kahn, Fearing, and Levine. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. ICRA 2018\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your insightful comments and suggestions! We have made many changes based on the comments provided by reviewers, which are summarized below. We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to request additional changes that would alleviate their concerns.\", \"new_comparisons\": \"We have added a model-based RL algorithm planning with MPC (Nagabandi et al.), as a comparison to learning features. On the \\u201creach-while-avoid\\u201d task (Fig 8), model-based RL struggles compared to a model-free policy with ARC because of challenges such as model-bias, limited exploration and short-horizon planning. The updated plot and corresponding discussion have been added to Section 6.6. We have also added a comparison to representations from inverse dynamics models (Burda et al), described in Section 6.3.\", \"lack_of_details\": \"We apologize for the lack of clarity in the submission! We have updated the main text and added an appendix with additional details of the ARC representation and the experimental setup: how a goal-conditioned policy is trained (Sec 6.2, Appendix A.1), how the ARC representation is learned (Sec 6.2, Appendix A.2) , and how the methods are evaluated on downstream applications (Sec 6.5-7, Appendix A.3-6). We have added a discussion of how all comparisons are trained, and measures taken to ensure fairness (Sec 6.3, Appendix A.2, B)\", \"requirement_for_goal_conditioned_policy\": [\"The ARC representation is extracted from a goal-conditioned policy (GCP), requiring us to assume that we can train such a GCP. This assumption was explicit in our submission, but we have emphasized it more now by editing Section 1 and Section 3. 
For our experiments, the GCP was trained with existing RL methods using a sparse task-agnostic reward (Section 6.2, Appendix A.1) -- obtaining such a policy is not especially difficult, and existing methods are quite capable of doing so [1,2,3]. We therefore believe that this assumption is reasonable, and have added this to the paper in Section 3.\", \"We also ensure that our experiments fairly account for the data required to train the GCP.\", \"In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7.\", \"In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller, so ARC incurs no additional sample cost in comparison. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even when provided with substantially more samples.\", \"In the experiment for learning non goal-reaching tasks (Section 6.6), the ARC representation can be re-used across many different tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of 100 tasks to demonstrate this amortization, and will update the paper with results.\"]}",
"{\"title\": \"Response to Reviewer 2 (Continued)\", \"comment\": \"Find responses to particular questions and comments below:\\n\\u201cShould run and show on a longer horizon.\\u201d\\n-> We have updated Figure 8 accordingly. All methods converge to the same average reward.\\n\\n\\u201cAs the goal-conditional policy is quite similar to the original task of navigation, it is important to know for how long it was trained and taken into account.\\u201d\\n-> We have added these details in Appendix A.1. It is important to note that for the task in Section 6.6, simply using a goal reaching policy would be unable to solve the task, since it has no notion of other rewards, like regions to avoid (shown in red in Fig 8), and would pass straight through the region. \\n\\n\\u201ceq.1 what is the distribution over s?\\u201d \\n-> It is the distribution over all states over which the goal-conditioned policy is trained. This is done by choosing uniformly from states on trajectories collected with the goal-conditioned policy as described in Section 6.2 and Appendix A.2. \\n\\n\\u201c How is the distance approximated?\\u201d\\n-> In our experimental setup, we parametrize the action distributions of GCPs with Gaussian distributions - for this class of distributions, the KL divergence, and thus the actionable distance, can be explicitly computed (Appendix A.1).\\n\\n\\u201cHow many clusters and what clustering algorithm?\\u201d\\n-> We use k-means for clustering, with distance in ARC space as the metric. We perform a hyperparameter sweep over the number of clusters for each method, and thus varies across tasks and methods. We have added this clarification to Section 4.4 and Section 6.6. \\n\\n\\nThe author state they add experimental details and videos via a link to a website.\\n> OpenReview does not provide a mechanism for submitting supplementary materials. 
Providing supplementary materials via an external link is the instruction provided by the conference organizers -- we would encourage the reviewer to check with the AC if they are concerned.\\n\\n[1] Nair, Pong, Dalal, Bahl, Lin, and Levine. Visual reinforcement learning with imagined goals.NIPS 2018\\n[2] Pong, Gu, Dalal, and Levine. Temporal difference models: Model-free deep rl for model-based control. ICLR 2018\\n[3] Andrychowicz, Wolski, Ray, Schneider, Fong, Welinder, P., McGrew, B., Tobin, J., Abbeel, P., and Zaremba, W. (2017). NIPS 2017\\n[4] Burda, Edwards, Pathak, Storkey, Darrell, and Efros. Large-scale study of curiosity-driven learning. arXiv preprint\\n[5] Nagabandi, Kahn, Fearing and Levine. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. ICRA 2018\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your insightful comments and suggestions! We have made many changes based on the comments provided by reviewers, which are summarized below. We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to revise their score or request additional changes that would alleviate their concerns.\", \"new_comparisons\": \"We have added two more comparisons - with model based RL methods ([1] Nagabandi et al) and learning representations via inverse dynamics models ([2] Burda et al). These have been described in Section 6.3 and added to plots in Fig 7, 8, 10. We have also added a new comparison to learning from scratch for the reward shaping experiment (Section 6.5, Fig 7).\", \"lack_of_details\": \"We apologize for the lack of clarity in the submission! We have updated the main text and added an appendix with additional details of the ARC representation and the experimental setup: goal-conditioned policy (GCP) training (Sec 6.2, Appendix A.1), ARC representation learning (Sec 6.2, Appendix A.2) , downstream evaluation (Sec 4, 6.5-6.7, Appendix A.3-6). We have added a discussion of how all comparisons are trained, and measures taken to ensure fairness (Sec 6.3, Appendix A.2, B). We have clarified the algorithm and task descriptions in Section 4 and Section 6.\", \"fairness_of_comparisons\": [\"To ensure the comparisons are fair, every comparison representation learning method is trained using the same data, and we have updated the paper to emphasize this (Section 6.2, 6.3). All representations are trained on a dataset of trajectories collected from the goal-conditioned policy, similar to the (A) scheme proposed by AnonReviewer1. 
We have updated the paper to include full details of the training scheme for all methods (Section 6.3, Appendix A.2, B).\", \"We also ensure that our experiments fairly account for the data required to train the GCP.\", \"In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7.\", \"In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller, so ARC incurs no additional sample cost. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even with substantially more samples.\", \"In the experiment for learning non goal-reaching tasks (Section 6.6), the ARC representation can be re-used across many different tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of 100 tasks to demonstrate this amortization, and will update the paper with results.\"], \"find_responses_to_particular_questions_and_comments_below\": \"\\u201cHow is the data collected to obtain the goal-directed policies in the first place?\\u201d\\n-> We train a goal-conditioned policy with TRPO using a task-agnostic sparse reward function. We have updated the paper to reflect this (Section 6.2, Appendix A.1).\\n\\n\\u201cwhy is this particular metric used to link the feature representation to policy similarity?\\u201d\\n-> We add an explicit discussion of this in Section 3. We link feature representation to policy similarity by this metric, because it directly captures the notion that features should represent elements of the state which directly affect the actions. The KL divergence between policy distributions allows us to embed goal states which induce similar actions similarly into feature space. \\n\\n\\n[1] Nagabandi, Kahn, Fearing and Levine. 
Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. ICRA 2018\\n[2] Burda, Edwards, Pathak, Storkey, Darrell, and Efros. Large-scale study of curiosity-driven learning. arXiv preprint\"}",
"{\"title\": \"A good idea, but suffers from lack of clarity\", \"review\": \"The paper suggests a method for generating representations that are linked to goals in reinforcement learning. More precisely, it wishes to learn a representation so that two states are similar if the policies leading to them are similar.\\n\\nThe paper leaves quite a few details unclear. For example, why is this particular metric used to link the feature representation to policy similarity? How is the data collected to obtain the goal-directed policies in the first place? How are the different methods evaluated vis-a-vis data collection? The current discussion makes me think that the evaluation methodology may be biased. Many unbiased experiment designs are possible. Here are a few:\\n\\nA. Pre-training with the same data\\n\\n1. Generate data D from the environment (using an arbitrary policy).\\n2. Use D to estimate a model/goal-directed policies and consequenttly features F. \\n3. Use the same data D to estimate features F' using some other method.\\n4. Use the same online-RL algorithm on the environment and only changing features F, F'.\\n\\nB. Online training\\n\\n1. At step t, take action $a_t$, observe $s_{t+1}$, $r_{t+1}$\\n2. Update model $m$ (or simply store the data points)\\n3. Use the model to get an estimate of the features \\n\\nIt is probably time consuming to do B at each step t, but I can imagine the authors being able to do it all with stochastic value iteration. \\n\\nAll in all, I am uncertain that the evaluation is fair.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"This is a nicely written paper, with some interesting and natural ideas about learning policy representations. Simplifying, the main idea is to consider two states $s_1,s_2$ similar if the corresponding policies $\\\\pi_1,\\\\pi_2$ for reaching $s_1, s_2$ are similar.\\n\\nHowever, it is unclear how this idea can be really applied when the optimal goal-directed policies are unknown. The algorithm, as given, relies on having access to a simulator for learning those policies in the first place. This is not necessarily a fatal fault, as long as the experiments compare algorithms in a fair and unbiased manner. How were the data collected in the first place for learning the representations? Was the same data used in all algorithms?\", \"title\": \"Nice work, though perhaps not very applicable\"}",
"{\"title\": \"Paper lacks many important details.\", \"review\": \"The paper presents a method to learn representations where proximity in euclidean distance represents states that are achieved by similar policies. The idea is novel (to the best of my knowledge), interesting and the experiments seem promising. The two main flaws in the paper are the lack of details and missing important experimental comparisons.\", \"major_remarks\": [\"The author state they add experimental details and videos via a link to a website. I think doing so is very problematic, as the website can be changed after the deadline but there was no real information on the website so it wasn\\u2019t a problem this time.\", \"While the idea seems very interesting, it is only presented in very high-level. I am very skeptical someone will be able to reproduce these results based only on the given details. For example - in eq.1 what is the distribution over s? How is the distance approximated? How is the goal-conditional policy trained? How many clusters and what clustering algorithm?\", \"Main missing details is about how the goal reaching policy is trained. The authors admit that having one is \\u201ca significant assumption\\u201d and state that they will discuss why it is reasonable assumption but I didn\\u2019t find any such discussion (only a sentence in 6.4).\", \"While the algorithm compare to a variety of representation learning alternatives, it seems like the more natural comparison are model-based Rl algorithms, e.g. \\u201cNeural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning\\u201d. 
This is because the representation tries to implicitly learn the dynamics so it should be compared to models who explicitly learn the dynamics.\", \"As the goal-conditional policy is quite similar to the original task of navigation, it is important to know for how long it was trained and taken into account.\", \"I found Fig.6 very interesting and useful, very nice visual help.\", \"In fig.8 your algorithm seems to flatline while the state keeps rising. It is not clear if the end results is the same, meaning you just learn faster, or does the state reach a better final policy. Should run and show on a longer horizon.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Quite interesting idea, but unsufficiently mature piece of research\", \"review\": \"In this paper, the authors propose a new approach to representation learning in the context of reinforcement learning.\\nThe main idea is that two states should be distinguished *functionally* in terms of the actions that are needed to reach them,\\nin contrast with generative methods which try to capture all aspects of the state dynamics, even those which are not relevant for the task at hand.\\nThe method of the authors assumes that a goal-conditioned policy is already learned, and they use a Kullback-Leibler-based distance\\nbetween policies conditioned by these two states as the loss that the representation learning algorithm should minimize.\\nThe experimental study is based on 6 simulated environments and outlines various properties of the framework.\\n\\nOverall, the idea is interesting, but the paper suffers from many weaknesses both in the framework description and in the experimental study that make me consider that it is not ready for publication at a good conference like ICLR.\\n\\nThe first weakness of the approach is that it assumes that a learned goal-conditioned policy is already available, and that the representation extracted from it can only be useful for learning \\\"downstream tasks\\\" in a second step. But learning the goal-conditioned policy from the raw input representation in the first place might be the most difficult task. In that respect, wouldn't it be possible to *simultaneously* learn a goal-conditioned policy and the representation it is based on? This is partly suggested when the authors mention that the representation could be learned from only a partial goal-conditioned policy, but this idea definitely needs to be investigated further.\\n\\nA second point is about unsufficiently clear thoughts about the way to intuitively advocate for the approach. 
The authors first claim that two states are functionally different if they are reached from different actions. Thinking further about what \\\"functionally\\\" means, I would rather have said that two states are functionally different if different goals can be reached from them. But when looking at the framework, this is close to what the authors do in practice: they use a distance between two *goal*-conditioned policies, not *state*-conditioned policies. To me, the authors have established their framework thinking of the case where the state space and the goal space are identical (as they can condition the goal-conditioned policy by any state=goal). But thinking further to the case where goals and states are different (or at least goals are only a subset of states), probably they would end-up with a different intuitive presentation of their framework. Shouldn't finally D_{act} be a distance between goals rather than between states?\\n\\nSection 4 lists the properties that can be expected from the framework. To me, the last paragraph of Section 4 should be a subsection 4.4 with a title such as \\\"state abstraction (or clustering?) from actionable representation\\\". And the corresponding properties should come with their own questions and subsection in the experimental study (more about this below).\\n\\nAbout the related work, a few remarks:\\n- The authors do not refer to papers about using auxiliary tasks. Though the purpose of these works is often to supply for additional reward signals in the sparse reward context, then are often concerned with learning efficient representations such as predictive ones.\\n- The authors refer to Pathak et al. (2017), but not to the more recent Burda et al. (2018) (Large-scale study of curiosity-driven learning) which insists on the idea of inverse dynamical features which is exactly the approach the authors may want to contrast theirs with. To me, they must read it.\\n- The authors should also read Laversanne-Finot et al. 
(2018, CoRL) who learn goal space representations and show an ability to extract independently controllable features from that.\\n\\nA positive side of the experimental study is that the 6 simulated environments are well-chosen, as they illustrate various aspects of what it means to learn an adequate representation. Also, the results described in Fig. 5 are interesting. A side note is that the authors address in this Figure a problem pointed in Penedones et al (2018) about \\\"The Leakage Propagation problem\\\" and that their solution seems more convincing than in the original paper, maybe they should have a look.\", \"but_there_are_also_several_weaknesses\": [\"for all experiments, the way to obtain a goal-conditioned policy in the first place is not described. This definitely hampers reproducibility of the work. A study of the effect of various optimization effort on these goal-conditioned policies might also be of interest.\", \"most importantly, in Section 6.4, 6.5 and 6.6, much too few details are given. Particularly in 6.6, the task is hardly described with a few words. The message a reader can get from this section is not much more than \\\"we are doing something that works, believe us!\\\". So the authors should choose between two options:\", \"either giving less experimental results, but describing them accurately enough so that other people can try to reproduce them, and analyzing them so that people can extract something more interesting than \\\"with their tuning (which is not described), the framework of the authors outperforms other systems whose tuning is not described either\\\".\", \"or add a huge appendix with all the missing details.\", \"I'm clearly in favor of the first option.\"], \"some_more_detailed_points_or_questions_about_the_experimental_section\": [\"not so important, Section 6.2 could be grouped with Section 6.1, or the various competing methods could be described directly in the sections where they are used.\", \"in Fig.
5, in the four room environment, ARC gets 4 separated clusters. How can the system know that transitions between these clusters are possible?\", \"in Section 6.3, about the pushing experiment, I would like to argue against the fact that the block position is the important factor and the end-effector position is secondary. Indeed, the end-effector must be correctly positioned so that the block can move. Does ARC capture this important constraint?\", \"Globally, although it is interesting, Fig.6 only conveys a quite indirect message about the quality of the learned representation.\", \"Still in Fig. 6, what is described as \\\"blue\\\" appears as violet in the figures and pink in the caption, this does not help when reading for the first time.\", \"In Section 6.4, Fig.7 a, ARC happens to do better than the oracle. The authors should describe the oracle in more details and discuss why it does not provide a \\\"perfect\\\" representation.\", \"Still in Section 6.4, the authors insist that ARC outperforms VIME, but from Fig.7, VIME is not among the best performing methods. Why insist on this one? And a deeper discussion of the performance of each method would be much more valuable than just showing these curves.\", \"Section 6.5 is so short that I do not find it useful at all.\", \"Section 6.6 should be split into the HRL question and the clustering question, as mentioned above. But this only makes sense if the experiments are properly described, as is it is not useful.\", \"Finally, the discussion is rather empty, and would be much more interesting if the experiments had been analyzed in more details.\"], \"typos\": \"\", \"p1\": \"that can knows => know\", \"p7\": \"euclidean => Euclidean\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJgYxn09Fm | Learning Implicitly Recurrent CNNs Through Parameter Sharing | [
"Pedro Savarese",
"Michael Maire"
] | We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy.
Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. Though considering only the aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures.
Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set. | [
"deep learning",
"architecture search",
"computer vision"
] | https://openreview.net/pdf?id=rJgYxn09Fm | https://openreview.net/forum?id=rJgYxn09Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1eEebvBlE",
"H1lWxZDEAm",
"SklIGLjQAm",
"HJxuxUsQA7",
"Sye4qrs7Am",
"BygqPHo70Q",
"HyxiNEjQCQ",
"S1eeeNoXCX",
"BygndrCp3m",
"Hke2lCb92Q",
"S1gMfAgqh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545068779550,
1542906089074,
1542858253584,
1542858223618,
1542858124428,
1542858082051,
1542857779201,
1542857704062,
1541428595807,
1541180916310,
1541176841697
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1098/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1098/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1098/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1098/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1098/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1098/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1098/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1098/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1098/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1098/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1098/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposed an interesting approach to weight sharing among CNN layers via shared weight templates to save parameters. It's well written with convincing results. Reviewers have a consensus on accept.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"a promising idea\"}",
"{\"title\": \"Thank you for the detailed and satisfactory reply.\", \"comment\": \"Thank you for addressing all my comments in detail, your replies satisfactorily cover all the issues I raised. I look forward to reading the final version of the manuscript.\"}",
"{\"title\": \"[1/4] Thank you. We appreciate the detailed feedback and questions.\", \"comment\": \"1 - It would be interesting to explore how often the layer parameters converge to similar weights and how similar. To this end I suggest to plot a 2d heatmap representing the similarity matrices between every pair of layers.\\n\\nThanks for the suggestion; we agree that this is a worthwhile experiment. The LSM matrices (Figure 4 & 6) already capture the similarity between every pair of layer for each stage: S_i,j is the absolute cosine similarity between the alpha parameters of layers i and j, so S_i,j ~ 1 (white) indicate that layers i and j have learned similar alphas (up to scaling). However, exploring how often this alignment occurs is definitely relevant for the work. We will aim to include such experimental results in the final version of the paper.\\n\\n\\n2 - Figure 1 is not of immediate interpretability. Especially for the middle figure, what does the dotted box represent? What is the difference between weights and templates? Also it\\u2019s unclear which of the three options corresponds to the proposed method. I would have thought the middle one, but the text seems to indicate it is the rightmost one instead.\\n\\nThe middle figure attempts to represent the case where filters W are generated from coefficients alpha and templates T through some function W = f(alpha, T). The dotted box represents that, unlike in the left figure (illustrating a network without parameter sharing), the weights are no longer parameters, but are generated from the coefficients alpha and the templates. The distinction between weights and templates is that each has a different purpose: the weights are used as filters for the convolutional layers, while templates are used to generate weights (and are shared among many layers).\", \"our_method_is_illustrated_by_both_the_middle_and_the_right_figures\": \"for the mapping W = f(alpha, T), whenever f(.) 
is a linear function, the middle and right figures become equivalent due to associativity (explained in more detail in section 3.1). They offer two perspectives for our technique. We added a more detailed explanation to the caption of Figure 1. We updated the caption to make this more explicit.\\n\\n\\n3 - How are the alphas initialized? How fast are their transitions? Do they change smoothly over training? Do they evolve rapidly and plateau to a fixed value or keep changing during training? It would be really interesting to plot their value and discuss their evolution.\\n\\nIn our ablation experiments, we observed that typical initializations (normal, uniform) are not good choices for the alphas. On the other hand, sparse and orthogonal initializations work better; we used orthogonal in the reported experiments. We added a discussion on initialization to the appendix.\\n\\nThe transitions are fast and happen mostly in the first 20 epochs of training, forming the patterns present in the Layer Similarity Matrices. After that point, typically similar layers become more similar and dissimilar layers become less similar, in a smooth fashion. We will add an example of evolution of a LSM during training in the appendix for the final version.\"}",
"{\"title\": \"[2/4] Thank you. We appreciate the detailed feedback and questions.\", \"comment\": \"4 - While the number of learned parameters is indeed reduced when the templates are shared among layers - which could lead to better sample efficiency - I am not sure whether the memory footprint on GPU would change (i.e., I believe that current frameworks would allocate the same kernel n-times if the same template was shared by n layers, but I am not certain). Although the potential reduction of the number of trainable parameters is an important result by itself, I wonder if what you propose would also allow to run bigger models on the same device or not, without heavy modifications of the inner machineries of pyTorch or Tensorflow. Can you comment on this? Also note that the soft sharing regularization scheme that you propose can be of great interest for FPGA hardware implementations, that benefit a lot from module reuse. You could mention that in the paper.\\n\\nThanks for pointing out the potential for FPGA implementations.\", \"there_are_at_least_two_settings_where_our_method_can_lead_to_infrastructural_advantages\": \"(i) During training PyTorch and Tensorflow both store the generated weights for each layer (to compute gradients in the backward pass), but at test time this is not the case: once the weights have been generated and used for the convolution, the memory is freed, and the only parameters that have to be stored are the alphas and the templates (which are only allocated once, regardless of how many layers use them). Returning to the question of training, we believe that dedicated implementations could lead to significant memory savings. For example, instead of storing the weights generated for each layer, training could recompute them in the backward pass (such recomputation is a cheap operation).\\n\\n(ii) In the distributed training setting with multiple machines, gradients have to be communicated at each optimization round. 
In the case where our method offers roughly 50% parameter reduction (Tables 1 and 2), communication cost would also be decreased by 50%.\\n\\n\\n5 - Sec 4.1, the number of layers in one group is defined as (L-4)/3. It\\u2019s unclear to me where the 4 comes from. Also on page 6, k = (L-2)/3 - 2 is said to set one template per layer. I thought the two formulas would be the same in that case. What am I missing? Is it possible that one of the two formulas contain a typo (I believe that at the very least it should be either (L-2) or (L-4) in both cases)?\\n\\nThe 4 is specific for Wide ResNets, where we have a first convolution with 3 input channels (RBG of the input image) and 3 convolutions with 1x1 kernels in skip-connections (one for every first skip-connection of each stage). These layers do not participate in sharing since no other layers in the network have the same parameter shapes.\\nThe k = (L-2)/3 - 2 from page 6 is indeed a typo which we fixed -- thanks for pointing it out.\\n\\n\\n6 - Sec 4.1, I find the notation SWRN-L-w-k and SWRN-L-w confusing. My suggestion is to set k to be the total number of templates (as opposed to the number of templates *per group of layers*), which makes it much easier to relate to, and most importantly allows for an immediate comparison with the capacity of the vanilla model. As a side effect, it also makes it very easy to spot the configuration with one template per layer (SWRN-L-w-L) thus eliminating the need for an ad-hoc notation to distinguish it.\\n\\nWe agree with your suggestion and will revise the notation in a forthcoming update of the paper. However, note that the configuration with one template per layer will not be denoted by SWRN-L-w-L since some layers do not participate in parameter sharing (see above).\\n\\n\\n7 - The authors inspect how similar the template combination weights alphas are among layers. It would also be interesting to look into what is learned in the templates. 
CNN layers are known to learn peculiar and somewhat interpretable template matching filters. It would be really interesting to compare the filters learned by a vanilla network and its template sharing alternative. Also, I would welcome an analysis of which templates gets chosen the most at each layer in the hierarchy. It would be compelling if some kind of pattern of reuse emerged from learning. \\n\\nWe agree that inspecting the learned behavior of our shared networks and contrasting with standard CNNs in a worthwhile effort. Given that visualizing and understanding CNNs is itself an area of active research, performing a thorough analysis in this respect is probably beyond the scope of the current paper; it would require substantial additional experiments.\\n\\nAs for which templates get chosen the most, we have plots of the alphas themselves, which indicate which templates are most important to each layer. We will add them to the final version.\"}",
"{\"title\": \"[3/4] Thank you. We appreciate the detailed feedback and questions.\", \"comment\": \"8 - Sec 4.4, it is unclear to me what can be the contribution of the 1x1 initial convolution, since it will see no context and all the information at the pixel level can be represented by a binary bit. Also, are the 3x3 convolutions \\u201csame\\u201d convolutions? If not, how are the last feature maps upscaled to be able to predict at the original resolution?\\n\\nWe wanted all 3x3 convolutions to participate in parameter sharing, and to do so they need to have the same number of input and output channels. Without the initial 1x1 convolution, the next convolution (the first 3x3 one) would have 2 input channels instead of 32, like all other 3x3 convolutions. The 1x1 kernel size was chosen since the only purpose of this convolution is to change the number of channels.\\n\\nAll 3x3 convolutions are \\u201csame\\u201d, with outputs having the same spatial resolution as the input.\\n\\n\\n9 - At the end of sec 4.4 the authors claim that the SCNN is \\u201calso advantaged over a more RNN-like model\\u201d. I fail to understand how to process this sentence, but I have a feeling that it\\u2019s incorrect to make any claims to the performance of \\u201cRNN-like models\\u201d as such a model was not used as a baseline in the experiments in any way. Similarly, in the conclusions I find it a bit stretched to claim that you can \\u201cgain a more flexible form of behavior typically attributed to RNNs\\u201d. While it\\u2019s true that the proposed network can in theory learn to reuse the same combination of templates, which can be mapped to a network with recursion, the results in this direction don\\u2019t seem strong enough to draw any conclusion and a more in-depth comparison against RNN performance would be in order before making any claim in this direction.\\n\\nWe will rephrase our claims to make them more precise. 
By \\u2018RNN-like model\\u2019 we meant the SCNN with lambda_R = 0.01 (recurrence regularizer), where all 20 layers become maximally similar in the first 10 epochs (all elements in the LSM get very close to 1), meaning that the filters used by all 20 convolutions are (roughly) the same (up to scaling). While in practice this can be seen as a RNN, we agree that our wording should be more specific. We added this observation (on the LSM of the SCNN with the recurrence regularizer) to the manuscript.\"}",
"{\"title\": \"[4/4] Thank you. We appreciate the detailed feedback and questions.\", \"comment\": [\"MINOR\", \"Sec3: I wouldn\\u2019t say the parameters are shared among layers in LSTMs, but rather among time unrolls.\", \"We changed to \\u201cshared among all time steps\\u201d in the manuscript.\", \"One drawback of the proposed method is that the layers are constrained to have the same shape. This is not a major disadvantage, but is still a constraint that would be good to make more explicit in the description of the model.\", \"We added an explicit mention of this constraint in Section 3 (paragraph following Equation 1).\", \"Sec3, end of page 3: does the network reach the same accuracy as the vanilla model when k=L? Also, does the network use all the templates? How is the distribution of the alpha weights across layers in this case?\", \"Yes, in all our experiments having one template per layer resulted in better accuracy than the vanilla model (for example, refer to Table 1, models WRN 28-10 and SWRN 28-10). In all our experiments, each template is used by at least one layer, however some layers do not use all templates (having one component of the coefficient vector alpha very close to zero). As mentioned previously, we will add plots of the coefficients to the final version.\", \"Sec3.1, the V notation makes the narrative unnecessarily heavy. I suggest to drop it and refer directly to the templates T. Also the second part of the section, with examples of templates, doesn\\u2019t add much in my opinion and would be better depicted with a figure.\", \"Sec3.1, the e^(i) notation can be confused with an exp. I suggest to replace it with the much more common 1_{i=j}.\", \"We removed the V notation and refer to templates directly. We have also removed the first example (\\\\alpha^(i) = e^(i)) for simplicity (and as the second one is more relevant for our work).\", \"Figure 2 depicts the relation between the LSM matrix and the topology of the network. 
This should be declared more clearly in the caption, in place of the ambiguous \\u201ccapturing implicit recurrencies\\u201d. Also, the caption should explain what black/white stand for as well, and possibly quickly describe what the LSM matrix is. Also, it would be more clear that the network in the middle is equivalent to that on the right if the two were somehow connected in the figure. To this end they could, e.g., share a single LSM matrix among them. Finally, if possible try and put the LSM matrices on top of the related network, so that it\\u2019s clear which network they refer to. Sec 3.2 should also refer to Fig2 I believe.\", \"We updated Figure 2 and its caption (due to space constraints we could not place the LSM matrices on top of the networks, so we added dashed lines to make it clear which network each LSM matrix corresponds to).\", \"Table 1: I suggest to leave the comment on the results out of the caption, since it\\u2019s already in the main text.\", \"We shortened the discussion on the results both in the caption of Table 1 and in Section 4.1.\", \"Table 2: rather than using blue, I suggest to underline the overall best results, so that it\\u2019s visible even if the paper is printed in B&W.\", \"We changed from blue to underline to indicate best results.\", \"Fig 3, I would specify that it\\u2019s better viewed in color\", \"Added to the caption.\", \"Discussion: I feel the discussion of Table 1 is a bit difficult to follow. 
It could be made easier by reporting the difference in test error against the corresponding vanilla model (e.g., \\u201cimproves the error rate on CIFAR10 by 0.26%\\u201d, rather than reporting the performance of both models)\"], \"we_updated_the_manuscript\": \"instead of reporting the errors of both models, we report the relative error decrease (we believe the absolute error decrease might not be meaningful since the scale of the errors is no longer reported in the discussion).\\n\\n- Fig 4, are all the stages the same and is the network in the left one such stages? If so, update the caption to make it clear please.\\n\\nThe diagram on the left represents the architecture of each stage of the network, and all 3 stages have the same topology. We updated the caption to mention this explicitly.\\n\\n- Fig 4, which lambda has been used? Is it the same for all stages?\\n\\nThe recurrence regularizer has not been applied to any experiment except for the last one (Section 4.4): the patterns observed in the LSM, which enabled folding, have emerged naturally during training.\\n\\n- Fig 5, specify that the one on the right is the target grid. Also, I believe that merging the two figures would make it easier to understand (e.g., some of the structure in the target comes from how the obstacles are placed, which requires to move back and forth from input to target several times to understand)\\n\\nWe have merged the two figures together.\\n\\n- Sec 4.4, space permitting, I would like to see at least one sample of what kind of shortest path prediction the network can come up with.\\n\\nWe will aim to add at least one example in the final version of the manuscript.\"}",
"{\"title\": \"We thank you for the feedback, and address specific points below.\", \"comment\": \"1 - The way of parameter sharing is similar to the filter prediction method proposed in Rebuffi et al.\\u2019s work, where they model a convolutional layer\\u2019s parameters as a linear combination of a bank of filters and use that to address differences among multiple domains.\\n\\nThank you for pointing out the work of Rebuffi et al. There are similarities in some technical aspects, as both our work and theirs involve learning a bank of filters and mixing coefficients. However, the overall goal of our work is entirely different from that of Rebuffi et al., as are the additional technical tools (e.g. layer similarity matrix, network folding) that we develop.\\n\\nRebuffi et al. focus on domain-adaptability and transfer-learning, while we focus on parameter reduction and architecture discovery. Because of this, we believe the findings of both papers (ours and Rebuffi et al.) are extremely complementary, as together they show the potential of \\u2018template learning\\u2019 both for the multi-domain setting (as it offers better domain-adaptation) and for the single-domain setting (as it offers better performance, parameter reduction, and potentially simpler models). Note that we introduce novel tools for manipulating single-domain networks, yielding the ability to fold them into recurrent forms, that have no parallel in Rebuffi et al. We will add a discussion of the work of Rebuffi et al. in the final version of our manuscript.\\n\\n2 - However, they only experiment with one or two templates, and the advantage in accuracy and model size over other methods is not very clear.\\n\\nActually, our experiments include models with the number of templates ranging from 1 to 20 (per sharing group). More specifically, the CIFAR experiments (refer to Tables 1, 2 and Figure 3) consist of models with between 1 and 6 templates: the SWRN-L-w-k models where k is omitted (e.g. 
SWRN 28-10) have one template per sharing layer, meaning k=6 for 28-layered models. As we believe that omitting k to represent one template per layer can lead to confusion, we will revise this notation in a forthcoming update of the paper.\\n\\nFor these same experiments, we focused on the regime with few templates since our goal is to decrease parameter redundancy: we believe one of our most important findings is that networks with k=1 or k=2 manage to outperform the same models with larger k, as the latter have significantly more capacity.\\n\\nIn our last experiments (in Section 4.4), the two SCNN models have a total of 20 templates shared among 20 convolutional layers. In this setting, the SCNN significantly outperforms the CNN, and adapts faster to curriculum changes (Figure 5b, compare blue and orange curves).\\n\\nAs for the advantage in accuracy and model size, Table 1 shows that we can achieve both a performance increase and a parameter reduction: the SWRN 28-18-2 model not only outperforms SWRN 28-18, but also has fewer than half of its parameters. It also outperforms the ResNeXt model, which, even though it has bottleneck layers, still has more parameters than SWRN 28-18-2. We would also like to point out that all results are averages of 5 runs (except the curves in Figure 5b).\"}",
"{\"title\": \"Thank you for the feedback and specific questions, which we address in detail below.\", \"comment\": \"With respect to novelty, we do not believe there is any existing work that utilizes a parameter sharing scheme toward the objective we accomplish here: training a deep network and then folding it into a recurrent form. Please also see our detailed reply to AnonReviewer2.\\n\\n1- Regarding the coefficient alpha, I'm not sure how cosine similarity is computed. I have the impression that each layer has its own alpha, which is a scalar. How is cosine similarity computed on scalars?\\n\\nEach layer i has its own alpha parameter, denoted by alpha^(i) in the manuscript (refer to equation 1 on page 3), but each alpha is a k-dimensional vector, where k is the number of templates to which that layer has access. For the SWRN-L-w-k models we use in the experiments, the same k denotes the number of templates each layer can use, so the dimensionality of each alpha ranges from 1 (in this case it\\u2019s just a scalar) to 6 in our experiments. \\n\\nWe made alpha bold in the current version of the manuscript to clarify that it is a vector (except when k=1).\\n\\n2 - In the experiments, there's no mentioning of the regularization terms for alpha, which makes me think it is perhaps not important? What is the generic setup?\\n\\nWe tried applying L2 regularization to the alpha parameters in our initial experiments, but observed a performance drop, so all reported experiments have no regularization on the alphas.\\n\\nAs for the recurrence regularizer described in the end of Section 3.2, where we regularize the Layer Similarity Matrix, it was only used for the experiments in Section 4.4 -- more specifically, the \\u201cSCNN, lambda_R = 0.01\\u201d model depicted in Figure 5. It was not used for any other experiments, meaning that the observed patterns (e.g. 
the Layer Similarity Matrices in Figure 4) emerge naturally during optimization, where neither the alphas nor the LSMs had any regularization.\"}",
"{\"title\": \"Interesting approach to weight sharing among CNN layers via shared weight templates, well written, convincing results.\", \"review\": [\"The manuscript introduces a novel and interesting approach to weight sharing among CNN layers, by learning linear combinations of shared weight templates. This allows parameter reduction and better sample efficiency. Furthermore, the authors propose a very simple way to inspect which layers choose similar combinations of templates, as well as to push the network toward using similar combinations at each layer. This regularization term has a clear potential for computation reuse on dedicated hardware. The paper is well written, the method is interesting, the results are convincing and thoroughly conducted. I recommend acceptance.\", \"1) It would be interesting to explore how often the layer parameters converge to similar weights and how similar. To this end I suggest to plot a 2d heatmap representing the similarity matrices between every pair of layers.\", \"2) Figure 1 is not of immediate interpretability. Especially for the middle figure, what does the dotted box represent? What is the difference between weights and templates? Also it\\u2019s unclear which of the three options corresponds to the proposed method. I would have thought the middle one, but the text seems to indicate it is the rightmost one instead.\", \"3) How are the alphas initialized? How fast are their transitions? Do they change smoothly over training? Do they evolve rapidly and plateau to a fixed value or keep changing during training? 
It would be really interesting to plot their value and discuss their evolution.\", \"4) While the number of learned parameters is indeed reduced when the templates are shared among layers - which could lead to better sample efficiency - I am not sure whether the memory footprint on GPU would change (i.e., I believe that current frameworks would allocate the same kernel n-times if the same template was shared by n layers, but I am not certain). Although the potential reduction of the number of trainable parameters is an important result by itself, I wonder if what you propose would also allow to run bigger models on the same device or not, without heavy modifications of the inner machineries of pyTorch or Tensorflow. Can you comment on this? Also note that the soft sharing regularization scheme that you propose can be of great interest for FPGA hardware implementations, that benefit a lot from module reuse. You could mention that in the paper.\", \"5) Sec 4.1, the number of layers in one group is defined as (L-4)/3. It\\u2019s unclear to me where the 4 comes from. Also on page 6, k = (L-2)/3 - 2 is said to set one template per layer. I thought the two formulas would be the same in that case. What am I missing? Is it possible that one of the two formulas contain a typo (I believe that at the very least it should be either (L-2) or (L-4) in both cases)?\", \"6) Sec 4.1, I find the notation SWRN-L-w-k and SWRN-L-w confusing. My suggestion is to set k to be the total number of templates (as opposed to the number of templates *per group of layers*), which makes it much easier to relate to, and most importantly allows for an immediate comparison with the capacity of the vanilla model. As a side effect, it also makes it very easy to spot the configuration with one template per layer (SWRN-L-w-L) thus eliminating the need for an ad-hoc notation to distinguish it.\", \"7) The authors inspect how similar the template combination weights alphas are among layers. 
It would also be interesting to look into what is learned in the templates. CNN layers are known to learn peculiar and somewhat interpretable template matching filters. It would be really interesting to compare the filters learned by a vanilla network and its template sharing alternative. Also, I would welcome an analysis of which templates gets chosen the most at each layer in the hierarchy. It would be compelling if some kind of pattern of reuse emerged from learning.\", \"8) Sec 4.4, it is unclear to me what can be the contribution of the 1x1 initial convolution, since it will see no context and all the information at the pixel level can be represented by a binary bit. Also, are the 3x3 convolutions \\u201csame\\u201d convolutions? If not, how are the last feature maps upscaled to be able to predict at the original resolution?\", \"9) At the end of sec 4.4 the authors claim that the SCNN is \\u201calso advantaged over a more RNN-like model\\u201d. I fail to understand how to process this sentence, but I have a feeling that it\\u2019s incorrect to make any claims to the performance of \\u201cRNN-like models\\u201d as such a model was not used as a baseline in the experiments in any way. Similarly, in the conclusions I find it a bit stretched to claim that you can \\u201cgain a more flexible form of behavior typically attributed to RNNs\\u201d. While it\\u2019s true that the proposed network can in theory learn to reuse the same combination of templates, which can be mapped to a network with recursion, the results in this direction don\\u2019t seem strong enough to draw any conclusion and a more in-depth comparison against RNN performance would be in order before making any claim in this direction.\", \"MINOR\", \"Sec3: I wouldn\\u2019t say the parameters are shared among layers in LSTMs, but rather among time unrolls.\", \"One drawback of the proposed method is that the layers are constrained to have the same shape. 
This is not a major disadvantage, but is still a constraint that would be good to make more explicit in the description of the model.\", \"Sec3, end of page 3: does the network reach the same accuracy as the vanilla model when k=L? Also, does the network use all the templates? How is the distribution of the alpha weights across layers in this case?\", \"Sec3.1, the V notation makes the narrative unnecessarily heavy. I suggest to drop it and refer directly to the templates T. Also the second part of the section, with examples of templates, doesn\\u2019t add much in my opinion and would be better depicted with a figure.\", \"Sec3.1, the e^(i) notation can be confused with an exp. I suggest to replace it with the much more common 1_{i=j}.\", \"Figure 2 depicts the relation between the LSM matrix and the topology of the network. This should be declared more clearly in the caption, in place of the ambiguous \\u201ccapturing implicit recurrencies\\u201d. Also, the caption should explain what black/white stand for as well, and possibly quickly describe what the LSM matrix is. Also, it would be more clear that the network in the middle is equivalent to that on the right if the two were somehow connected in the figure. To this end they could, e.g., share a single LSM matrix among them. Finally, if possible try and put the LSM matrices on top of the related network, so that it\\u2019s clear which network they refer to. Sec 3.2 should also refer to Fig2 I believe.\", \"Table 1: I suggest to leave the comment on the results out of the caption, since it\\u2019s already in the main text.\", \"Table 2: rather than using blue, I suggest to underline the overall best results, so that it\\u2019s visible even if the paper is printed in B&W.\", \"Fig 3, I would specify that it\\u2019s better viewed in color\", \"Discussion: I feel the discussion of Table 1 is a bit difficult to follow. 
It could be made easier by reporting the difference in test error against the corresponding vanilla model (e.g., \\u201cimproves the error rate on CIFAR10 by 0.26%\\u201d, rather than reporting the performance of both models)\", \"Fig 4, are all the stages the same and is the network in the left one such stages? If so, update the caption to make it clear please.\", \"Fig 4, which lambda has been used? Is it the same for all stages?\", \"Fig 5, specify that the one on the right is the target grid. Also, I believe that merging the two figures would make it easier to understand (e.g., some of the structure in the target comes from how the obstacles are placed, which requires to move back and forth from input to target several times to understand)\", \"Sec 4.4, space permitting, I would like to see at least one sample of what kind of shortest path prediction the network can come up with.\"], \"a_few_typos\": [\"End of 3.2: the closer elements -> the closer the elements\", \"Parameter efficiency: the period before re-parametrizing should probably be a comma?\", \"Fig 4, illustration of stages -> illustration of the stages\", \"End of pag7, an syntetic -> a syntetic\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"a promising proposal that exploits the over-parameterized nature of neural nets to reduce the model size\", \"review\": \"This work is motivated by the widely recognized issue of over-parameterization in modern neural nets, and proposes a clever template sharing design to reduce the model size. The design is sound, and the experiments are valid and thorough. The writing is clear and fluent.\\n\\nThe reviewer is not entirely sure of the originality of this work. According to the sparse 'related work' section, the contribution is novel, but I will leave it to the consensus of others who are more versed in this regard.\\n\\nThe part that I find most interesting is the fact that template sharing helps with the optimization without even reducing the number of parameters, as illustrated in CIFAR from Table 1. The trade-off of accuracy and parameter-efficiency is overall well-studied in CIFAR and ImageNet, although the results on ImageNet are not as impressive. \\n\\nRegarding the coefficient alpha, I'm not sure how cosine similarity is computed. I have the impression that each layer has its own alpha, which is a scalar. How is cosine similarity computed on scalars?\\n\\nIn the experiments, there's no mention of the regularization terms for alpha, which makes me think it is perhaps not important? What is the generic setup?\\n\\nIn summary, I find this work interesting, and with sufficient experiments to back up its claims. On the other hand, I'm not entirely sure of its novelty/originality, leaving this part open to others.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea of using a parameter-sharing scheme to explore network structure; experiments can be stronger\", \"review\": \"The authors propose a parameter-sharing scheme that allows parameters to be reused across layers. It further makes a connection between traditional CNNs and RNNs by adding additional regularization and using a hard sharing scheme.\\n\\nThe way of parameter sharing is similar to the filter prediction method proposed in Rebuffi et al.\\u2019s work, where they model a convolutional layer\\u2019s parameters as a linear combination of a bank of filters and use that to address differences among multiple domains.\\n\\nSylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi, Learning multiple visual domains with residual adapters, NIPS 2017.\\n\\nThe discussion on the connection between coefficients for different layers and a network\\u2019s structure, and the visualization of the layer similarity matrix, is interesting. Additional regularization can further encourage a recurrent neural network to be learned. \\n\\nHowever, they only experiment with one or two templates, and the advantage in accuracy and model size over other methods is not very clear.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkxtl3C5YX | Understanding & Generalizing AlphaGo Zero | [
"Ravichandra Addanki",
"Mohammad Alizadeh",
"Shaileshh Bojja Venkatakrishnan",
"Devavrat Shah",
"Qiaomin Xie",
"Zhi Xu"
] | AlphaGo Zero (AGZ) introduced a new {\em tabula rasa} reinforcement learning algorithm that has achieved superhuman performance in the games of Go, Chess, and Shogi with no prior knowledge other than the rules of the game. This success naturally begs the question whether it is possible to develop similar high-performance reinforcement learning algorithms for generic sequential decision-making problems (beyond two-player games), using only the constraints of the environment as the ``rules.'' To address this challenge, we start by taking steps towards developing a formal understanding of AGZ. AGZ includes two key innovations: (1) it learns a policy (represented as a neural network) using {\em supervised learning} with cross-entropy loss from samples generated via Monte-Carlo Tree Search (MCTS); (2) it uses {\em self-play} to learn without training data.
We argue that the self-play in AGZ corresponds to learning a Nash equilibrium for the two-player game; and the supervised learning with MCTS is attempting to learn the policy corresponding to the Nash equilibrium, by establishing a novel bound on the difference between the expected return achieved by two policies in terms of the expected KL divergence (cross-entropy) of their induced distributions. To extend AGZ to generic sequential decision-making problems, we introduce a {\em robust MDP} framework, in which the agent and nature effectively play a zero-sum game: the agent aims to take actions to maximize reward while nature seeks state transitions, subject to the constraints of that environment, that minimize the agent's reward. For a challenging network scheduling domain, we find that AGZ within the robust MDP framework provides near-optimal performance, matching one of the best known scheduling policies that has taken the networking community three decades of intensive research to develop.
| [
"reinforcement learning",
"AlphaGo Zero"
] | https://openreview.net/pdf?id=rkxtl3C5YX | https://openreview.net/forum?id=rkxtl3C5YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syx3uOdVxV",
"Bklrkne90m",
"HJxewogcRX",
"S1xqVig50Q",
"HyljDMze6m",
"Hylj7vZsh7",
"SJlYT1Uw27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545009268205,
1543273436860,
1543273304515,
1543273265682,
1541575266939,
1541244706908,
1541001153050
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1097/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1097/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1097/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1097/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1097/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1097/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1097/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This work examines the AlphaGo Zero algorithm, a self-play reinforcement learning algorithm that has been shown to learn policies with superhuman performance on 2-player perfect-information games. The main results of the paper are that the policy learned by AGZ corresponds to a Nash equilibrium, that the cross-entropy minimization in the supervised-learning-inspired part of the algorithm converges to this Nash equilibrium, and that the difference in expected returns of two policies is bounded in terms of the KL divergence of their induced distributions; the paper also introduces a \\\"robust MDP\\\" view of a 2-player zero-sum game played between the agent and nature.\\n\\nR3 found the paper well-structured and the results presented therein interesting. R2 complained of overly heavy notation and questioned the applicability of the results, as well as the utility of the robust MDP perspective (though they did raise their score following revisions).\\n\\nThe most detailed critique came from R1, who suggested that the bound on the convergence of returns of two policies as the KL divergence between their induced distributions decreases is unsurprising, that using it to argue for AGZ's convergence to the optimal policy ignores the effects introduced by the suboptimality of the MCTS policy (the really interesting part being understanding how AGZ deals with, and whether or not it closes, this gap), and that the \\\"robust MDP\\\" view is less novel than the authors claim, based on the known relationships between 2-player zero-sum games and minimax robust control. 
\\n\\nI find R1's complaints, in particular with respect to \\\"robust MDPs\\\" (a criticism which went completely unaddressed by the authors in their rebuttal), convincing enough that I would narrowly recommend rejection at this time, while also agreeing with R3 that this is an interesting subject and that the results within could serve as the bedrock for a stronger future paper.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Contains interesting results, but certain key aspects were criticized and these criticisms were not addressed in the rebuttal.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you very much for your time reviewing our paper. It appears that certain main claims were misinterpreted, and we would like to take this opportunity to address the concerns:\\n\\n1. Heavy on notation: the main concern in the review seems to be the notation. However, we would like to emphasize that this is a theoretical paper, aiming for a quantitative understanding of AlphaGo Zero, which is currently lacking. As you mentioned, we propose a formal framework to study and understand AGZ. To this end, a certain level of mathematical rigor is absolutely necessary. We chose to use precise notation to make sure that our formal framework is mathematically correct and meaningful. Because the study was on two-player games, some statements do become more complex. However, this is unavoidable. Finally, we would like to point out that there is a large body of literature on games, such as [1], that does rely on ``heavy\\u2019\\u2019 notation to make precise mathematical claims. \\n\\nFor the reasons mentioned above, we are unable to understand precisely what the reviewer\\u2019s criticism is about. Specifically, we would appreciate it if the reviewer could provide feedback on what parts of the paper weren\\u2019t clear, as we believe it will help us improve the quality of the paper. \\n\\n2. Casting the MDP as a two-player game: Here we were not deliberately trying to make the paper heavy on notation. The purpose is to show that this different viewpoint can lead to different ways of learning robust policies for the agent. Without formally writing down the connection, we cannot show rigorously that our theorems extend to this robust MDP case. As mentioned in 1., this level of mathematical rigor is required for theoretical studies. \\n\\n3. Gamma < 1: Most of the theoretical studies in RL study the infinite horizon problem with discounting (gamma < 1). This paper follows the same trend. 
As in many theoretical papers, this setting makes the formulation clean and the analysis possible. Without discounting, the analysis for many algorithms is almost impossible or becomes notationally very involved. Actually, most of the successful algorithms were analyzed for this setting first, such as TRPO [2]. They may then be applied in practice to non-discounted settings. For example, although many games are finite-horizon in nature and rewards are not discounted, those algorithms can still be applied in practice successfully.\\n\\n4. Experiments: We would like to emphasize that the main purpose of the paper is to provide a formal theoretical framework to analyze AGZ and to extend the algorithm to robust MDP problems. The value of those theoretical contributions should not be overlooked. We use the experiment to serve as a proof of concept that this formal framework can be applied to solve robust MDP problems. We are not attempting to claim that AGZ is the only method that can solve these tasks. Instead, the main message is that applying the AGZ robust MDP formulation is feasible in practice, and in such a challenging problem, it achieves performance similar to that of state-of-the-art algorithms (after decades of research).\\n\\n----------\\n\\nThank you for pointing out the typos. Here are additional comments related to other small concerns:\\np2/appendix:\\nIn fact, the original paper does not contain concise pseudo code. There are some concise discussions though. Even if it did, we believe that giving pseudo code in our paper helps to set up the proper background and increase overall readability, without requiring readers to carefully read the original paper.\\n\\np2(2):\\nNote that, by definition, \\\\rho is the state density under a particular policy. Therefore, \\\\rho_{\\\\pi^*} is the state density visited under the optimal policy, and should not depend on \\\\pi at all. 
The more precise definition and derivation of the results are given in Section 3 and Appendix D, with additional discussions on the probability measure in Appendix F.\", \"p4\": \"No, it is not V_\\pi(s_0). Note that the initial state is distributed according to some initial distribution. R(\\pi) is in fact the expected reward under $\\pi$, i.e., the expected value of V_\\pi(s_0) over the initial state. We use the letter R with the hope that this quantity is related to reward, and use \\pi to clarify that this is the reward related to the policy. In fact, this type of notation does appear in other literature, e.g., [3]. We could use other notation if it helps.\", \"p5\": \"This is meant to precisely and formally introduce the mathematics of the robust MDP formulation. Please also refer to comment 2 above.\\n\\n\\n\\n[1] Perolat, Julien, et al. \\\"Approximate dynamic programming for two-player zero-sum Markov games.\\\" International Conference on Machine Learning (ICML 2015). 2015.\\n[2] Schulman, John, et al. \\\"Trust region policy optimization.\\\" International Conference on Machine Learning. 2015.\\n[3] Pinto, Lerrel, et al. \\\"Robust adversarial reinforcement learning.\\\" ICML (2017).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your encouraging comments. We agree with your suggestions and we will revise our paper accordingly. We will also comment on the gap between our analysis and AGZ in the introduction to make it clearer, and discuss potential future work (e.g., considering approximation errors due to MCTS and the value function) in the conclusion.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for the detailed comments.\\n\\nOur goal is to develop a quantitative understanding of AlphaGo Zero (AGZ), moving beyond the intuitive justification for the algorithms in the original work. We believe that a rigorous mathematical analysis is crucial to provide a solid foundation for understanding AGZ and similar algorithms. This requires developing (i) a precise mathematical model and (ii) a quantitative performance bound within the model. \\n\\nOur work takes an important step in this direction by modeling AGZ\\u2019s self-play and its supervised learning algorithm accurately. In particular, we use the turn-based game model to capture the self-play aspect. We develop a quantitative bound in terms of the cross-entropy loss in supervised learning, which is the \\u201cmetric\\u201d of choice in AGZ. While the cross-entropy loss seems intuitive, using it as a quantitative performance measure requires careful thought. For example, in Appendix F (page 19, 2nd paragraph), we discussed a scenario where this intuition is incorrect under a careless measure. That is, seemingly \\u201cobvious\\u201d algorithms can fail in the absence of a rigorous mathematical proof. \\n\\nWe agree that there is a gap between AGZ and our model. As mentioned in our paper, MCTS converges to the optimal policy for both classical MDPs and stochastic games. Hence in this paper, we model AGZ\\u2019s MCTS policy by the optimal policy, and mainly focus on the other two key ingredients of AGZ, self-play and supervised learning. It will be interesting to study how the error between MCTS and the optimal policy affects the iterative algorithm. This is a research direction we think is worth pursuing in the future.\\n\\nWe also agree with the reviewer that some of our statements might be too strong. We will revise accordingly. 
Instead of ``immediate justification'', we believe this work does provide a first-step, formal framework towards a better theoretical understanding. We will also revise the title, perhaps to ``applying AGZ'', so as to make the connection to MDPs clearer in our paper.\"}",
"{\"title\": \"The results in the paper are relatively straightforward and there is a clear gap.\", \"review\": \"This paper seeks to understand the AlphaGo Zero (AGZ) algorithm and extend the algorithm to regular sequential decision-making problems. Specifically, the paper answers three questions regarding AGZ: (i) What is the optimal policy that AGZ is trying to learn? (ii) Why is cross-entropy the right objective? (iii) How does AGZ extend to generic sequential decision-making problems? This paper shows that AGZ\\u2019s optimal policy is a Nash equilibrium, that the KL divergence bounds the distance to the optimal reward, and that the two-player zero-sum game formulation could be applied to sequential decision making by introducing the concept of a robust MDP. Overall the paper is well written. However, there are several concerns about this paper.\\n\\nIn fact, the key result obtained in this paper is that minimizing the KL-divergence between the parametric policy and the optimal policy (Nash equilibrium) (using SGD) will converge to the optimal policy. It is based on a bound (2), which states that when the KL-divergence between a policy and the optimal policy goes to zero then the return for the policy will approach that of the optimal policy. This bound is not so surprising because as long as certain regularity conditions hold, the policies being close should lead to the returns being close. Therefore, it is an overclaim that the KL-divergence bound (2) provides an immediate justification for AGZ\\u2019s core learning algorithm. As mentioned earlier, the actual conclusion in Section 4.2 is that minimizing the KL-divergence between the parametric policy and the optimal policy by SGD will converge to the optimal policy, which is straightforward and is not what AlphaGo Zero does. This is because there is an important gap: the MCTS policy is not the same as the optimal policy. The effect of the imperfection in the target policy is not taken into account in the paper. 
A more interesting question to study is how this gap affects the iterative algorithm, and whether/how the error in the imperfect target policy accumulates/diminishes so that iteratively minimizing KL-divergence with an imperfect \\pi* (by MCTS) could still lead to the optimal policy (Nash equilibrium).\n\nFurthermore, the robust MDP view of the AGZ in sequential decision-making problems is not so impressive either. It is more or less like a reformulation of the AGZ setting in the MDP problem. And it is commonly known that a two-player zero-sum game is closely related to minimax robust control. Therefore, it cannot be called \u201cgeneralizing AlphaGo Zero\u201d as stated in the title of the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting insights about alphaGo Zero and a nice case-study.\", \"review\": \"This paper analyzes the AlphaGo Zero algorithm by showing that the optimal policy corresponds to a Nash equilibrium. The authors then show that the equilibrium corresponds to a KL-minimization. Finally, the show on a classical scheduling task.\\n\\nOn the positive side, the paper is well written and structured. The results presented are very interesting, specially showing that stochastic approximation of a KL-divergence minimization. The case-study is also interesting, although does not improve current state-of-the-art. On the negative side, I think the relevance and novelty of the results should be explained better.\\n\\nFor example, it is not clear the strong emphasis on the robust MDP formalization and the fact that MCTS finds a Nash equilibrium. The MDP formalization is rather straightforward. Also, MCTS has been used extensively to find Nash equilibria in both perfect and imperfect games, e.g., \\\"Online monte carlo counterfactual regret minimization for search in imperfect information games\\\". Maybe the authors can elaborate more on the significance/relevance of this contribution.\\n\\nBesides, the power of AlphaGo Zero resides in the combination of the MCTS together with the compact representation learning of the value functions. The presented analysis seems to neglect the error term corresponding to the value function.\", \"there_are_other_minor_details\": [\"Eq(2). notation: \\\\forall s is missing\", \"Theorem 2 should be Theorem 1\", \"\\\"there are constraints per which state can transition\\\"\", \"\\\"P1 is agent\\\" -> \\\"P1 is the agent\\\"\", \"\\\"Pinker\\\" -> \\\"Pinsker\\\"\", \"C_R in Eq(5) is not introduced.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"heavy on notations, limited impact applicability / experimental results\", \"review\": \"The paper proposes a formal framework to claim that Alpha Zero might converges to a Nash equilibrium. The main theoretical result is that the reward difference between a pair of policy and the Nash policy is bounded by the expected KL of these policy on a state distribution sampled from the Nash policies.\\n\\nThe paper is quite heavy on notations and relatively light on experimental results. The main theoretical results is a bit remote from the case Alpha Zero is applied to. Indeed the bound is in 1/(1-/gamma) while Alpha Zero works with gamma = 1. Also \\n\\nCasting a one player environment as a two player game in which nature plays the role of the second player makes the paper very heavy on notations.\\n\\nIn the experimental sections, the only comparison with RL types algorithm is with SARSA, it would be interesting to know how other RL algorithms, perhaps model free, would compare to this, i.e. is Alpha Zero actually necessary to solve this tasks?\\n\\n\\n--- \\np 1\\n\\n' it uses the current policy network g_theta' : policy and value network.\\n\\np 2 / appendix\\nNo need to provide pseudo code for alpha zero the original paper already describes that?\\n\\np2 (2). It seems a bit surprising to me that the state density rho does not depend upon pi but only on pi star?\", \"p4\": \"Not sure why you need to introduce R(pi), isnt it just V_pi (s_0) ? Also usually the letter R is used for the return i.e. the sum of discounted reward without the expectation, so this notation is a bit confusing?\", \"p5\": \"\", \"paragraph2\": \"I don't quite see the point of this.\", \"p8\": \"\\\"~es, because at most on packet can get serviced from any input or output port.~\\\" typo ?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJgYl205tQ | Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality | [
"Sukarna Barua",
"Xingjun Ma",
"Sarah Monazam Erfani",
"Michael Houle",
"James Bailey"
] | Generative Adversarial Networks (GANs) are an elegant mechanism for data generation. However, a key challenge when using GANs is how to best measure their ability to generate realistic data. In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality. In particular, we propose a new evaluation measure, CrossLID, that assesses the local intrinsic dimensionality (LID) of input data with respect to neighborhoods within GAN-generated samples. In experiments on 3 benchmark image datasets, we compare our proposed measure to several state-of-the-art evaluation metrics. Our experiments show that CrossLID is strongly correlated with sample quality, is sensitive to mode collapse, is robust to small-scale noise and image transformations, and can be applied in a model-free manner. Furthermore, we show how CrossLID can be used within the GAN training process to improve generation quality. | [
"Generative Adversarial Networks",
"Evaluation Metric",
"Local Intrinsic Dimensionality"
] | https://openreview.net/pdf?id=BJgYl205tQ | https://openreview.net/forum?id=BJgYl205tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skl7msNbxE",
"BklQaxijTm",
"HJe-tKh9am",
"Sylcwt3cTm",
"S1xLVKn5aX",
"Bygbx_hqam",
"r1g4u825TX",
"Bkltr83q6m",
"SkxnJIn5a7",
"rJlszM39aX",
"BJxNftyy6X",
"SJgEnknh2m",
"rkxxo9DX27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544796955467,
1542332602617,
1542273401423,
1542273377586,
1542273325768,
1542273001365,
1542272620053,
1542272577018,
1542272483653,
1542271506620,
1541499147637,
1541353387994,
1540745879912
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1095/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1095/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1095/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1095/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper propose a new metric for the evaluation of generative models, which they call CrossLID and which assesses the local intrinsic dimensionality (LID) of input data with respect to neighborhoods within generated samples, i.e. which is based on nearest neighbor distances between samples from the real data distribution and the generator. The paper is clearly written and provides an extensive experimental analysis, that shows that LID is an interesting metric to use in addition to exciting metrics as FID, at least for the case of not to complex image distributions The paper would be streghten by showing that the metric can also be applied in those more complex settings.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Intersting new evaluation metric which might have a scalabilty issue.\"}",
"{\"title\": \"Major sections updated in the revised paper\", \"comment\": \"To address the reviewers' comments, the following sections/subsections of the paper was updated:\\n-Section 4.1 (in the context of CrossLID, in the first paragraph after its definition)\\n-Appendices F, G, H, I, J, L\\nWe hope the above information would help the reviewers to identify the major changes incorporated in the paper. \\nApart from these, sections 3, 6.1, 6.2, and 7 have also been updated with minor changes with respect to reviewers comments.\"}",
"{\"title\": \"Response to Reviewer 3 - Part 1\", \"comment\": \"Thank you very much for your comments. Please read the above \\u201cGeneral Response to Reviewers 1, 2 and 3\\u201d for our general response G1, G2 and G3.\\n\\n--\\\"Clarity: I think the clarity can be improved -- instead of stating the (rather abstract) properties of LID, the readers might benefit from the direct discussion of the LID estimator and a couple of examples, derive the max - mean relationship for the MLE estimator and provide some guiding comments. In a later section one might discuss why the estimator is so powerful and generally applicable. Secondly, the story starts with \\u201cdiscriminability of the distance measure\\u201d and the number of latent variables needed to do it, but I felt that this only complicated matters as many of these concepts are unclear at this point.\\u201d \\n\\nRESPONSE 3.1: The reviewer is correct in regarding the MLE estimator of LID as determined by (the reciprocal of) the difference between the maximum and mean of the log-distances within a neighborhood sample. This difference is basically how the estimator assesses the form of discriminability modeled by LID. While we still feel that it is important to introduce LID from a correct, theoretical perspective (from its motivation in terms of intrinsic dimensionality & discriminability), we agree that this practical explanation of the MLE estimator is helpful too. We have added an explanation in terms of max and mean log-distances to Section 3 (in the context of LID, immediately after the statement of the MLE estimator), Section 4.1 (in the context of CrossLID, in the first paragraph after its definition), and Section 4.2 (in the statement of Equation 7).\\n\\n\\n--\\\" Estimator is relatively easy to compute in practice (i.e. the bottleneck will still be in the forward and backward passes of the DNNs).\\u201d\\n\\nRESPONSE 3.2: In fact, only one forward pass is required to compute CrossLID measure. 
No backward pass is needed for feature extraction. We have added mention of this in Section 4.2 of the updated paper.\\n\\n--\\\"Con:\\n- FID vs CrossLID: I feel that many arguments against FID are too strong. In particular, in \\u201crobustness to small input noise\\u201d and \\u201crobustness to input transformation\\u201d you are changing the underlying distribution *significantly* -- why should the score be invariant to this? After all, it does try to capture the shift in distribution.\\u201d\\n\\nRESPONSE 3.3: Please see our responses G3 above. This is an interesting comment. We believe that debate over properties of an ideal GAN quality measure is extremely important for the ICLR community. We do not claim our perspective is the only one, but we do believe it adds an important new dimension to the discussion (and after all, such type of discussion should be one of the purposes of a paper at ICLR). We have added some extra discussion to \\u201cRobustness to small input noise\\u201d section of the updated paper about this issue. See also general comment G3 above. A potential drawback of high sensitivity to low noise levels is that the metric may respond inconsistently for images with low noise as compared to images of extremely low quality. For which we show in Fig. 17, FID rates the images with centers occluded by black rectangles to be of better quality than images with 2% Gaussian noise --- quite the opposite to human visual judgment.\\n\\n\\n--\\\"In the robustness to sample size again FID is criticized to have a high-variance in low-sample size regime: This is well known, and that\\u2019s why virtually all work presenting FID results apply 10k samples and average out the results over random samples. 
In this regime it was observed that it has a high bias and low variance (Figure 1 in [1]).\\u201d\\n\\nRESPONSE 3.4: We have added a comment to the \\u201cRobustness to sample size\\u201d section in the experiments of the updated paper noting that this methodology is already being used in the literature ([1]) when deploying FID. \\nNote that in all our experiments related to FID we used a sample size of 50k, which is large enough for FID to get a reasonably accurate estimation.\"}",
"{\"title\": \"Response to Reviewer 3 - Part 2\", \"comment\": \"--\\\"In terms of the dependency of the scores to an external model, why wouldn\\u2019t one compute FID on the discriminator feature space? Similarly, why wouldn\\u2019t one compute FID in the pixel space and get an (equally bad) score as LID in pixel space? Given these issues, in my opinion, Table 1 overstates the concerns with FID, and understates the issues with CrossLID.\\u201d\\n\\n\\nRESPONSE 3.5: Please see our responses G2 above. Although we believe exploring this issue is beyond the scope of our study, we have done some preliminary experiments on FID in a discriminator feature space, and the results indicate that FID can be computed as well using the discriminator activations. We have updated the \\u201cDependency on external model\\u201d in section 6.1 and Table 1 of the paper accordingly to reflect that FID might also be computed on the discriminator feature space.\\n\\n\\n--\\\" FID vs CrossLID in practice: I argue that the usefulness comes from the fact that relative model comparison is sound. From this perspective it is critical to show that the Spearman\\u2019s rank correlation between these two competing approaches on real data sets is not very high -- hence, there are either sample quality or mode dropping/collapsing issues detected by one vs the other. Now, Figure 1 in [1] shows that this FID is sensitive to mode dropping. Furthermore, FID is also highly correlated with sample quality (page 7 of [2]).\\u201d\\n\\nRESPONSE 3.6 For correlation tests, please see our response 3.10 below. We agree that it has been shown in the literature that FID is sensitive to mode dropping and correlated with sample quality. Indeed FID also performs well on these scenarios in our paper (See \\u201cSensitivity to mode collapse\\u201d section and Table 1). 
Please also see our response G1 above - we are *not* arguing that one needs to \\u201creplace\\u201d FID.\\n\\n--\\\" A critical aspect here is that in pixel space of large dimension the distances will tend to be very similar, and hence all estimators will be practically useless. As such, learning the proper features space is of paramount importance. In this work the authors suggest two remedies: (1) Compute a feature extractor by solving a surrogate task and have one extractor per data set. (2) During the training of the GAN, the discriminator is \\u201clearning\\u201d a good feature space in which the samples can be discriminated. Both of these have significant drawbacks. For (1) we need to share a dataset-specific model with the community. This is likely to depend on the preprocessing, model capacity, training issues, etc.. Then, the community has to agree to use one of these. On the other hand, (2) is only useful for biasing a specific training run. Hence, this critical aspect is not addressed and the proposed solution, while sensible, is unlikely to be adopted.\\u201d\\n\\nRESPONSE 3.7: We have updated the paper to note debate around these issues. Our view is this - the community could choose to use CrossLID in different ways - it could use a dataset specific feature extractor (as we have outlined), or it might choose to use a single feature extractor based on the Inception model (like what is done for FID and LID). We preferred to evaluate the proposed measure on dataset-specific feature extractors as they give more robust feature vectors, and the evaluation metric is expected to have high discriminability. The drawback of using a single feature extractor, e.g., Inception, is that for a very different data distribution, e.g., SVHN, the features may not be robust enough. Authors in (Shane Barratt and Rishi Sharma, 2018) also provided similar arguments in favor of domain-specific feature extractors. 
Regarding (2), we agree that training specific bias may be induced. Indeed, our intention was to only show that the discriminator feature space may work in practice for feature extraction when no external model is available. Thus, in case of unavailability of an external feature extractor, for example, in case of a non-image, unlabeled dataset, one may find the approach useful for computing the value of an evaluation metric.\"}",
"{\"title\": \"Response to Reviewer 3 - Part 3\", \"comment\": \"--\\\" Main contributions section is too strong -- avoiding mode collapse was not demonstrated. Arguably, given labeled data, the issue can be somewhat reduced if the modes correspond to labels. Similarly, if the data is well-clusterable one can expect a reduction of this effect. However, as both the underlying metric as well as the clustering depends on the feature space, I believe the claim to be too strong. Finally, if we indeed have labels or some assumptions on the data distribution, competing approaches might exploit it as well (as done with i.e. conditional GANs).\\u201d\\n\\nRESPONSE 3.8: Mode collapse avoidance was evidenced in the paper in Section 6.2 (\\u201cEffectiveness in Preventing Mode Collapse\\u201d) and in more detail in Appendix M.\\nWe have reworded the main contributions section. Please see comment G2 above. \\nWe do not believe it is possible to deploy an oversampling procedure with FID on classic GAN models. The reason is that for a given class, FID would need to compare the GAN distribution samples with the real distribution samples. However, there is no class label available for the GAN distribution samples, and so such comparison isn\\u2019t possible. In contrast, CrossLID only requires availability of class labels for the real distribution samples and hence the oversampling approach can be used. \\nEstimation of CrossLID is possible mode-wise, because the estimation is essentially \\u201clocal\\u201d. One just computes the nearest neighbours of a real sample, using neighbours from the entire GAN distribution. Then an average is taken over all real samples within a class. In contrast, estimation of FID is essentially \\u201cglobal\\u201d, and operates by comparing two distributions. Mode-wise computation of FID may be possible for specific types of GAN models such as conditional GANs. 
The benefit of CrossLID is that it does not require any such specific GANs for mode-wise performance estimation. In regard to conditional GANs, this is a good suggestion for future work and we have added a note about this in future work of the paper. \\n\\n\\n--\\\" In nonparametric KNN based density estimation, one often uses statistics based on KNN distances. What is the relation to LID?\\u201d\\n\\nRESPONSE 3.9: KNN-based density estimation makes explicit use of the volume of the m-dimensional ball with radius equal to that of the neighborhood. The LID model, on the other hand, is concerned with the order of magnitude (or scale) of the growth rate in probability measure (not volume) as neighborhoods are expanded. As such, LID has the very useful property of being oblivious to the representational dimension of the data domain. We have updated Section 3 of the paper so as to emphasize the lack of dependence on knowledge of the representational dimension.\\n\\n\\n--\\\"With respect to the negative points above, without having a clear cut case why this measure outperforms and should replace FID in practice, I cannot recommend acceptance as introducing yet another measure might slow down the progress. To make a stronger case I suggest:\\n(1) Compute Spearman's rank correlation between FIDs and CrossLIDs of several trained models across these data sets.\\n(2) Compute the Pearson's correlation coefficient across the data sets. Given that your method has access to dataset specific feature extractors I expect it perform significantly better than FID.\\u201d\", \"response\": \"3.10: We have included these results in Appendix F of the paper. We found that CrossLID and FID scores are well-correlated (but not perfectly correlated) for all datasets in terms of both Pearson\\u2019s coefficients and Spearman\\u2019s rank coefficients. 
Although the two metrics are well correlated, simple scatter plots of the scores and ranks show that there are subtle differences in their exact rankings of the models, and thus each measure provides a different perspective on the GAN-generated data. We believe this validates the introduction of CrossLID - it is well correlated with FID, but not exactly, and thus provides an additional perspective on GAN sample quality.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you very much for your comments.\\n\\n--\\\"Cons:\\n-The CLID highly depends on the predefined neighborhood size, which is not studied properly during the paper. Authors suggest some experimentally fixed values, but a proper analysis (at least empirically), would be useful for the readers.\\n- The robustness against input noise is studied only for small values, which is not completely realistic.\\u201d\\n\\nRESPONSE 2.1: We have included the results of \\u201cimpact of neighborhood size on CrossLID estimation\\u201d in Appendix J and \\u201cimpact of high level noise\\u201d in Appendix H (please see Fig. 18 and relevant discussion in Appendix H) of the updated paper.\"}",
"{\"title\": \"Response to \\\"Concerns about clarity and scalability of the metric\\\" - Part 1\", \"comment\": \"Thank you very much for your comments. Please read the above \\u201cGeneral Response to Reviewers 1, 2 and 3\\u201d for our general response G1, G2 and G3.\\n\\n-- \\u201cFigure 1 is a good quick overview of some of the behaviors of the metric but it is not clear why the MLE estimator of LID should be preferred (or perform any differently) on this toy example from a simple average of 1-NN distances. The same is also appears to be true for the motivating example in Figure 8 as well.\\n\\nTo summarize a bit, I found that the paper did not do the best job motivating and connecting the proposed metric to the task at hand and describing in an accessible fashion its potentially desirable properties.\\u201d\\n\\nRESPONSE 1.1: MLE was identified as the best-performing estimator of LID in Amsaleg et. al. (2018), and for this reason we adopt it here. We have made this more clear when introducing it in Equation 5 (Section 3). As to why an adaptation of LID would be better than simply averaging 1-NN distances of points, the answer lies in the concentration effect of higher dimensions: as the local intrinsic dimensionality of the data (or, if you prefer, the dimension of the local submanifold) increases, the discriminability of the distance measure diminishes. Unlike methods based on thresholding of neighbor distances (such as Hausdorff distance or other linkage criteria from clustering), LID scores are naturally adaptive to local differences in intrinsic dimensionality. Although the two-dimensional point configurations shown in Fig. 1 are amenable to such techniques, they quickly break down for data in higher dimensions, or across a range of different local intrinsic dimensionalities. 
Without taking local intrinsic dimensionality into account, we would not know whether a given large 1-NN distance value indicates `large spatial separation' or `conformity within a locality of high intrinsic dimensionality' - implying that the direct use of 1-NN distance information leads to a rather poor assessment of the relationship of a point to its surroundings. The strength of the LID model is that it naturally allows for comparison of local effects across localities of different dimensionality. We have added discussion about these issues to Section 4.1 of the updated paper.\\n\\n-- \\u201cThe experimental section performs a variety of comparisons between CrossLID, Inception Score and FID. The general finding of the broader literature that Inception Score has some undesirable properties is confirmed here as well. A potentially strong result showing where CrossLID performs well at inter-class mode dropping, Figure 4, is unfortunately confounded with sample size as it tests FID in a setting using 100x lower than the recommended amount of samples. \\u201c\\n\\n\\nRESPONSE 1.2: Actually, in each case the sample size used for calculating FID was 50k. The 50k samples were created by performing oversampling of a dataset with n<50k. E.g. Select 10 classes, each with 100 samples (so a total of n=10*100=1000 instances). Then create a dataset of size 50k by sampling with replacement 50000 times from this pool of 1000 instances. We have added a clarifying comment to the \\u201cSensitivity to mode collapse\\u201d subsection of Section 6.1.\\n\\n\\n-- \\\"The analysis in this section is primarily in the form of interpretation of visual graphs of the behavior of the metrics as a quantity is changed over different datasets. 
I have some concerns that design decisions around these graphs (normalizing scales, subtracting baseline values) could substantially change conclusions.\\\"\\n\\nRESPONSE 1.3: We have added results (graphs with normalized scores) in Appendices G and H of the updated paper. The conclusions are generally unchanged. If there are other figures the reviewer would specifically like updated, please let us know.\"}",
"{\"title\": \"Response to \\\"Concerns about clarity and scalability of the metric\\\" - Part 2\", \"comment\": \"-- \\\"An oversampling algorithm based on CrossLID is also introduced which results in small improvements over a baseline DCGAN/WGAN and improves stability of a DCGAN when normalization is removed. A very similar oversampling approach could be tried with FID but is not - potentially leaving out a result demonstrating the effectiveness of CrossLID.\\u201d\\n\\nRESPONSE 1.4: We do not believe it is possible to deploy an oversampling procedure with FID on classic GAN models. The reason is that for a given class, FID would need to compare the GAN distribution samples with the real distribution samples. However, there is no class label available for the GAN distribution samples, and so such comparison isn\\u2019t possible. In contrast, CrossLID only requires availability of class labels for the real distribution samples and hence the oversampling approach can be used. \\nEstimation of CrossLID is possible mode-wise, because the estimation is essentially \\u201clocal\\u201d. One just computes the nearest neighbours of a real sample, using neighbours from the entire GAN distribution. Then an average is taken over all real samples within a class. In contrast, estimation of FID is essentially \\u201cglobal\\u201d, and operates by comparing two distributions. Mode-wise computation of FID may be possible for specific types of GAN models such as conditional GANs. The benefit of CrossLID is that it does not require any such specific GANs for mode-wise performance estimation. Please also see G2 above.\\n\\n\\n-- \\\"The paper also proposes computing CrossLID in the feature space of a discriminator to make the metric less reliant on an external model. 
While this is an interesting thing to showcase - FID can also be computed in an arbitrary feature space and the authors do not clarify or investigate whether FID performs similarly.\\u201d\\n\\nRESPONSE 1.5: Please see G2 above. Although we believe exploring this issue is beyond the scope of our study, we have done some preliminary experiments on FID in a discriminator feature space, and the results indicate that FID can be computed as well using the discriminator activations. Accordingly, we have updated Section 6.1 (\\u201cDependency on external model\\u201d) and Table 1 to reflect this finding.\\n\\n-- \\\"Several experiments get into some unclear value judgements over what the behavior of an ideal metric should be. The authors of FID argue the opposite position of this paper that the metric should be sensitive to low-level changes in addition to high-level semantic content. It is unclear to me as the reader which side to take in this debate.\\u201d\\n\\nRESPONSE 1.6: This is an interesting comment. We believe that debate over properties of an ideal GAN quality measure is extremely important for the ICLR community. We do not claim our perspective is the only one, but we do believe it adds an important new dimension to the discussion (and after all, such type of discussion should be one of the purposes of a paper at ICLR). We have added some extra discussion to \\u201cRobustness to small input noise\\u201d section of the updated paper about this issue. See also general comment G3 above.\\nA potential drawback of high sensitivity to low noise levels is that the metric may respond inconsistently for images with low noise as compared to images of extremely low quality. For which we show in Fig. 
17, FID rates the images with centers occluded by black rectangles to be of better quality than images with 2% Gaussian noise --- quite the opposite to human visual judgment.\\n\\n-- \\\"I have some final concerns over the fact that the metric is not tested on open problems that GANs still struggle with. Current SOTA GANs can already generate convincing high-fidelity samples on MNIST, SVHN, and CIFAR10. Exclusively testing a new metric for the future of GAN evaluation on the problems of the past does not sit well with me.\\u201d\\n\\nRESPONSE 1.7: We understand it is certainly interesting to test measures on complex datasets. Our philosophy though, is that evaluation of GAN performance on MNIST/SVHN/CIFAR10 is still far from being a closed issue. If we have time (i.e. enough computational resources for such a big dataset) during the review response period, we will attempt to evaluate on imagenet, but for feasibility we need to prioritize this suggestion lower compared to the other issues raised by the reviewer(s).\\n\\n-- \\\"Some questions:\\n* Could the authors comment on run time comparisons of the metric with FID/IS?\\u201d\", \"response\": \"1.8: We have added runtime statistics to Appendix I of the paper. The three measures have increasing running times with a linear trend as sample size increases, but at different pace and scales with CrossLID the lowest and FID the highest.\"}",
"{\"title\": \"Response to \\\"Concerns about clarity and scalability of the metric\\\" - Part 3\", \"comment\": \"-- \\\"* How much benefit is there from something like CrossLID compared to the simplest case of distance to 1-NN in feature space? More generally an analysis of how the benefits of CrossLID as you increase neighborhood size would help illuminate the behavior of the metric.\\u201d\\n\\nRESPONSE 1.9: We have not included the 1-NN distance in our study for the reasons laid out in Response 1.1. Also, in Section 4.1 of our updated version we now note that in Amsaleg et al. (2018), an estimator of local intrinsic dimensionality using only the 1-NN and k-NN distance measurements (a variant in the \\u2018MiND\\u2019 family) was shown to lead to relatively poor performance. \\n\\n\\n-- \\\"* For Table 2, what are the FID scores and how do they correlate with CrossLID and Inception Score?\\u201d\\n\\nRESPONSE 1.10: We have updated Table 2 of the paper and included FID and Inception results in Appendix L of the updated paper. In addition, we have also added standard deviations for all metrics in Table 2 and Appendix L. The results indicate that, for all datasets, FID and CrossLID scores rank the competitive methods of our experiments similarly. Inception score is also consistent with FID and CrossLID scores, except on the SVHN dataset. The discussions are included in Appendix L.\\n\\n\\n-- \\\"Cons:\\n- No error bars / confidence intervals are provided to show how sensitive the various metrics tested are to sample noise. \\n\\n- Authors test FID outside of recommended situations (very low #of samples (500) in Figure 4) without noting this is the case. 
The stated purpose of Figure 4 is to evaluate inter-class mode dropping yet this result is confounded by the extremely low N (100x lower than the recommended N for FID).\\n- It is unclear whether metric continues to be reliable for more complex/varied image distributions such as Imagenet (see main text for more discussion)\\n- Many of the proposed benefits of the model (mode specific dropping and not requiring an external model) can also be performed for FID but the paper does not note this or provide comparisons.\\u201d\\n\\nRESPONSE 1.11: Addressed above, please see 1.5, 1.8, G3. For all metrics, the standard deviation is very small. We have added standard deviations for all metrics in Table 2 and Appendix L of GAN experimental results.. We hope these results will help the reader to understand the confidence interval of the metrics in general.\"}",
"{\"title\": \"General Response to Reviewers 1, 2 and 3\", \"comment\": \"We thank the reviewers for their comments. Based on the reviews taken together, we first make some general comments (G1-G3), followed by more detailed point by point comments for each review.\", \"g1\": \"We are not arguing CrossLID should \\u201creplace\\u201d existing measures such as FID. CrossLID measures GAN quality from a quite different perspective to FID (local rather than global). It can be deployed as an additional tool for the community to use for understanding and assessing GAN quality, and might be used alongside existing measures like FID or Inception Score. As an analogy, many measures for validating clustering quality have been developed (both internal and external) - a typical research paper compares different clustering algorithms in terms of their quality. It is well known that there is no single best quality measure and so it is common for researchers to report performance with respect to several measures, for more robustness of their findings. Our updated paper now includes mention of these perspectives in the conclusion.\", \"g2\": \"In our study, we have proposed strategies for CrossLID to work effectively through i) using feature space of the DNN to assess distances, and ii) using class labels to improve learning of certain modes. Reviewers 1 and 3 argued that such strategies ought also to be tried in conjunction with FID. We provide more detailed discussion about the feasibility of this below, but at a high level, we would be pleased if our strategies could also help enhance FID (we have updated the paper to note that such strategies might be evaluated for FID). Such an outcome would add (not detract) value to our overall contribution, since to the best of our knowledge, i) and ii) are novel strategies in the context of assessing GAN quality.\", \"g3\": \"Reviewers 1 and 3 raised the issue of stability. 
Is it better for a GAN metric to be sensitive to low levels of noise, or insensitive to low levels of noise? We have argued that the latter is desirable, whereas the reviewers appear to lean towards the former. We understand that there is some room for debate here and have updated the paper accordingly. To the best of our knowledge though, this issue of stability is unaddressed by the GAN community and we pose it as an open question \\u201cWhat level of sensitivity is appropriate for a GAN quality measure applied on images which are highly recognizable, but which contain a low level of noise\\u201d?\"}",
"{\"title\": \"Concerns about clarity and scalability of the metric\", \"review\": \"The paper proposes a new metric to evaluate GANs. The metric, Cross Local Intrinsic Dimensionality (CrossLID) is estimated by comparing distributions of nearest neighbor distances between samples from the real data distribution and the generator. Concretely, it proposes using the inverse of the average of the negative log of the ratios of the distances of the K nearest neighbors to the maximum distance within the neighborhood.\\n\\nThe paper introduces LID as the metric to be used within the introduction, but for readers unfamiliar with it, the series of snippets \\u201cmodel of distance distributions\\u201d and \\u201cassesses the number of latent variables\\u201d and \\u201cdiscriminability of a distance measure in the vicinity of x\\u201d are abstract and lack concrete connections/motivations for the problem (sample based comparison of two high-dimensional data distributions) the paper is addressing.\\n\\nAfter an effective overview of relevant literature on GAN metrics, LID is briefly described and motivated in various ways. This is primarily a discussion of various high-level properties of the metric which for readers unfamiliar with the metric is difficult to concretely tie into the problem at hand. After this, the actual estimator of LID used from the literature (Amsaleg 2018) is introduced. Given that this estimator is the core of the paper, it seems a bit terse that the reader is left with primarily references to back up the use of this estimator and connect it to the abstract discussion of LID thus far.\\n\\nFigure 1 is a good quick overview of some of the behaviors of the metric but it is not clear why the MLE estimator of LID should be preferred (or perform any differently) on this toy example from a simple average of 1-NN distances. 
The same also appears to be true for the motivating example in Figure 8.\\n\\nTo summarize a bit, I found that the paper did not do the best job motivating and connecting the proposed metric to the task at hand and describing in an accessible fashion its potentially desirable properties.\\n\\nThe experimental section performs a variety of comparisons between CrossLID, Inception Score and FID. The general finding of the broader literature that Inception Score has some undesirable properties is confirmed here as well. A potentially strong result showing where CrossLID performs well at inter-class mode dropping, Figure 4, is unfortunately confounded with sample size as it tests FID in a setting using 100x fewer than the recommended number of samples. \\n\\nThe analysis in this section is primarily in the form of interpretation of visual graphs of the behavior of the metrics as a quantity is changed over different datasets. I have some concerns that design decisions around these graphs (normalizing scales, subtracting baseline values) could substantially change conclusions. \\n\\nAn oversampling algorithm based on CrossLID is also introduced which results in small improvements over a baseline DCGAN/WGAN and improves stability of a DCGAN when normalization is removed. A very similar oversampling approach could be tried with FID but is not - potentially leaving out a result demonstrating the effectiveness of CrossLID.\\n\\nThe paper also proposes computing CrossLID in the feature space of a discriminator to make the metric less reliant on an external model. 
While this is an interesting thing to showcase - FID can also be computed in an arbitrary feature space and the authors do not clarify or investigate whether FID performs similarly.\\n\\nThese two extensions, addressing mode collapse via oversampling and using the feature space of a discriminator are interesting proposals in the paper, but the authors do not do a thorough investigation of how CrossLID performs to FID here.\\n\\nSeveral experiments get into some unclear value judgements over what the behavior of an ideal metric should be. The authors of FID argue the opposite position of this paper that the metric should be sensitive to low-level changes in addition to high-level semantic content. It is unclear to me as the reader which side to take in this debate. \\n\\nI have some final concerns over the fact that the metric is not tested on open problems that GANs still struggle with. Current SOTA GANs can already generate convincing high-fidelity samples on MNIST, SVHN, and CIFAR10. Exclusively testing a new metric for the future of GAN evaluation on the problems of the past does not sit well with me.\", \"some_questions\": [\"Could the authors comment on run time comparisons of the metric with FID/IS?\", \"How much benefit is there from something like CrossLID compared to the simplest case of distance to 1-NN in feature space? More generally an analysis of how the benefits of CrossLID as you increase neighborhood size would help illuminate the behavior of the metric.\", \"For Table 2, what are the FID scores and how do they correlate with CrossLID and Inception Score?\"], \"pros\": [\"Code is available!\", \"The metric appears to be more robust than FID in small sample size settings.\", \"A variety of comparisons are made to several other metrics on three canonical datasets.\", \"The paper has two additional contributions in addition to the metric. 
Addressing mode collapse via adaptive oversampling and utilizing the features of the discriminator to compute the metric in.\"], \"cons\": [\"No error bars / confidence intervals are provided to show how sensitive the various metrics tested are to sample noise.\", \"Authors test FID outside of recommended situations (very low #of samples (500) in Figure 4) without noting this is the case. The stated purpose of Figure 4 is to evaluate inter-class mode dropping yet this result is confounded by the extremely low N (100x lower than the recommended N for FID).\", \"It is unclear whether metric continues to be reliable for more complex/varied image distributions such as Imagenet (see main text for more discussion)\", \"Many of the proposed benefits of the model (mode specific dropping and not requiring an external model) can also be performed for FID but the paper does not note this or provide comparisons.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The authors coupled a local intrinsic dimensionality measure to assess GAN frameworks concerning their ability to generate realistic data. The proposal is straightforward and could be applied in different GAN-based approaches, mainly being sensitive to mode collapse.\", \"review\": \"The paper is clear regarding motivation, related work, and mathematical foundations. The introduced cross-local intrinsic dimensionality (CLID) measure seems to be naive but practical for GAN assessment. In general, the experimental results seem to be convincing and illustrative.\", \"pros\": [\"Clear mathematical foundations and fair experimental results.\", \"CLID can be applied to favor GAN-based training, which is an up-to-date research topic.\", \"Robustness against mode collapse (typical discrimination issue).\"], \"cons\": \"- The CLID highly depends on the predefined neighborhood size, which is not studied properly in the paper. The authors suggest some experimentally fixed values, but a proper analysis (at least empirical) would be useful for the readers.\\n- The robustness against input noise is studied only for small values, which is not completely realistic.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Sample-based quantitative evaluation of generative models based on k-nearest neighbor queries from the observed samples into the generated samples in a learned feature space.\", \"review\": \"Statistics based on KNN distances are ubiquitous in machine learning. In this paper the authors propose to apply the existing LID metric to GANs. The metric can be decomposed as follows: (1) Given a point x in X, compute the k-nearest neighbors KNN(x, X) and let those distances be R1, R2, \\u2026, Rk. Now, rewrite LID(x, X) = [max_over_i (log Ri) - mean_over_i (log Ri)]^(-1) to uncover that the distribution of (log-)distances is summarized as a function of the max distance and the mean distance. (2) To extend the metric to two sets, A and B, define CrossLID(A; B) = E_(x in A) [LID(x, B)]. To see why CrossLID is useful, let X be the observed data and G the generated data. First consider CrossLID(A, B) where A=B=X which determines a lower-bound which is essentially the average (over elements of A) LID statistic determined by the underlying KNN graph of X. Now, keep A=X, and progressively change B to G (say by replacing some points from X with some points from G). This will induce a change of the distance statistics of some points from A, which will be detected on the individual LID scores of those points, and will hence be propagated to CrossLID. As a result, LID close to the baseline LID detects both sample quality issues as well as mode dropping/collapse issues. In practice, instead of computing this measure in the pixel space, one can compute it in the feature space of some feature extractor, or in some cases directly in the learned feature space of the generator. 
Finally, given some labeling of the points, one can keep track of the CrossLID statistic for each mode and use this during training to oversample modes for which the gap between the expected CrossLID and computed one is large.\", \"clarity\": \"I think the clarity can be improved -- instead of stating the (rather abstract) properties of LID, the readers might benefit from the direct discussion of the LID estimator and a couple of examples, derive the max - mean relationship for the MLE estimator and provide some guiding comments. In a later section one might discuss why the estimator is so powerful and generally applicable. Secondly, the story starts with \\u201cdiscriminability of the distance measure\\u201d and the number of latent variables needed to do it, but I felt that this only complicated matters as many of these concepts are unclear at this point.\", \"originality\": \"Up to my knowledge, the proposed application is novel, albeit built on an existing (well-known) estimator. Nevertheless, the authors have demonstrated several desirable properties which might be proven useful in practice.\", \"significance_of_this_work\": \"The work is timely and attempts to address a critical research problem which hinders future research on deep generative models.\", \"pro\": [\"Generally well written paper, although the clarity of exposition can be improved.\", \"Estimator is relatively easy to compute in practice (i.e. the bottleneck will still be in the forward and backward passes of the DNNs).\", \"Can be exploited further when labeled data is available\", \"Builds upon a strong line of research in KNN based estimators.\", \"Solid experimental setup with many ablation studies.\"], \"con\": \"- FID vs CrossLID: I feel that many arguments against FID are too strong. 
In particular, in \\u201crobustness to small input noise\\u201d and \\u201crobustness to input transformation\\u201d you are changing the underlying distribution *significantly* -- why should the score be invariant to this? After all, it does try to capture the shift in distribution. In the robustness to sample size again FID is criticized to have a high-variance in low-sample size regime: This is well known, and that\\u2019s why virtually all work presenting FID results apply 10k samples and average out the results over random samples. In this regime it was observed that it has a high bias and low variance (Figure 1 in [1]). In terms of the dependency of the scores to an external model, why wouldn\\u2019t one compute FID on the discriminator feature space? Similarly, why wouldn\\u2019t one compute FID in the pixel space and get an (equally bad) score as LID in pixel space? Given these issues, in my opinion, Table 1 overstates the concerns with FID, and understates the issues with CrossLID. \\n- FID vs CrossLID in practice: I argue that the usefulness comes from the fact that relative model comparison is sound. From this perspective it is critical to show that the Spearman\\u2019s rank correlation between these two competing approaches on real data sets is not very high -- hence, there are either sample quality or mode dropping/collapsing issues detected by one vs the other. Now, Figure 1 in [1] shows that this FID is sensitive to mode dropping. Furthermore, FID is also highly correlated with sample quality (page 7 of [2]).\\n- A critical aspect here is that in pixel space of large dimension the distances will tend to be very similar, and hence all estimators will be practically useless. As such, learning the proper features space is of paramount importance. In this work the authors suggest two remedies: (1) Compute a feature extractor by solving a surrogate task and have one extractor per data set. 
(2) During the training of the GAN, the discriminator is \\u201clearning\\u201d a good feature space in which the samples can be discriminated. Both of these have significant drawbacks. For (1) we need to share a dataset-specific model with the community. This is likely to depend on the preprocessing, model capacity, training issues, etc.. Then, the community has to agree to use one of these. On the other hand, (2) is only useful for biasing a specific training run. Hence, this critical aspect is not addressed and the proposed solution, while sensible, is unlikely to be adopted.\\n- Main contributions section is too strong -- avoiding mode collapse was not demonstrated. Arguably, given labeled data, the issue can be somewhat reduced if the modes correspond to labels. Similarly, if the data is well-clusterable one can expect a reduction of this effect. However, as both the underlying metric as well as the clustering depends on the feature space, I believe the claim to be too strong. Finally, if we indeed have labels or some assumptions on the data distribution, competing approaches might exploit it as well (as done with i.e. conditional GANs).\\n- In nonparametric KNN based density estimation, one often uses statistics based on KNN distances. What is the relation to LID?\\n\\nWith respect to the negative points above, without having a clear cut case why this measure outperforms and should replace FID in practice, I cannot recommend acceptance as introducing yet another measure might slow down the progress. To make a stronger case I suggest:\\n(1) Compute Spearman's rank correlation between FIDs and CrossLIDs of several trained models across these data sets.\\n(2) Compute the Pearson's correlation coefficient across the data sets. 
Given that your method has access to dataset-specific feature extractors, I expect it to perform significantly better than FID.\\n \\n[1] https://arxiv.org/pdf/1711.10337.pdf\\n[2] https://arxiv.org/pdf/1806.00035.pdf\\n\\n========\\nThank you for the detailed responses. I have updated my score from 5 to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
B1xFxh0cKX | Guided Evolutionary Strategies: Escaping the curse of dimensionality in random search | [
"Niru Maheswaranathan",
"Luke Metz",
"George Tucker",
"Dami Choi",
"Jascha Sohl-Dickstein"
] | Many applications in machine learning require optimizing a function whose true gradient is unknown, but where surrogate gradient information (directions that may be correlated with, but not necessarily identical to, the true gradient) is available instead. This arises when an approximate gradient is easier to compute than the full gradient (e.g. in meta-learning or unrolled optimization), or when a true gradient is intractable and is replaced with a surrogate (e.g. in certain reinforcement learning applications or training networks with discrete variables). We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search. We define a search distribution for evolutionary strategies that is elongated along a subspace spanned by the surrogate gradients. This allows us to estimate a descent direction which can then be passed to a first-order optimizer. We analytically and numerically characterize the tradeoffs that result from tuning how strongly the search distribution is stretched along the guiding subspace, and use this to derive a setting of the hyperparameters that works well across problems. Finally, we apply our method to example problems including truncated unrolled optimization and training neural networks with discrete variables, demonstrating improvement over both standard evolutionary strategies and first-order methods (that directly follow the surrogate gradient). We provide a demo of Guided ES at: redacted URL | [
"evolutionary strategies",
"optimization",
"gradient estimators",
"biased gradients"
] | https://openreview.net/pdf?id=B1xFxh0cKX | https://openreview.net/forum?id=B1xFxh0cKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgwmbWQgN",
"ByxowIZyJV",
"HkeFVwDRAm",
"HkesDWvop7",
"S1gvmZPsTQ",
"ryx-zWDiaX",
"Bke4RAO6nQ",
"rkxvkSUTnX",
"rJeWFl_Y37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544913182780,
1543603810674,
1543563056835,
1542316387115,
1542316318986,
1542316297145,
1541406411847,
1541395679417,
1541140601419
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1094/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1094/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1094/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1094/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1094/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1094/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1094/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1094/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1094/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a \\u201cguided\\u201d evolution strategy method where the past surrogate gradients are used to construct a covariance matrix from which future perturbations are sampled. The bias-variance tradeoff is analyzed and the method is applied to real-world examples.\\n\\nThe method is not entirely new, and discussion of related work as well as comparisons with it are missing. The main contribution is in the analysis and application to real-world examples, and the paper should be rewritten focusing on these contributions, while discussing existing work on this topic thoroughly.\\n\\nDue to these issues, I recommend rejecting this paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Related work is overlooked and not compared with.\"}",
"{\"title\": \"Multiple Hansen 2011 references!\", \"comment\": \"Our apologies--we were talking about different Hansen 2011 references. We thought the reviewer was referring to \\\"The CMA Evolution Strategy: A Tutorial\\\" (http://www.cmap.polytechnique.fr/~nikolaus.hansen/cmatutorial110628.pdf), but only today noticed this technical report \\\"Injecting External Solutions Into CMA-ES\\\" (https://hal.inria.fr/inria-00628254/document). We apologize for the confusion.\\n\\nThis Hansen 2011 reference is indeed very relevant; we thank the reviewer for bringing it to our attention. After a quick read, the main difference from our work is that we inject external information into the covariance matrix from which perturbations are sampled, whereas Hansen 2011 injects solutions by replacing the samples themselves. We are reading this reference more carefully and will update our paper accordingly.\"}",
"{\"title\": \"Regarding Hansen 2011\", \"comment\": \"Thank you for your response.\\n\\nAs the title of Hansen 2011 indicates, it of course considers injecting external solutions or directions. On the first page of this paper, they say,\\n--\\nExternal or modified proposal solutions or directions can have a variety of sources.\\n\\u2022 a gradient or Newton direction;\\n\\u2022 an optimal solution of a surrogate model built from already evaluated solutions;\\n\\u2022 the best-ever solution seen so far;\\n--\\n\\nThe authors might have misunderstood Hansen 2011, since the external solutions and directions (gradients) are transformed into solutions in the algorithm description.\"}",
"{\"title\": \"Thank you for your review. Comments below:\", \"comment\": \"Thank you for your review. Comments below:\\n\\nRegarding the first point, we agree that comparisons against other adaptive ES methods (CMA-ES and NES) would be very useful, and are currently performing these comparisons. We will update the paper with these results. However, there is a fundamental limitation with CMA-ES and NES in that they are both purely black box (no gradient information) optimizers. This can be especially seen in the current Figure 1 in the paper, where CMA-ES is not able to take advantage of the initial external gradient information available to Guided ES or SGD, and thus fails to make quick progress on the problem--it does begin to accelerate past standard ES as the covariance begins to adapt. NES will have the same issue. Again, we are running experiments to confirm this intuition.\\n\\nRegarding the second point, we carefully read through the Hansen 2011 reference again and could find no mention of using external gradient information to adapt the covariance matrix. As far as we can tell, Hansen 2011 focuses purely on adapting the covariance using information from iterates encountered during the optimization trajectory. To the best of our knowledge, our work is the first to propose incorporating \\u201csurrogate\\u201d gradient information into an ES algorithm, as well as to analyze the bias and variance of the resulting gradient estimate.\"}",
"{\"title\": \"Thank you for your review. Comments below:\", \"comment\": \"Thank you for your review. Comments below:\\n\\nRegarding clarity, we appreciate the reviewer\\u2019s comments and have added more context throughout the paper. We have added pseudocode in the main text, and moved more of the bias-variance derivation to the appendix (to make room for more exposition of the method in 3.2).\\n\\nIn addition, we would appreciate it if the reviewer could elaborate on which aspects of the paper they thought could use more context. Specifically, if Fig1a is insufficient to explain the method, what additional information would the reviewer have appreciated?\\n\\nRegarding experiments, we are running more baseline comparisons against other adaptive evolutionary strategy methods (CMA-ES and natural evolutionary strategies, or NES). If the reviewer has other specific suggestions as to what would make the experiments more exhaustive, we would appreciate them.\"}",
"{\"title\": \"Thank you for the review. Responses below:\", \"comment\": \"Thank you for the review. Responses below:\\n\\n\\u201cThe proposed guided search seems similar to (stochastic) quasi-Newton methods. For instance the form in (2) is indeed a rank-one update of the gradient. What is authors take on this relationship?\\u201d\\nQuasi-Newton methods assume access to first-order information about the objective, whereas our focus is on black-box optimization. Critically, we do not assume that the \\u201csurrogate\\u201d gradient information we are provided is reliable. Providing a way to robustly use this information when it is useful and to discard it when it is not is our primary contribution.\\n\\nOur update is indeed inspired by quasi-Newton methods, and we now note this in the maintext. In particular, as the author notes, we adapt the search covariance with a history of k past gradient estimates similar to how the approximate inverse Hessian is updated according to the past k gradient evaluations in L-BFGS. However, we are not trying to approximate the inverse Hessian. In our application, we are updating the covariance of the distribution used to perturb parameters.\\n\\n\\u201cThe analysis assumes that the gradient exists. The proposed method is interesting when the gradients are not available. Therefore, it is not clear in what sense this analysis would apply to general functions. The authors also assume that the second order Taylor expression is exact. Is this absolutely necessary? Would the analysis work when the function is approximated locally with its second order expansion?\\u201c\\nThese assumptions were made largely to simplify the presentation of the bias-variance analysis. In the deep learning applications we focused on, the gradient of most loss functions exist, however, they may not be tractable (due to intractable integrals over nuisance variables) or may not be useful (e.g., the gradient of a hard thresholding function is 0). 
We dropped the higher order Taylor remainder to declutter the exposition, however, the analysis still holds (up to higher order error terms) when the function is locally approximated to 2nd order around each iterate. \\n\\n\\u201cI guess the equation in (2) is satisfied irrespective of the distribution of the \\\\epsilon_i vectors. If I am right, then what is the role of the particular distribution used for sampling from the subspace of surrogate gradients?\\u201d\\nCorrect, the equation in (2) does not depend on the particular distribution for \\\\epsilon. We chose a Gaussian distribution because it is: simple, easy to sample from, and has bounded variance. This is also consistent with previous work on evolutionary strategies (CMA-ES and NES). It would be interesting to, in future work, explore different choices for this distribution either analytically or empirically.\\n\\n\\u201cThe authors state \\\"ES has seen a resurgence in popularity in recent years (Salimans et al., 2017; Mania et al., 2018).\\\" Both cited papers are not published in any conference or journal. Is there some recent but published work to support the statement?\\u201d\\nWe have added additional published citations [1, 2, 3, 4] all of which use evolutionary strategies in concert with neural networks.\\n\\nWe would appreciate if the reviewer could expand on their justification for the given score, or consider revising it in the context of our responses here.\\n\\n[1] Cui et al. Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks (NIPS 2018)\\n[2] Houthooft et. al. Evolved Policy Gradients (NIPS 2018)\\n[3] Ha and Schmidhuber. World Models. (NIPS 2018)\\n[4] Ha, David. Neuroevolution for deep reinforcement learning problems. (GECCO 2018)\"}",
"{\"title\": \"Good idea but its relation with a similar approach is overlooked and analysis is oversimplified\", \"review\": \"In this manuscript, the authors propose an approach that combines random search with the surrogate gradient information. To this end, the proposed method samples from the subspace of the surrogate gradients. This subspace is constructed by storing the previous surrogate gradients. After several assumptions, the authors also give a discussion of the bias-variance trade-off as well as of hyperparameter optimization. The manuscript ends with numerical experiments.\\n\\nThe proposed guided search seems similar to (stochastic) quasi-Newton methods. For instance, the form in (2) is indeed a rank-one update of the gradient. What is authors take on this relationship?\\n\\nThe analysis assumes that the gradient exists. The proposed method is interesting when the gradients are not available. Therefore, it is not clear in what sense this analysis would apply to general functions. The authors also assume that the second order Taylor expression is exact. Is this absolutely necessary? Would the analysis work when the function is approximated locally with its second order expansion? \\n\\nI guess the equation in (2) is satisfied irrespective of the distribution of the \\\\epsilon_i vectors. If I am right, then what is the role of the particular distribution used for sampling from the subspace of surrogate gradients?\\n\\nThe authors state \\\"ES has seen a resurgence in popularity in recent years (Salimans et al., 2017; Mania et al., 2018).\\\" Both cited papers are not published in any conference or journal. Is there some recent but published work to support the statement?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Improve random search by building a subspace of the previous surrogate gradients for derivative-free optimization; good results but paper lacks clarity and is quite hard to follow.\", \"review\": \"Summary: The paper proposes a method to improve random search by building a subspace of the previous k surrogate gradients, mixing it with an isotropic Gaussian distribution to improve the search. The results reported are good compared to other approaches for learning the weights of neural networks. However, the paper lacks clarity and is quite hard to follow.\", \"quality\": \"The paper presents a well-designed approach that is able to deal with optimization in high-dimensional spaces, by building a lower-order surrogate model built upon the previous gradients computed with this surrogate model. The analysis appears to be correct and lends credibility to the approach. The results reported are very good. However, testing is limited to relatively few cases. More experimental results on a good set of problems with several methods would have made the paper stronger and more convincing.\", \"clarity\": \"The paper is hard to follow. Maybe because I am not completely familiar with the topic, but many elements presented lack some context. The authors clearly appear to be knowledgeable about their topic, but do not provide all the background required to follow their thoughts. The method could have been better illustrated; I found Fig. 1a not enough to explain the method, while Fig. 1b and the other training curves are not useful for understanding the approach. Some pseudo-code to illustrate the use of the proposed method might certainly help to improve clarity. Sec. 3.2 is not enough to understand the approach well.\", \"originality\": \"The approach allows a nice trade-off between pure random search and guided search through a surrogate model over a subspace of limited dimensionality. 
This is in line with some work on the use of ES for training neural networks, but I am not aware of other similar work, although I am not super knowledgeable about the field.\", \"significance\": \"The approach can have an impact for optimizing deep networks with no gradient, but more exhaustive experimental testing would be required.\", \"pros_and_cons\": [\"Sound approach\", \"Good theoretical support of the approach (bias-variance analysis)\", \"Great results reported\", \"Of importance for optimizing without gradients\", \"Presentation of the method lacking many details and not very clear\", \"Overall quality of the paper is subpar; it tends to be very textual and hard to follow in several parts\", \"Experiments are not exhaustive and detailed. Loss plots are provided for some methods compared. Looks more like a preliminary validation.\", \"I think that if the paper can be rewritten to be tighter, clearer in its presentation, with figures and pseudo-code to illustrate the method better, and with more exhaustive testing, it can be really great. Currently, the method appears to be great, but the writing quality of the paper is not yet there.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, but not really new\", \"review\": \"The idea of this paper is to accelerate the OpenAI-type evolution strategy by introducing\\n1. a non-isotropic distribution, where the covariance matrix is of the form I + UU^t; and\\n2. external information such as a surrogate gradient to determine U.\\nThe experiments show promising results. I think it is the right direction to go; however, at the same time, these ideas are not really new. \\n\\nThe first point is well studied in the context of evolution strategies in e.g., (Sun et al., arxiv 2011), (Loshchilov, Evolutionary Computation 2015), (Akimoto et al, GECCO 2016). They all have a covariance matrix of the form I + UU^t or a bit richer. There are mainly two advantages over full CMA-ES or NES: 1) computationally cheap, and 2) faster adaptation of the covariance matrix. The current paper does not adapt the covariance matrix and uses external information to guide the distribution. Therefore, it is different from the above work, but I suggest comparing Guided ES with these methods to isolate the effect of the external information. \\n\\nThe second point, using external information to change the distribution shape, is also investigated in reference (Hansen, INRIA TechRep 2011), where external information such as a good point or a good direction (gradient) is injected in order to adapt the covariance matrix.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJxug2R9Km | Meta-Learning for Contextual Bandit Exploration | [
"Amr Sharaf",
"Hal Daumé III"
] | We describe MÊLÉE, a meta-learning algorithm for learning a good exploration policy in the interactive contextual bandit setting. Here, an algorithm must take actions based on contexts, and learn based only on a reward signal from the action taken, thereby generating an exploration/exploitation trade-off. MÊLÉE addresses this trade-off by learning a good exploration strategy based on offline synthetic tasks, on which it can simulate the contextual bandit setting. Based on these simulations, MÊLÉE uses an imitation learning strategy to learn a good exploration policy that can then be applied to true contextual bandit tasks at test time. We compare MÊLÉE to seven strong baseline contextual bandit algorithms on a set of three hundred real-world datasets, on which it outperforms alternatives in most settings, especially when differences in rewards are large. Finally, we demonstrate the importance of having a rich feature representation for learning how to explore.
| [
"meta-learning",
"bandits",
"exploration",
"imitation learning"
] | https://openreview.net/pdf?id=rJxug2R9Km | https://openreview.net/forum?id=rJxug2R9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xtd6MlxV",
"Sklm6yPP67",
"SJeF96IDpX",
"HkgsDh8wp7",
"rkeSHBI937",
"S1xpvoeO3Q",
"HJluMESy2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544723824633,
1542053819065,
1542053264780,
1542052963067,
1541199164993,
1541045092870,
1540473871747
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1093/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1093/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1093/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1093/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1093/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1093/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1093/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper provides an interesting strategy for learning to explore, by first training on fully supervised data before deploying that policy to an online setting. There are some concerns, however, on the realism and utility of this setting that should be further discussed. If the offline data is not related to the contextual bandit problem, it would be surprising for this to have much benefit, and this should be better motivated and discussed. Because there are no theoretical guarantees for exploration, a discussion is needed, and, as suggested by a reviewer, the learned exploration policies could be qualitatively examined. For example, the paper says \\\"While these approaches are effective if the distribution of tasks is very similar and the state space is shared among different tasks, they fail to generalize when the tasks are different. Our approach targets an easier problem than exploration in full reinforcement learning environments, and can generalize well across a wide range of different tasks with completely unrelated features spaces.\\\" This is a pretty surprising statement, that your idea would not work well in an RL setting, but does work well in a contextual bandit setting.\\n\\nThere should also be a bit more discussion comparing to previous approaches to learning how to explore, including in active learning. It is true that active learning is a different setting, but in both a goal is to become optimal as quickly as possible. Similarly, the ideas used for RL could be used here as well, essentially by setting gamma to 0. \\n\\nOverall, the ideas here are interesting and well presented, but need a bit more development on previous work, and motivation for why this approach will be effective.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Some concerns about problem setting\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the provided feedback. Please find our response below:\\n\\n1) Relevance to Real Problems:\\n===========================\\n \\n\\nWe believe that there is a fundamental misunderstanding in this point of the review regarding the experimental setup we study in our paper. \\n\\nWe want to stress that we don\\u2019t assume access to full information datasets that are representative of the contextual bandit task to be performed. As mentioned in the abstract & Section 3.1 of the paper, MELEE uses offline synthetic datasets during the training phase. Contrary to the assertion in the review, these synthetic full information datasets are quite different from the task-dependent contextual bandit dataset. \\n\\nThese synthetic datasets are very diverse and broad in their complexity, and exploration strategies learned on these datasets do indeed generalize to real contextual bandit datasets, as we have verified both empirically and theoretically. The context vectors for these synthetic datasets are quite different in structure from the real contextual bandit task at hand, for which we don\\u2019t assume access to any sort of full information data. We learn a dataset-independent exploration policy from these synthetic datasets, and use meta-features that can generalize across different datasets to learn how to explore in realistic contextual bandit settings. \\n\\nWe describe how we generate these synthetic datasets in Appendix B. We generate 2D datasets by first sampling a random variable representing the Bayes classification error. The Bayes error is sampled uniformly from the interval 0.0 to 0.5. This Bayes error controls for the amount of noise in the dataset. \\n\\n\\n2) Experimental Validation: \\n=======================\\n\\nIt\\u2019s not true that we only compare to the LinUCB exploration algorithm. 
We compare to seven other contextual bandit exploration algorithms (Section 3.3): Epsilon greedy, Exponentiated Gradient Epsilon Greedy, Tau-first exploration, LinUCB, Cover, and Cover-Nu. Many of these algorithms do indeed use data in devising an exploration strategy. For example, LinUCB, Cover, and Cover-NU all leverage information from the observed data to balance exploration and exploitation. \\n\\n3) Theorem 1 and sublinear Regret: \\n\\n==============================\\n\\nThis is a really good observation. The regret bounded by Theorem 1 is dependent on the term epsilon-hat-class (i.e. the average regression regret for each policy \\u03c0-n). Sublinear regret is still achievable whenever this term decreases over the time horizon T. For any reasonable underlying learning algorithm, we expect this term to decrease at a rate of T^-a (e.g. a = \\u00bd); putting this together, sublinear regret will still be achievable.\\n\\n\\n\\n4) Theorem 2 and expected number of mistakes:\\n========================================= \\n\\n\\nThe theoretical gain is still guaranteed because it\\u2019s never the case that the upper bound of the expected number of mistakes obtained when Banditron is used in MELEE is larger than the one of Banditron alone. This follows directly from the edge assumption we make, as E\\u03b3t \\u2265 0, and \\u0393 \\u2264 1.\\n\\n\\n5) Minor concerns: \\u00a0\\n=================\\n\\nWe thank the reviewer for highlighting these concerns. The authors appreciate the reviewer\\u2019s suggestions for improving the overall exposition of the paper. In order to make it easier for reviewers to track the changes, we kept the structure largely consistent with the original submission, but we\\u2019ll take all of these comments into account in the final version. 
\\n\\n6) Hand-crafted Exploration:\\n=========================\\n\\nWith \\u201chand-crafted\\u201d exploration algorithms we meant \\u201cnot learned\\u201d; we agree that this terminology is not accurate and we will remove it in the final version. \\n\\n7) Returned Policy:\\n=================\\n\\nTheoretically, Algorithm 1 averages over the set of N learned policies. In practice, it\\u2019s typical that the final policy leads to better performance empirically. In our experiments, Algorithm 1 returns the final N-th policy. We\\u2019ll describe the returned policy explicitly in the paper and fix the notation for POLOPT.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the detailed review and insightful comments. We clarify several points below.\\n \\n1) Practical Impact: \\n=================\\n\\nWe request a clarification from AnonReviewer3 for why they think the practical impact may be minimal. The setting we study in the paper is the standard contextual bandit setting encountered in reality. The algorithm learns an exploration policy for balancing exploration with exploitation in contextual bandits, a fundamental issue addressed by any contextual bandit algorithm. We stress that our algorithm doesn\\u2019t assume access to fully supervised datasets at runtime; we rely on synthetic fully supervised datasets only for offline training. These datasets are generated synthetically and don\\u2019t require labelling effort from annotators at runtime. \\n\\n2) Comparison with Thompson Sampling:\\n====================================\\n\\n We compared MELEE to seven other exploration algorithms. Many of these algorithms do indeed use data in devising an exploration strategy. For example, LinUCB, Cover, and Cover-NU all leverage information from the observed data to balance exploration and exploitation. Exponentiated Gradient epsilon-greedy also uses the observed data to select the best epsilon for exploration. For completeness, we will add Thompson Sampling to our comparison. \\n\\n3) Knowledge of imitation learning: \\n===============================\\n\\nWe thank the reviewer for highlighting this issue. We will include a more detailed introduction to imitation learning in the final version of this work.\\n\\n4) Theoretical guarantees no-regret vs low-regret: \\n===========================================\\n\\nWhat we mean by low regret in this statement is the low average regret epsilon-hat-class (i.e. the average regression regret for each policy \\u03c0-n), not the no-regret LEARN procedure in Alg 1 - line 16. 
We\\u2019ll rephrase this to make this distinction clear in the final draft of this paper.\\n\\n5) Noise in augmentation data: \\n===========================\\n\\nWe include noise in the augmentation datasets used for training MELEE. These datasets are generated synthetically, and the details of the data generation process are highlighted in Appendix B. We generate 2D datasets by first sampling a random variable representing the Bayes classification error. The Bayes error is sampled uniformly from the interval 0.0 to 0.5. This Bayes error controls for the amount of noise in the dataset.\\n\\n\\n6) Minor comments: \\n==================\\n\\nThe authors thank the reviewer for highlighting these issues. We\\u2019ll take all of these comments into account in the final version.\\n\\n7) Why we require reward to be [0,1] in Alg 1: \\n=======================================\\n\\nWe\\u2019ll add a clarification for why we require bounded rewards. Theoretically, this is required to ensure the no-regret bound in Theorem 1. Empirically, for multi-class contextual bandit classification problems, we use a reward of one for the correct action, and a reward of zero for all other incorrect actions.\\n\\n8) Why is epsilon=0 the best? \\n==========================\\n\\nEmpirically, MELEE doesn\\u2019t require extra exploration on top of the learned exploration strategy, and at runtime the best performance was achieved when we set the additional exploration parameter \\\\mu to 0. At training time, the synthetic datasets we used are not noise-free. As described in point (5) of this response, we control the amount of noise in the training dataset via the Bayes error parameter. Bietti et al. observed a similar behavior for the same datasets we used in our experiments. For epsilon-greedy exploration, the best performance was achieved when setting epsilon to zero. They attribute this to the diversity of the context vectors in these datasets. 
\\n\\n9) Major Modifications: \\n====================\\n\\nWe assume the \\u201cmajor modifications\\u201d are the issues highlighted in the \\u201ccons\\u201d section of the review. We kindly request a clarification about any other major modifications the reviewer thinks should be necessary.\", \"references\": \"Alberto Bietti, Alekh Agarwal, and John Langford. A Contextual Bandit Bake-off. working paper or preprint, May 2018. URL https://hal.inria.fr/hal-01708310.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for recognizing the contribution of our method. We answer each of the improvement points below.\\n\\n1) Analysis for Learnt Policy:\\n======================\\n We agree that it\\u2019d be interesting to analyze and gain more insight from the learnt policy to design better exploration algorithms for contextual bandits; however, it\\u2019s not clear how to perform this analysis. One possibility is to track the exploitation / exploration decisions made by the learnt policy over time. We can also compute feature importance estimates or perform an ablation study for the features used by MELEE. Similarity between MELEE and other exploration algorithms in terms of the selected action could also be analyzed. However, these results could be highly dependent on the underlying dataset properties.\\n\\n2) Offline training time and online runtime:\\n=====================================\\n In our experiments, the online runtime for MELEE was similar to epsilon-greedy & exponentiated gradient epsilon-greedy. MELEE was faster than both Cover (which requires a bag of policies) and LinUCB (which requires an inversion of the estimated covariance matrix). Offline training for MELEE requires more time for generating the synthetic data and running the imitation learning algorithm. We trained the model used in our experiments for approximately one day. We will provide exact statistics about the training time and the online runtime performance of MELEE in the final version of this work.\\n\\n3) Introduction to Imitation Learning: \\n================================\\nWe thank the reviewer for highlighting this issue. We will include a more detailed introduction to imitation learning in the final version of this work.\"}",
"{\"title\": \"Decent idea with very good validation\", \"review\": \"The paper proposes to train exploration policies for contextual bandit problems through imitation learning on synthetic data-sets. An exploration policy takes the decision of choosing an action on each time-step (balancing explore/exploit) based on the history, the confidences of taking different actions suggested by a policy optimizer (the best expert policy given the history). The idea in this paper is to generate many multi-class supervised learning data-sets and run an imitation learning algorithm for training a good exploration policy. I think this is a novel idea and I have not seen this before. Moreover, some intuitive features for training the exploration policy, like the historical counts of the arms, the time-step, and arm reward variances are used on top of the confidence scores from the policy optimizer. It is shown empirically that these extra features add value.\\n\\nOverall I think this is a well-written paper with very thorough experimentation. The results are also promising. It would be interesting to gain some insights from the learnt policy, in order to improve hand-designed policies. For example, in a few data-sets it would be interesting to see whether the learnt policy is similar to epsilon greedy in the early stages and switches to greedy after a point, or which of the hand-designed strategies like bagging/cover the learnt policy is most similar to in terms of choice of actions; however, I am not sure how such an analysis can be done. It would also be fair to discuss the offline training time and online run-time of the algorithm with respect to others. Also, I think the paper should provide a brief introduction to imitation learning, as it is commonly not known in the bandit community.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Overall, given the novelty of the idea and the good results, I am inclined to accept, with major modifications.\", \"review\": \"This paper proposes a new method (Melee) to explore on contextual bandits. It uses a supervised full information data set to evaluate (using counterfactual estimation) and select (using imitation learning) a proper exploration strategy. This exploration strategy is then used to augment an e-greedy contextual bandit algorithm.\\n\\nThe novelty is that the exploration strategy is learned from the data, as opposed to being engineered to minimize regret. The edge of Melee stems from the expected improvement for choosing an action against the standard bandit optimization recommendation.\", \"pros\": [\"using data to learn an exploration strategy in this manner is a novel idea for bandits\", \"good experimental results\", \"well written paper\"], \"cons\": [\"Practical impact may be minimal. This setting is seldom encountered in reality.\", \"No comparison with Thompson sampling bandits, which also use data in devising an exploration strategy. I suggest authors compare to better suited bandits and exploration strategies, beyond basic e-greedy and UCB.\", \"Article assumes knowledge of imitation learning, which is not a given in bandit literature. I suggest a simple explanation or sketch of the imitation algorithm.\", \"Theoretical guarantees questionable. Theorem 1 talks about \\\"no-regret algorithm\\\". You then extend this notion and claim \\\"if we can achieve low regret .... then ....\\\". It is unclear to me how this theorem allows you to make such a claim. A low regret is > no-regret, and hence a bound on no-regret may not generalize to low regret.\", \"May want to add noise to augmentation data, to judge robustness of method.\", \"Overall, given the novelty of the idea and the good results, I am inclined to accept, with major modifications. Improvements of the method and analysis are likely to follow. 
Given the flaws though, I am not fighting for this paper.\"], \"minor_comments\": \"sec 2.1: you may want to explain why you require reward to be [0,1]\\nAlg 1: explain Val and rho in the algorithm.\\nsec 2.3: what is \\\"ergo\\\". Also, you may want to refer to f as \\\"function\\\" and to pi as \\\"policy\\\". Referring to f as policy may be confusing (even though it is a policy). For example: \\\"(line 8) on which it trains a new policy\\\"\\nEnd of 2.4: \\\"as discussed in 2.4\\\" should be \\\"in 2.3\\\"\\nsec 3.3: why is epsilon=0 the best? is it because the synthetic data has no noise? This result surprises me.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper investigates a problem that does not correspond to any real problem.\", \"review\": \"This paper investigates a meta-learning approach for the contextual bandit problem. The goal is to learn a generic exploration policy from datasets, and then to apply the exploration policy to contextual bandit tasks. The authors have adapted an algorithm proposed for imitation learning (Ross & Bagnell 2014) to their setting. Some theoretical guarantees straightforwardly extracted from (Ross & Bagnell 2014) and from (Kakade et al 2008) are presented. Experiments are done on 300 supervised datasets.\", \"major_concerns\": \"1. This paper investigates a problem that does not correspond to the real problem: how to take advantage of plenty of logs generated by a known stochastic policy (or worse, an unknown deterministic policy) for the same (or a close) contextual bandit task?\\nMost companies have this problem. I do not know a single use case in which we have some full information datasets which are representative of contextual bandit tasks to be performed. If the full information datasets do not correspond to the contextual bandit tasks, it is not possible to learn something useful for the contextual bandit task. \\n\\n2. The experimental validation is not convincing.\\n\\nThe experiments are done on datasets which are mostly binary classification datasets. In this case, the exploration task is easy. Maybe this is the reason why the exploration parameter \\\\mu or \\\\epsilon = 0 provides the best results for MELEE or \\\\epsilon-greedy?\\n\\nThe baselines are not strong. The only tested contextual bandit algorithm is LinUCB. However, a diagonal approximation of the covariance matrix is used when the dimension exceeds 150. In this case LinUCB is not efficient. There are a lot of contextual bandit algorithms that scale with the dimension.\\n\\n\\n3. The theoretical guarantees are not convincing. \\n\\nThe result of Theorem 1 is a weak result. 
A linear regret against the expected reward of the best policy is usually considered a loose result. Theorem 2 shows that there is no theoretical gain from the use of the proposed algorithm: the upper bound of the expected number of mistakes obtained when Banditron is used in MELEE is higher than that of Banditron alone.\", \"minor_concerns\": \"The algorithms are not well written. The POLOPT function sometimes has one parameter, sometimes two, and sometimes three. Algorithm 1 is described in Section 2, while one of the inputs of Algorithm 1 (the feature extractor function) is described in Section 3.1. Algorithm 1 seems to return all the N exploration policies. The choice of the returned policy has to be described.\\n\\nIn contextual bandits, the exploration policy is not handcrafted. The contextual bandit algorithms are designed to be optimal or near optimal in the worst case: they are generic algorithms.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1gOe209t7 | Reconciling Feature-Reuse and Overfitting in DenseNet with Specialized Dropout | [
"Kun Wan",
"Boyuan Feng",
"Lingwei Xie",
"Yufei Ding"
] | Recently convolutional neural networks (CNNs) achieve great accuracy in visual recognition tasks. DenseNet becomes one of the most popular CNN models due to its effectiveness in feature-reuse. However, like other CNN models, DenseNets also face overfitting problem if not severer. Existing dropout method can be applied but not as effective due to the introduced nonlinear connections. In particular, the property of feature-reuse in DenseNet will be impeded, and the dropout effect will be weakened by the spatial correlation inside feature maps. To address these problems, we craft the design of a specialized dropout method from three aspects, dropout location, dropout granularity, and dropout probability. The insights attained here could potentially be applied as a general approach for boosting the accuracy of other CNN models with similar nonlinear connections. Experimental results show that DenseNets with our specialized dropout method yield better accuracy compared to vanilla DenseNet and state-of-the-art CNN models, and such accuracy boost increases with the model depth. | [
"Specialized dropout",
"computer vision"
] | https://openreview.net/pdf?id=r1gOe209t7 | https://openreview.net/forum?id=r1gOe209t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skgz4lOj14",
"S1gRqRvC37",
"HJxe-1m22Q",
"HyemrA0d2X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544417321643,
1541467798195,
1541316344442,
1541103162903
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1092/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1092/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1092/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1092/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers recommend reject and there is no rebuttal. There is no basis on which to accept the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"This paper concerns the application of different binary dropout structures and schedules with the specific aim to regularise the DenseNet architecture.\", \"review\": \"Overall Thoughts:\\n\\nI think the use of regularisation to improve performance in DenseNet architectures is a topic of interest to the community. My concern with the paper in its current form is that the different dropout structures/schedules are priors, and it is not clear from the current analysis exactly what prior is being specified and how to match that to a particular dataset. Further, I believe that the current presentation of the empirical results does not support the nature of the claims being made by the authors. I would be very interested to hear the authors\\u2019 comments on the following questions.\\n\\nSpecific Comments/Questions:\\n\\nSec 1: Sorry if I have missed something, but for the two reasons against standard dropout on DenseNet, the reference supports the second claim, but could a reference be provided to substantiate the first?\\n\\nSec 1/2: The discussion around feature re-use needs to be clarified slightly in my opinion. Dropout can provide regularisation in a number of regimes - the term \\u201cfeature reuse\\u201d is a little tricky because I can see the argument from both sides - under the authors\\u2019 arguments, forcing different features to be used can be a source of robustness, so would not the level of granularity be something to be put in as a prior and not necessarily inherently correct or incorrect?\\n\\nSec 3: Similarly, with the dropout probability schedules, there are practical methods for learning such probabilities during training (e.g. Concrete Dropout) - would it not be possible to learn these parameters with these approaches? Why do we need to set them according to fixed schedules? 
I think it would be necessary to demonstrate that a fixed schedule outperforms learned parameters.\\n\\nSec 4: Please could the authors provide justification for the claim that the improvements would increase with the depth of the network?\\n\\nRefs: Please could the authors be sure to cite the published versions of articles (not ArXiv versions) when papers have been peer reviewed - e.g. the citation for DenseNet (among others).\\n\\nOther points: Could the authors use text mode for sub or superscripts in maths equations when using words as opposed to symbols?\\n\\nThere are a number of uses of \\u201ccould\\u201d when I don\\u2019t think the authors mean \\u201ccould\\u201d - please could this be checked?\\n\\nTypos: p4 replying -> relying, whcih -> which\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting heuristics but no justification either theoretically or empirically\", \"review\": \"This paper proposes a special dropout procedure for densenet. The main argument is standard dropout strategy may impede the feature-reuse in Densenet, so the authors propose a pre-dropout technique, which implements the dropout before the nonlinear activation function so that it can be feeded to later layers. Also other tricks are discussed, for example, channel-wise dropout, and probability schedule that assigns different probabilities for different layers in a heuristic way.\\n\\nTo me this is a mediocre paper. No theoretical justification is given on why their pre-dropout structure could benefit compared to the standard dropout. Why impeding the feature-reuse in the standard dropout strategy is bad? Actually I am not quite sure if reusing the features is the true reason densenet works well in applications.\\n\\nHeuristic is good if enough empirical evidence is shown, but I do not think the experiment part is solid either. The authors only report results on CIFAR-10 and CIFAR-100. Those are relatively small data sets. I would expect more results on larger sets such as image net.\\n\\nCifar-10 is small, and most of the networks work fairly well on it. Showing a slight improvement on CIFAR-10 (less than 1 point) does not impress me at all, especially given the way more complicated way of the dropout procedure. \\n\\nThe result of the pre-dropout on CIFAR-100 is actually worse than the original densenet paper using standard dropout. Densenet-BC (k=24) has an error rate of 19.64, while the pre-dropout is 19.75.\\n\\nAlso, the result is NOT the-state-of-the-art. Wide-ResNet with standard dropout has better result on both CIFAR-10 and CIFAR-100, but the authors did not mention it.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Evaluation of dropout regimes for DenseNets\", \"review\": \"The paper studies the effect of different dropout regimes (unit-wise, channel-wise and layer-wise), locations and probability affect the performance of DenseNet classification model. The experiments are performed on two datasets: CIFAR10 and CIFAR100.\\n\\nIn order to improve the paper, the authors could take into consideration the following points:\\n\\n1. The experimental validation is rather limited. Additional experiments on large scale datasets should be performed (e. g. on ImageNet).\\n2. The design choices are rather arbitrary. The authors study three different probability schedules. Wouldn't it be better to learn them using recent advances in neural architecture search or in RL.\\n3. \\\"The test error is reported after every epoch and ...\\\". This suggest that the authors are monitoring the test set throughout the training. Thus, the hyper parameters selected (e. g. the dropout regimes) might reflect overfitting to the test set.\\n4. Table 1 misses some important results on CIFAR10 and CIFAR100, as is, the Table suggest that the method described in the paper is the best performing method on these datasets (and it is not the case). Moreover, the inclusion criteria for papers to appear in Table 1 is not clear. Could the authors correct the Table and add recent results on CIFAR10 and CIFAR100?\\n5. Section 4.1: \\\"... a perfect size for a model of normal size to overfit.\\\" This statement is not clear to me. What is a normal size model? Moreover, claiming that CIFAR10 and CIFAR100 is of perfect size to overfit seems to be a bit misleading too. Please rephrase.\\n6. Section 3.3: what do the authors mean by deterministic probability model?\\n7. Abstract: \\\"DenseNets also face overfitting problem if not severer\\\". I'm not aware of any evidence for this. Could the authors add citations accordingly?\\n8. 
Some discussions on recent approaches to model regularizations and connections to proposed approach are missing. The authors might consider including the following papers: https://arxiv.org/pdf/1708.04552.pdf, https://arxiv.org/pdf/1802.02375.pdf, among others.\\n\\nOverall, the paper is easy to understand. However, the originality of the paper is rather limited and it is not clear what is the added value to for the community from such paper. I'd encourage the authors to include additional experiments, correct misleading statements and add a discussion of model regularization techniques in the related work section.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1zOg309tX | Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets | [
"Zhiming Zhou",
"Yuxuan Song",
"Lantao Yu",
"Hongwei Wang",
"Weinan Zhang",
"Zhihua Zhang",
"Yong Yu"
] | In this paper, we investigate the underlying factor that leads to the failure and success in training of GANs. Specifically, we study the property of the optimal discriminative function $f^*(x)$ and show that $f^*(x)$ in most GANs can only reflect the local densities at $x$, which means the value of $f^*(x)$ for points in the fake distribution ($P_g$) does not contain any information useful about the location of other points in the real distribution ($P_r$). Given that the supports of the real and fake distributions are usually disjoint, we argue that such a $f^*(x)$ and its gradient tell nothing about "how to pull $P_g$ to $P_r$", which turns out to be the fundamental cause of failure in training of GANs. We further demonstrate that a well-defined distance metric (including the dual form of Wasserstein distance with a compacted constraint) does not necessarily ensure the convergence of GANs. Finally, we propose Lipschitz-continuity condition as a general solution and show that in a large family of GAN objectives, Lipschitz condition is capable of connecting $P_g$ and $P_r$ through $f^*(x)$ such that the gradient $\nabla_{\!x}f^*(x)$ at each sample $x \sim P_g$ points towards some real sample $y \sim P_r$. | [
"GANs",
"Lipschitz-continuity",
"convergence"
] | https://openreview.net/pdf?id=r1zOg309tX | https://openreview.net/forum?id=r1zOg309tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJgbl2TKlN",
"HyxExBm-l4",
"B1eWMFbd1N",
"B1ljRdWd1N",
"HJg_BrpGkE",
"HJlOsZE20X",
"HJxtQXln0X",
"rJgau013C7",
"SyxXvRyhRX",
"Bkes1R1nC7",
"BkgKq6kn0Q",
"SkxDk51iCX",
"S1g40CRcA7",
"HJll5C050Q",
"HJlzFY9q0m",
"rkxfycBIR7",
"HJlrF25opQ",
"BJggS2qspm",
"rJeLkh9s6m",
"HJelpj5j6X",
"SygATc9s6X",
"r1eWYF9s6m",
"ByldHOC33m",
"BklPmo0527",
"BJxOhIWq3X",
"BJxy5mxdhQ"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545358313177,
1544791275519,
1544194313186,
1544194259286,
1543849279797,
1543418271787,
1543402273129,
1543401077145,
1543401051213,
1543400930746,
1543400849407,
1543334367334,
1543331531601,
1543331464459,
1543313785813,
1543031257713,
1542331517153,
1542331448089,
1542331358077,
1542331320008,
1542331077934,
1542330744764,
1541363776514,
1541233438704,
1541179055886,
1541043078884
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1091/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1091/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1091/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Thanks.\", \"comment\": \"We sincerely thank all ICLR reviewers and the Area Chair for their effort in reviewing this paper and all constructive comments. But we should highlight that this paper is not a \\\"theoretical analysis of a compact dual form of the Wasserstein distance\\\".\\n\\nThe core of this paper is, however, that we show that \\\"a well-defined distance metric does not necessarily guarantee the convergence of GANs\\\" and this is the fundamental cause of failure in training of GANs. And we show that the Lipschitz-continuity condition is a general solution to this problem, and characterize the necessary condition under which the Lipschitz condition ensures convergence. (See details in the paper.)\"}",
"{\"metareview\": \"The paper investigates problems that can arise for a certain version of the dual form of the Wasserstein distance, which is proved in Appendix I. While the theoretical analysis seems correct, the significance of the contribution is limited by the fact that the specific dual form analysed is not commonly used in other works. Furthermore, the assumption that the optimal function is differentiable is often not fulfilled either. The paper would therefore be significantly strengthened by making clearer to which methods used in practice the insights carry over.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting theoretical analysis of a compact dual form of the Wasserstein distance which, however, is not widely used in the literature.\"}",
"{\"title\": \"We hope we can get an official judgment on our discussion with AnonReviewer2.\", \"comment\": \"We just noticed that AnonReviewer1 provided feedback on our discussion with AnonReviewer2. It seems the reviewers did not reach a consensus on our discussion with AnonReviewer2.\\n\\n1) However, from the authors\\u2019 perspective, AnonReviewer2 made enormous mistakes and the authors have convinced AnonReviewer2 that there is no technical correctness issue in this paper. Because at the final stage of the discussion, AnonReviewer2 only tried to claim \\\"showing the failure of a new dual form of Wasserstein distance is meaningless\\\", i.e., something about contribution. \\n\\n2) Furthermore, from the authors\\u2019 perspective, most of the arguments provided by AnonReviewer2, with which to claim \\\"showing the failure of a new dual form of Wasserstein distance is meaningless\\\", are also false. The authors have provided corresponding arguments showing that AnonReviewer2 has many misunderstandings of related materials and also clarified that the contribution of this paper is much more than just \\\"showing Wasserstein distance in a new dual form might also fail\\\". AnonReviewer2 refuses further discussion. \\n\\nAs one of the top conferences, ICLR should be able to judge: are the authors making mistakes, or is AnonReviewer2 making mistakes? We hope we can get an official judgment on whether there is a technical correctness issue in the paper and whether the contribution is significant. \\n\\nRespectfully, \\nThe authors\"}",
"{\"title\": \"Thank you very much for your update.\", \"comment\": \"We sincerely thank you for considering our discussion with AnonReviewer2. With all due respect, from the authors\\u2019 perspective, AnonReviewer2 has made enormous mistakes; and we are not able to get into a consensus with AnonReviewer2, because he/she refuses further discussion. The authors thus hope there is an official judgment. A formal request is posted in the topmost comment.\"}",
"{\"title\": \"More detailed arguments on why \\\"Theorem 3 in Wasserstein GAN is a false Theorem\\\" are provided.\", \"comment\": \"It seems the reviewer still does not get our arguments on why \\\"Theorem 3 in Wasserstein GAN is a false Theorem\\\". We have polished our arguments and provided more details. Please check them out in our response to Q2, in the above reply.\\n\\nThough we don't think \\\"Theorem 3 in Wasserstein GAN\\\" and \\\"envelope theorem\\\" are very relevant to our paper, we'd like to fully address any concern from the reviewer.\"}",
"{\"title\": \"We really appreciate your effort. Thank you very much. And we enjoy the discussion with you.\", \"comment\": \"We really appreciate your effort. Thank you very much and we really enjoy the discussion with you.\\n\\nWe are very rigorous in the rebuttal and we have very carefully checked related materials. In particular, we read the Wikipedia and the proof of theorem 3 line by line, as well as the paper you mentioned: \\u2018\\u2018Sobolev training of neural networks\\u2019\\u2019. \\n\\nWe wish the reviewer could consider our argument in depth and understand what we said, instead of denying it without reliable arguments. We are glad to continue the discussion at any time. \\n\\nIf the reviewer really does not want to continue the discussion, we will respond to all concerns that the reviewer posted and leave the judgment to other reviewers, chairs, and the public. \\n\\nWe wish the reviewer could spare a little bit more time to respond to our final three questions. \\n1. Do you still think there is any technical correctness issue in our paper? If so, please specify and list them. \\n2. Would you please provide a comment to each of our listed contributions (4+4)?\\n3. Would you please respond to one key remaining issue in our last response which we paste as follows?\", \"q\": \"It's widely known that the gradient of an estimator is not necessarily an estimator of the gradient unless one imposes constraints such as bounded Sobolev norms. The same problem appears in e.g. actor-critic formulations or other problems like synthetic gradients (see Sobolev training of neural networks for instance).\\n\\n>> We have checked that paper \\u2018\\u2018Sobolev training of neural networks\\u2019\\u2019. Could you specify which sentence or theorem in that paper provides any support for your claim? It is simply making use of the trivial fact that we highlighted in \\u2018\\u20182. What is trivial and what is not?\\u2019\\u2019 to improve the training. 
It definitely makes sense and has nothing to do with our arguments here. \\n\\nThank you very much. We really enjoy and appreciate the discussion with you. \\n\\n-------------------------------------------------\\n\\nSpecific response to your reply, in case you would continue the discussion:\", \"q1\": \"NO, one can't if the dual space itself X varies on theta.\\n\\n>> Why? Could you provide your argument or reference?\", \"q2\": \"Derivative of the loss wrt the critic is 0 when the critic is optimal so the corresponding term to the changes of the critic vanishes.\\n\\n>> This is the key mistake in the proof. Actually, we have already pointed this out in the last response. But it seems more details are necessary. Please check the following. \\n\\n>> In Envelope theorem, it is critical to writing the objective as a simple function of the parameter, where all other variables need to be written as functions of the parameter. In particular, the critic function in Theorem 3 of Wasserstein GAN is not written as a function of the parameter, which leads to its false conclusion. \\n\\n>> More specifically, \\\"the derivative of the loss wrt the critic is 0\\\" does not mean \\\"the derivative of the critic wrt the parameter\\\" is zero. (Section 2.3 provides an instance; please recheck it.) \\n\\n>> If it is still not clear to you, the arguments in \\\"2. What is trivial and what is not? \\\" might also help. Please have a check. \\n\\n-------------------------------------------------\\n\\nClarification (in case there exists any possible confusion): \\n\\nThe ''envelope theorems and gradient estimators'' are related materials that the reviewer points out, with which the reviewer claims one of our contributions is meaningless. And we are correcting the reviewer such that him/she can correctly understand our contribution about whether ''criticizing a well-defined distance metric does not necessarily guarantee the convergence of GANs'' is meaningful or not.\"}",
"{\"title\": \"Reply\", \"comment\": \"\\\"1. Theorem 3 in Wasserstein GAN [1] is a false Theorem. And the 4th fact you provided in the clarification does not hold. \\\"\\n\\nThis is ridiculous.\\n\\n\\\"The proof in [1] essentially uses the Envelope theorem [2] to argue that the gradient of Wasserstein distance in primal is equal to the gradient of Wasserstein distance in dual form. However, in the proof, it ignores the fact that the optimal discriminative function is actually a function of the parameter of Pg and ignores the relevant gradient.*\\\"\\n\\nIt does not! The envelope theorem is meant EXACTLY for those situations. See the same wikipedia article, or the proof of theorem 3 of wgan in detail. In the notation of the wikipedia article, X = {1-Lip functions}, and x \\\\in X* would be the optimal critic. It is obvious that this changes with the discriminator, but the derivative of the loss wrt the critic is 0 when the critic is optimal so the corresponding term to the changes of the critic vanishes. I have no interest in discussing this further, the authors are not paying enough attention or rigour to their arguments. \\n\\n\\\"From another aspect (proof by contradiction): if the argument in the proof of Theorem 3 in Wasserstein GAN is correct, one can easily extend it to any dual form of Wasserstein distance, which implies \\u2018\\u2018the gradient of W(Pr, Pg) is also equal to the gradient of our new dual form\\u2019\\u2019. It is contradictory to our analysis. Please re-check Section 2.3, where the argument is straightforward and is definitely true. \\\"\\n\\nNO, one can't if the dual space itself X varies on theta (such as with {|f(x) - f(y)| <= d(x, y) for x ~ Pr, y ~ Ptheta with prob 1}, and not with the space of 1-Lipschitz functions), such as in your case.\\n\\n=================================================\\n\\nAs a last comment, if the area chairs think my arguments are wrong, feel free to disregard what I said. 
If I am wrong, then sadly figuring it out from the paper and this discussion is taking longer than the time I can allocate to this paper. I have spent more time discussing and reviewing this paper than with the rest of the papers combined, I simply don't have the time (nor I think this is the optimal medium) to continue this discussion. I was asked for a review, I gave it, and I continued the discussion for a while. I don't have time or energy to explain envelope theorems and gradient estimators to the authors.\"}",
"{\"title\": \"Contribution clarification.\", \"comment\": \"\", \"minor_contributions\": \"1. We have shown that the locality of the optimal discriminative function and its gradient in traditional GANs is an intrinsic cause to mode collapse. \\n\\n2. We have conducted a fairly systematic study on how the hyper-parameters influence the optimal discriminative function in traditional GANs (Appendix A), which explains why traditional GANs are sensitive to hyper-parameters, hard to train and easily broken. \\n\\n3. We have provided a new technique for implementing the Lipschitz-continuity condition in Appendix E, which, according to our experiments, can achieve the theoretically optimal discriminative function in many cases where the gradient penalty or spectral normalization can not. \\n\\n4. We have provided a new dual form of Wasserstein distance (proved in Appendix I).\"}",
"{\"title\": \"Contribution clarification.\", \"comment\": \"We sincerely thank the reviewers for their constructive comments and valuable suggestions. Since the contributions of this paper are one of the topics that we have discussed with the reviewers. We summarize the contributions of this submission as follows for ease of reference:\", \"major_contributions\": \"1. In this paper, we have shown that in many GANs, the gradient from the optimal discriminative function is not reliable, which turns out to be the fundamental cause of failure in training of GANs. Instances that are explicitly discussed in the paper include Original GAN, Least-Square GAN, Fisher GAN, and GAN with a new dual from of Wasserstein distance.* \\n\\n2. We have highlighted in this paper that a well-defined distance metric does not necessarily guarantee the convergence of GANs. Because in typical GAN settings, the optimal discriminative function is only a helper of the estimator that estimates some distance metric, whose gradient does not necessarily reflect the gradient of the distance metric (including JS divergence, Fisher IPM, Wasserstein distance in a new dual form). Hence, we call on researchers to pay more attention to the design of the optimal discriminative function (and its gradient) when introducing new GAN objectives. \\n\\n3. We have proved in this paper that the Lipschitz-continuity condition as a general solution to make the gradient of the optimal discriminative function reliable, and characterized the necessary condition where Lipschitz condition ensures the convergence, which leads to a broad family of valid GAN objectives under Lipschitz condition, where Wasserstein distance is one special case. \\n\\n4. We have tested several new objectives with experiments, which are also sound according to our theorems. 
And we found that, compared with Wasserstein distance, the outputs of the discriminator with some new objectives are more stable and the final qualities of generated samples are also consistently higher than those produced by Wasserstein distance. \\n\\n* It is not limited to these GANs. A straightforward extension as mentioned in the paper is that all GANs where the optimal discriminative function f^*(x) at x is only related to P_g(x) and P_r(x), e.g., f-GAN.\"}",
"{\"title\": \"Thank you. Nice progress. (1/2)\", \"comment\": \"We sincerely thank you for your prompt reply.\\n\\nIn your clarification, you mainly try to claim that \\u2018\\u2018showing the failure of a new dual form of Wasserstein distance is meaningless\\u2019\\u2019, \\u2018\\u2018not much of a meaningful contribution\\u2019\\u2019, \\u2018\\u2018 it's widely known\\u2019\\u2019, etc. So, are we now talking about the contribution? Nice progress. \\n\\nAbove all, let\\u2019s make sure one thing: do you still think there is any technical correctness issue in our paper? If so, please specify it. Otherwise, we would take it as we have convinced you that there is no technical correctness issue in our paper. \\n\\nFor contribution clarification, we have listed the contributions of this paper in the uppermost comment. Please check it out and we hope that you might notice that the specific contribution that you does not recognize is only a single item in the long list. Nonetheless, we will respond to your clarification and address this specific concern in the next. \\n\\nWe have to point out that there is a serious mistake in your statements. Let\\u2019s detail it as follows: \\n\\n1. Theorem 3 in Wasserstein GAN [1] is a false Theorem. And the 4th fact you provided in the clarification does not hold. \\n\\n>> The proof in [1] essentially uses the Envelope theorem [2] to argue that the gradient of Wasserstein distance in primal is equal to the gradient of Wasserstein distance in dual form. However, in the proof, it ignores the fact that the optimal discriminative function is actually a function of the parameter of Pg and ignores the relevant gradient.* In Envelope theorem, it is critical to writing the objective as a simple function of the parameter, where all other variables need to be written as functions of the parameter and the gradients of all terms are unignorable. Please check the first Theorem in [2]. 
\\n\\n>> From another aspect (proof by contradiction): if the argument in the proof of Theorem 3 in Wasserstein GAN is correct, one can easily extend it to any dual form of Wasserstein distance, which implies \\u2018\\u2018the gradient of W(Pr, Pg) is also equal to the gradient of our new dual form\\u2019\\u2019. It is contradictory to our analysis. Please re-check Section 2.3, where the argument is straightforward and is definitely true. \\n\\n>> As it stands, the correct argument about the validity of the gradient of Wasserstein distance in dual form with Lipschitz-continuity condition is Proposition 1 in [3] or our theorems. \\n\\n[1] https://arxiv.org/pdf/1701.07875.pdf\\n[2] https://en.wikipedia.org/wiki/Envelope_theorem\\n[3] Improved Training of Wasserstein GANs \\n\\n* This turns out to be the same type of mistake that Martin Arjovsky et al. have made in Theorem 2.5 of their paper \\u2018\\u2018Towards Principled Methods for Training Generative Adversarial Networks\\u2019\\u2019 (ICLR 2017). This is just for your information, not important here. If you are interested, we can have more discussions on this after the rebuttal.\"}",
"{\"title\": \"Thank you. Nice progress. (2/2)\", \"comment\": \"2. What is trivial and what is not? \\n\\n>> The trivial thing is that: if two functions are identical, then their gradient (with respect to the input) are equal. That is, the gradient of a perfect estimator of an objective is equal to the gradient of the objective. \\n\\n>> The non-trivial (not noticed by many people) thing is that: the gradient of a helper of the estimator of an objective (e.g., the optimal discriminative function f* of Wasserstein distance in dual form) is not equal to the gradient of the objective. This is what we argued in the paper, i.e., the gradient of the optimal discriminative function mostly has nothing to do with the gradient of the objective. Note that the optimal discriminative function is **only a helper** of estimator and is not equivalent to the estimator itself. \\n\\n3. Specific response to your comments:\", \"q\": \"The formulation in (III) was introduced by you, and thus showing the failure of the gradient of it as a gradient estimator is pretty meaningless unless it highlights problems of the formulations that other people use in practice.\\n\\n>> Clarification: we are not criticizing the typical formulation that the community uses. We are criticizing that a well-defined distance metric does not necessarily guarantee the convergence of GANs, and particularly, Wasserstein distance in a new dual form also does not guarantee the convergence. \\n\\n>> If you wish, you could say \\u2018\\u2018criticizing a well-defined distance metric does not necessarily guarantee the convergence of GANs is pretty meaningless\\u2019\\u2019.\"}",
"{\"title\": \"Clarification\", \"comment\": \"Perfect, let me be as formal as I can be. There are several quantities in here involved. Let F = {f : X -> R, 1-Lip in X}. Let H_theta = {f: X -> R, |f(x) - f(y) | <= d(x, y) with x ~Pr, y ~Ptheta with probability 1}. Let L(f, theta) = E_{x ~ Pr} [f(x)] - E_{y ~ Pz} E[f(g_theta(z))].\\n\\nI) W(Pr, Ptheta)\\nII) Max_{f in F} L(f, theta)\\nIII) Max_{f in H_theta} L(f, theta)\", \"facts\": [\"(I) = (II), this is the Kantorovich duality.\", \"(II) = (III), this you proved.\", \"Following grad W(Pr, Ptheta) guarantees convergence as much as one can guarantee convergence of local minimization for locally Lipschitz functions. This is, with an arbitrarily small mollifier one can give convergence to a first order saddle point. This is because W(Pr, Ptheta) is itself locally Lipschitz.\", \"If f* is the minimizer in (II), then E_z[grad_theta f*(g_theta(z))] = grad W(Pr, Ptheta) by an envelope theorem as shown in WGAN, thus following this guarantees convergence in the same way as one can do it for W(Pr, Ptheta) itself.\", \"If h* is the minimizer in (III), then following E_z[grad_theta h*(g_theta(z))] does not guarantee convergence.\", \"Ergo, what you showed is that approximating (III) is a consistent estimator of the Wasserstein distance, and that gradients of the estimator do not give a consistent estimator of the gradient of the Wasserstein distance. This is not much of a meaningful contribution, since it's widely known that the gradient of an estimator is not necessarily an estimator of the gradient unless one imposes constraints such as bounded Sobolev norms. The same problem appears in e.g. actor critic formulations or other problems like synthetic gradients (see Sobolev training of neural networks for instance). 
The formulation in (III) was introduced by you, and thus showing the failure of the gradient of it as a gradient estimator is pretty meaningless unless it highlights problems of the formulations that other people use in practice.\", \"Thus, the problems you showcase are not problems of the Wasserstein distance itself, are problems of the estimator you presented of the Wasserstein distance. One can do this with any quantity that can be rephrased as a maximization problem: estimating it and taking gradients is not the same as estimating its gradient. This is not a problem of the quantity itself but of the estimator. If the estimator was widely used this would be a meaningful contribution, but as you show the estimator being used in practice does not actually suffer from the problems of the estimator you introduced.\"]}",
"{\"title\": \"We truly appreciate your feedback. Let\\u2019s continue the discussion. (1/2)\", \"comment\": \"\", \"q\": \"This is why figure 1, b) doesn't make any sense, since the Kantorovich dual is defined in the *entire* space.\\n\\n>> In Figure 1b, we are talking about our new dual form of Wasserstein distance, not the \\u2018\\u2018Kantorovich dual\\u2019\\u2019. We have specified in the caption that it is about the new dual form of Wasserstein distance. If necessary, we can further add the specification in the sub-caption.\"}",
"{\"title\": \"We truly appreciate your feedback. Let\\u2019s continue the discussion. (2/2)\", \"comment\": \"\", \"q\": \"I think the changes made to the paper are a good direction in rewriting the claims (hence my changed score), but this is still far for sufficient in terms of contribution and technical correctness for a conference like ICLR.\\n\\n>> Thanks for the positive comments and changing the score, but we are wondering what the technical correctness issue is in our paper. Could you specify it? Then we can have further discussion and clarification. \\n\\nThanks again for your valuable comments. Let\\u2019s continue the discussion.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"\\\"And we realize that they basically assume the supports of the two distribution are the entire space (or they just ignore the things about support).\\\"\\nThis is false! They assume the duality functions are defined in the entire space, but not necessarily the distributions! This is why figure 1, b) doesn't make any sense, since the Kantorovich dual is defined in the *entire* space.\\n\\n\\\"We would also like to provide an intuitive explanation on why \\u201cf(x) - f(y) <= d(x, y) for all x ~ Pr, y ~ Pg\\u201d is enough. \\\" This is obvious, all you are doing is rewriting the Wasserstein distance in a way that the gradient of W(Pr, Ptheta) is not the average gradient of the critic on the generator's samples. This is not a flaw of the Wasserstein distance, since *the gradient of the Wasserstein distance wrt the generator's parameters itself is well defined and well behaved*, but the gradient with respect to your particular dual formulation is not. If the authors want to make the claim that not all dual formulations satisfy an envelope theorem (as in this pathological formulation, since the space of critic {|f(x) - f(y)| <= d(x,y) for all x ~ Pr, y ~ Ptheta} changes with theta, which is also contrary to what we call a dual formulation typically), this is the claim the authors should be making, not that the Wasserstein distance itself is broken. The authors try to claim in multiple moments sentences like \\\"the gradient of f\\u2217(x) from the dual form of Wasserstein distance given a compacted\\ndual constraint also does not reflect any useful information about other points in P\\\". 
This is not *the* dual formulation that the entire community uses (aka the Kantorovich duality), but a particular dual-like formulation that the authors concocted, which breaks.\\n\\nI think the changes made to the paper are a good direction in rewriting the claims (hence my changed score), but this is still far from sufficient in terms of contribution and technical correctness for a conference like ICLR.\"}",
"{\"title\": \"The authors are looking forward to your feedback.\", \"comment\": \"Dear reviewer,\\n\\nWe have provided the proof and hope that the proof and associated arguments can address your concerns about \\u2018\\u2018several false statements in this paper\\u2019\\u2019. \\n\\nIf possible, we would like to have more discussions with you. The discussions/suggestions would be helpful to us. And we are pleased to improve the paper according to your feedback if the arguments are not clear enough. \\n\\nThanks for reading this message. We will be grateful for any feedback you provide. \\n\\nRespectfully, \\nThe authors\"}",
"{\"title\": \"New version uploaded.\", \"comment\": \"We have checked the paper. It deals with the saddle-point optimization problem that stems from the minimax. As a work in another line of efforts that aims at addressing the training problem of GANs, we have added a reference to this paper in the introduction, when discussing the difficulty of GAN training.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for your constructive feedback.\", \"q\": \"Page 23, lines 34 and 38: cluttered expression $\\\\frac{\\\\partial [}{\\\\partial 2}]$ makes the statements not understandable. It also appears on page 24 several times.\\n\\n>> The $\\\\frac{\\\\partial [}{\\\\partial 2}]$ comes from a broken \\\\newcommand for second-order derivatives in LaTeX. We have fixed it. \\n\\nThanks a lot for the careful reading of our paper and the detailed comments, which are very helpful.\"}",
"{\"title\": \"Response to Reviewer 2 (1/2)\", \"comment\": \"Thanks for your constructive feedback.\", \"q\": \"Furthermore, the relationship between Lipschitz continuity and having a gradient is elaborated in [2].\\n\\n>> Our theorem is a more general version of the theorem you mentioned, where we no longer restrict the objective to be Wasserstein distance and state the general properties of optimal discriminative function f* under Lipschitz-continuity condition.\"}",
"{\"title\": \"Response to Reviewer 2 (2/2)\", \"comment\": \"Thanks for your constructive feedback.\", \"q\": \"The idea that most conclusions of WGAN hold *without* the Wasserstein distance, but with Lipschitz continuity are already elaborated in the WGAN paper. See in fact, Appendix G.1 [3], where this is described in detail.\\n\\n>> Given Wasserstein GAN is one of the most important papers in GAN community and we have realized that its argument in the main text does not hold very well, we believe it is necessary to highlight the point. Currently, most people tend to believe that a good distance metric is the key to the convergence or stability of GANs. One contribution of our paper is that it thoroughly expounded that: the property of the gradient in terms of \\\\nabla_x f*(x) is substantially different from the property of a distance metric; and to ensure the convergence of GANs or design new formulation for GANs, one should carefully check whether the gradient \\\\nabla_x f*(x) is reliable. \\n\\n>> Regarding Appendix G.1 [3], although it states that Lipschitz might be generally applicable, the discussion there is far from enough: (i) the discussion in Appendix G.1 is limited to the objective of the original GAN; our theorem, in contrast, elaborated a family of GAN objectives and characterized the necessary condition where Lipschitz condition ensures the convergence. 
(ii) the discussion in Appendix G.1 ignores the $\\\\log$ term in the objective of original GAN; our theorem is, however, directly applicable to the whole objective of the original GAN.* (iii) according to our theorem, for any objective other than W-distance, it is theoretically necessary to penalize the Lipschitz constant k(f) to ensure the convergence; though Appendix G.1 mentioned that to avoid the saturation, k needs to be small, it fails to cover the other fold that \\u2018\\u2018\\\\nabla_x f*(x)=0 for all x\\u2019\\u2019 might also happen even if there is no saturation region or the saturation region is not touched.** \\n\\n[1]: Optimal transport, old and new\\n[2]: http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a.pdf \\n[3]: http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a-supp.pdf \\n[4]: Improved Training of Wasserstein GANs \\n\\n* In Appendix G.1, it discusses the properties of f with bounded value-range (to simulate a classifier), while in our paper, f is assumed to have unbounded value range and loss metrics are applied to the unbounded f. Therefore, the arguments are actually quite different. \\n\\n** That is to say, small k is not enough to guarantee the convergence, and penalizing/decreasing the Lipschitz constant is necessary. Given a fixed Lipschitz constant k, according to our analysis, the following state is possible: \\u2018\\u2018Pg!=Pr\\u2019\\u2019 and \\u2018\\u2018for each x, f*(x) is optimal\\u2019\\u2019, but there do not exist two points x,y such that |f(x)-f(y)|>k|x-y|. In this case, \\u2018\\u2018\\\\nabla_x f*(x)=0 for all x\\u2019\\u2019 and the generator stops learning; however, Pg does not equal Pr.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for your constructive feedback.\", \"q\": \"Which currently known objectives do not satisfy the assumptions of the theorem?\\n\\n>> There exist a few practically used instances of GAN objectives that do not satisfy the assumptions of the theorem. For example, the objective of Least-Square GAN and the hinge loss used in [1][2][3]. More generally, any objective that \\u2018\\u2018holds a zero gradient at a certain point\\u2019\\u2019 does not satisfy the assumptions of the theorem (check Eq. 12). \\n\\n[1] Geometric GAN\\n[2] Energy-based Generative Adversarial Network\\n[3] Spectral Normalization for Generative Adversarial Networks\"}",
"{\"title\": \"We have provided the proof on the new dual form of Wasserstein distance.\", \"comment\": \"Dear reviewers,\\n\\nWe have provided detailed proofs of our new dual form of Wasserstein distance (in Appendix I). We hope the proofs and the associated detailed explanations can address the concern about \\u201cthe possible wrong in our key arguments\\u201d. \\n\\nAccording to the reviewers\\u2019 feedback, we have extensively revised the paper. The arguments are much clearer now. In particular, we have revised all statements about the failure of Wasserstein distance to make them strictly refer to the new dual form. We have also made the relationship among Lipschitz condition, Wasserstein distance, and the new dual form clearer. We believe there is no confusion in our statements now.\", \"the_key_points_are_summarized_as_follows\": \"(i) the dual form of Wasserstein distance can be written in a more compact manner, where the constraint is looser than Lipschitz condition. (ii) if using Wasserstein distance with the new dual form, it suffers from the convergence issue, where \\\\nabla_x f*(x) is ill-behaved. (iii) the above observations indicate that a well-defined distance metric does not necessarily guarantee the convergence of GANs. (iv) we prove that Lipschitz condition is a general key to solve the non-convergence problem of GANs, which works with a family of GAN objectives (detailed in Eq. 12) and is not limited to Wasserstein distance.\\n\\nDetailed comments from each reviewer have been addressed individually. Thanks a lot for this constructive feedback. And special thanks to AnonReviewer3, who meticulously checked our notations and formulations and provided detailed feedback, which helped us a lot in improving this manuscript. \\n\\nIf the reviewers have further concerns, we would appreciate further discussions.\"}",
"{\"title\": \"Lipschitzness of the discriminator is more critical than the choice of the divergence\", \"review\": \"The authors study the fundamental problems with GAN training. By performing a gradient analysis of the value surface of the optimal discriminator, the authors identify several key issues.\\n\\nIn particular, for a fixed GAN objective they consider the optimal discriminator f* and analyze the gradients of f* at points x ~ P_g and x~P_d. The gradient decouples into the magnitude and direction terms. In previous work, the gradient vanishing issue was identified and the authors show that it is fundamentally only controlling the magnitude. Furthermore, controlling the magnitude doesn\\u2019t suffice as the gradient direction itself might be non-informative to move P_g to P_d. The authors proceed to analyze two cases: (1) No overlap between P_g and P_d where they show that the original GAN formulation, as well as the Wasserstein GAN will suffer from this issue, unless Lipschitzness is enforced. (2) For the case where P_g and P_d have overlap, the gradients will be locally useful which the authors identify as the fundamental source of mode collapse. \\n\\nThe main theoretical result suggests that (1) penalizing the discriminator proportionally to the square of the Lipschitz constant is the key -- the choice of divergence is not. This readily implies that pure Wasserstein divergence may fail to provide useful gradients, as well as that other divergences combined with Lipschitz penalties (precise technical details in the paper) might succeed. Furthermore, it also implies that one can mix and match the components of the objective function for the discriminator, as long as the penalty is present, giving rise to many objectives which are not necessarily proper divergences. 
Finally, one can explain the recent success of many methods in practice: While the degenerate examples showing deficiencies of current methods can be derived, in practice we implement discriminators as some deep neural networks which induce relatively smooth value surfaces which in turn make the gradients more meaningful.\", \"pro\": [\"Clear setup and analysis of the considered cases. Interesting discussion from the perspective of the optimal discriminator and divergence minimization. The experiments on the toy data are definitely interesting and confirm some of the theoretical results.\", \"A convincing discussion of why Wasserstein distance is not the key, but rather it is the Lipschitz constant. This brings some light on why the gradient penalty or spectral normalization help even for the non-saturating loss [2].\", \"Discussion on why 1-Lip is sufficient, but might be too strong. The authors suggest that instead of requiring 1-Lip on the entire space, it suffices to require Lipschitz continuity in the blending region of the marginal distributions.\"], \"con\": [\"Practical considerations: I appreciate the theoretical implications of this work. However, how can we exploit this knowledge in practice? As stated by the authors, many of these issues are sidestepped by our current inductive biases in neural architectures.\", \"Can you provide more detail on your main theorem, in particular property (d). Doesn't it imply that the discriminator is constant?\", \"Which currently known objectives do not satisfy the assumptions of the theorem?\", \"The work would benefit from a polishing pass.\", \"========\", \"Thank you for the response. Given that there is no consensus on the questions posed by AnonReviewer2, there will be no update to the score.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks. We will check it and add the necessary reference.\"}",
"{\"title\": \"Review for \\\"Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets\\\"\", \"review\": \"The authors try to claim that Lipschitz continuity of the discriminator is a fundamental solution of GANs, and that current methods do not satisfy this approach in principle.\\n\\nThere are several false statements in this paper. In particular, sections 2.3 and 4.4 are wrong (and most of the paper is based on statements made there). The necessary constraint for the Wasserstein distance is NOT f(x) - f(y) <= d(x, y) for all x ~ Pr, y ~ Pg. It has to actually be 1-Lipschitz in the entire space. See Chapters 5 and 6 of [1], for example remark 6.4 or particular cases 5.16 and 5.4. Indeed, this is how it is written in all of the literature this reviewer is aware of, and it's a fact well used in the literature. Indeed, all the smoothness results for optimal transport in [1] heavily exploit the fact that the gradient of the critic is in the direction of the optimal transport map, which wouldn't be the case in the situation the authors try to claim of 'f not being defined outside of the support of Pr or Pg'.\\n\\nFurthermore, the relationship between Lipschitz continuity and having a gradient is elaborated in [2] https://arxiv.org/abs/1701.07875 , for example figure 2 clearly shows this. Furthermore, and contrary to what section 4.5 tries to claim, the idea that most conclusions of wgan hold *without* the Wasserstein distance, but with Lipschitz continuity is already elaborated in the wgan paper. 
See in fact, appendix G.1 [3], where this is described in detail.\\n\\n[1]: http://cedricvillani.org/wp-content/uploads/2012/08/preprint-1.pdf\\n[2]: http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a.pdf\\n[3]: http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a-supp.pdf\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Although the proposed general formulation is itself interesting, some of the arguments are not sound, and the proposed scheme is somehow similar to the gradient-penalty-based formulation in Gulrajani et al. (2017).\", \"review\": \"[pros]\\n- It proposes a general formulation of GAN-type adversarial learning as in (1), which includes the original GAN, WGAN, and IPM-type metrics as special cases.\\n- It also proposes use of the penalty term in terms of the Lipschitz constant of the discriminative function.\\n\\n[cons]\\n- Some of the arguments on the Wasserstein distance and on WGAN are not sound.\\n- Theorem 3 does not make sense.\\n- The proposed scheme is eventually similar to the gradient-penalty-based formulation in Gulrajani et al. (2017).\\n\\n[Quality]\\nI found some weaknesses in this paper, so that I judge the quality of this paper not to be high. For example, the criticisms on the Wasserstein distance in Section 2.3 and in Section 4.4, as well as the argument on WGAN at the end of Section 3.1, is not sound. The claim in Theorem 3 does not make sense, if we literally take its statement. All these points are detailed below.\\n\\n[Clarity]\\nThe main paper is clearly written, whereas in the appendices I noticed several grammatical and spelling errors as well as unclear descriptions.\\n\\n[Originality]\\nDespite that the arguments in this paper are interesting, the proposed scheme is somehow eventually similar to the gradient-penalty-based formulation in Gulrajani et al. (2017), with differences being introduction of loss metrics $\\\\phi,\\\\varphi,\\\\psi$ and the form of the gradient penalty, $\\\\max \\\\|\\\\nabla f(x)\\\\|_2^2$ in this paper versus $E[(\\\\|\\\\nabla f(x)\\\\|_2-1)^2]$ in Gulrajani et al. (2017). 
This fact has made me think that the originality of this paper is marginal.\\n\\n[Significance]\\nThis paper is significant in that it would stimulate empirical studies on what objective functions and what types of gradient penalty are efficient in GAN-type adversarial learning.\", \"detailed_comments\": \"In Section 2.3, the authors criticize use of the Wasserstein distance as the distance function of GANs, but their criticism is off the point. It is indeed a problem not of the Wasserstein distance itself, but of its dual formulation.\\n\\nIt is true mathematically that $f$ in equation (8) does not have to be defined outside the supports of $P_g$ and $P_r$ because it does not affect the expectations in (8). In practice, however, one may regard that $f$ satisfies the condition $f(x)-f(y)\\\\le d(x,y)$ not only on the supports of $P_g$ and $P_r$ but throughout the entire space $\\\\mathbb{R}^n$. It is equivalent to requiring $f$ to satisfy the 1-Lipschitz condition on $\\\\mathbb{R}^n$, and is what WGAN (Arjovsky et al., 2017) tries to do in its implementation of the \\\"critic\\\" $f$ via a multilayer neural network with weight clipping.\\n\\nOne can also argue that, if one defines $f$ only on the supports of $P_g$ and $P_r$, then it should trivially be impossible to obtain gradient information which can change the support of $P_g$. The common practice of requiring the Lipschitz condition throughout $\\\\mathbb{R}^n$ is thus reasonable from this viewpoint. This is therefore not the problem of the Wasserstein distance itself, but the problem regarding how the dual problem is implemented in learning of GANs. In this regard, the discussion in this section, as well as that in Section 4.4, is misleading.\\n\\nOn optimizing $k$, I do not agree with the authors' claim at the end of Section 3.1 that WGAN may not have zero gradient with respect to $f$ even when $P_g=P_r$. 
Indeed, when $P_g=P_r$, for any measurable function $f$ one trivially has $J_D[f]=E_{x\\\\sim P_g}[f(x)]-E_{x\\\\sim P_r}[f(x)]=0$, so that the functional derivative of $J_D$ with respect to $f$ does vanish identically. \\n\\nI do not understand the claim of Theorem 3. I think that the assumption is too strong. If one literally takes \\\"$\\\\forall x \\\\not= y$\\\", then one can exchange $x$ and $y$ in the condition $f(y)-f(x)=k\\\\|x-y\\\\|$ to obtain $f(x)-f(y)=k\\\\|y-x\\\\|$, which together would imply $k=0$, and consequently $f$ is constant. One would be able to prove that if there exists $(x,y)$ with $x \\\\not= y$ such that $f(y)-f(x)=k\\\\|x-y\\\\|$ holds then the gradient of $f$ at $x_t$ is equal to $k(y-x)/\\\\|x-y\\\\|$ under the Lipschitz condition.\", \"appendix_g\": \"Some notations should be made more precise. For example, in the definition of J_D the variable of integration $x$ has been integrated out, so that $J_D$ no longer has $x$ as its variable. The expression $\\\\partial J_D/\\\\partial x$ does not make any sense. Also, $J_D^*(k)$ is defined as \\\"arg min\\\" of $J_D$, implying as if $J_D^*(k)$ were a $k$-Lipschitz function.\\n\\nPage 5, line 36: $J_D(x)$ appears without explicit definition.\\n\\nPage 23, lines 34 and 38: Cluttered expression $\\\\frac{\\\\partial [}{\\\\partial 2}]$ makes the statements not understandable. It also appears on page 24 several times.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1lug3R5FX | On the Geometry of Adversarial Examples | [
"Marc Khoury",
"Dylan Hadfield-Menell"
] | Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. We propose a geometric framework, drawing on tools from the manifold reconstruction literature, to analyze the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space there are many directions off the manifold in which to construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well, but classifies points near the manifold incorrectly. Using our geometric framework we prove (1) a tradeoff between robustness under different norms, (2) that adversarial training in balls around the data is sample inefficient, and (3) sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust. | [
"adversarial examples",
"high-dimensional geometry"
] | https://openreview.net/pdf?id=H1lug3R5FX | https://openreview.net/forum?id=H1lug3R5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJeJ6fCHx4",
"Byx6bww8JE",
"ryl1arv8JN",
"rklS3XKBk4",
"Ske8ZAzg1E",
"S1xb4q4n0X",
"HkePK_6o0m",
"rklFsS2oRQ",
"Skesu13o0Q",
"Bk_T6Ywc0m",
"Hye6LVtG6Q",
"Skx0lEKzam",
"SJe00fYzpm",
"rJgtqzFfaX",
"SyeHPNA92m",
"rkxyHuB5n7",
"S1e3vSFun7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545097910804,
1544087301054,
1544086966727,
1544029101071,
1543675390095,
1543420457197,
1543391359216,
1543386528789,
1543384946567,
1543301573036,
1541735508801,
1541735414260,
1541735125795,
1541735056642,
1541231709352,
1541195830814,
1541080419588
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1090/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"~Emin_Orhan1"
],
[
"(anonymous)"
],
[
"~Tianhang_Zheng1"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"~Tianhang_Zheng1"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"~Tianhang_Zheng1"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1090/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1090/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1090/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1090/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper gives a theoretical analysis highlighting the role of codimension on the pervasiveness of adversarial examples. The paper demonstrates that a single decision boundary cannot be robust in different norms. They further proved that it is insufficient to learn robust decision boundaries by training against adversarial examples drawn from balls around the training set.\\n\\nThe main concern with the paper is that most of the theoretical results might have a very restrictive scope and the writing is difficult to follow. \\n\\nThe authors expressed concerns about a review not being very constructive. In a nutshell, the review in question points out that the theory might be too restrictive, that the experimental section is not very strong, that there are other works on related topics, and that the writing of the paper could be improved. While I understand the disappointment of the authors, the main points here appear to be consistent with the other reviews, which also mention that the theoretical results in this paper are not very general, that the writing is a bit complicated or heavy in mathematics, and not easy to follow, or that it is not clear if the bounds can be useful or easily applied in other work. \\n\\nOne reviewer rates the paper marginally above the acceptance threshold, while two other reviewers rate the paper below the acceptance threshold.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting work, but restrictive analysis\"}",
"{\"title\": \"Response\", \"comment\": \"Hi Emin,\\n\\nWe\\u2019re glad you found our paper insightful. In particular we\\u2019d like to thank you for bringing additional references to our attention. \\n\\n> Regarding potential improvements to our results on k-nn.\\n\\nThis is actually not true in our mathematical model. The reason it is not true is because we place no condition forbidding \\u201coversampling\\u201d. While a delta-cover requires that every point on the data manifold has a sample within a distance delta, there is no condition that forbids a point from having arbitrarily many such samples. In particular we can construct examples in which the nearest sample is on the correct class manifold while the next k-1 samples are on a different class manifold. The precise number of points that are guaranteed to be on the correct class manifold will vary as we move throughout the tubular neighborhood, but we can always construct a sampling situation in which the majority of the k samples are on the wrong class manifold, for sufficiently large k. These configurations are unlikely in practice, and it may be reasonable to impose a condition such as \\u201cno two samples are closer than some distance alpha\\u201d to prevent oversampling. With this additional condition it may be possible to prove something as you\\u2019ve described. Alternatively one could consider a statistical setting where points are sampled from a probability distribution with a specific support. As we\\u2019ve discussed in one of our rebuttals, we\\u2019re very interested in the direction of point sets sampled from class manifolds according to some probability distribution. However we feel that the lovely work of Wang et al has already well explored k-nn classifiers in this setting. 
\\n\\nFurthermore the reason we considered specifically 1-nearest neighbors is because the decision boundary induced by 1-nearest neighbors is comprised of Voronoi facets, and it is well known that the Voronoi cells are elongated in the directions orthogonal to the data manifold for dense samples. Thus 1-nearest neighbors is an example of a classification algorithm that naturally accounts for the high codimension of the data manifold, which we have argued is a key source of the pervasiveness of adversarial examples for naturally trained and adversarially trained deep networks.\"}",
"{\"title\": \"Response\", \"comment\": \"Hi,\\n\\nApologies for the slow response. The authors of [1] propose LID as a measure of the \\u201cintrinsic dimensionality\\u201d of a point with respect to a given dataset. The authors show that adversarial examples tend to exhibit higher LID than unperturbed examples and explore using LID features as a detector for adversarial examples. We note however that, as shown in Figure 1 of [1], LID may be larger than the embedding dimension (LID = 4.36 in a 2D embedding space). Thus LID is not easily interpretable as the dimension of a subspace as claimed. In our language, points in the normal directions off of the data manifold would exhibit higher LID. However higher LID is not necessarily indicative of an adversarial example. Whether or not a point off the data manifold is an adversarial example is dependent upon the decision boundary of the classifier. For example, many points off the data manifold in our examples may exhibit high LID, but the decision axis still classifies such points correctly. For the optimal decision boundary such points are not adversarial examples. Furthermore the local neighborhood near an adversarial example need not have the geometry of an affine subspace, and can exhibit more complex geometry depending on the decision boundary. \\n\\nWe draw attention to codimension as a key source of the pervasiveness of adversarial examples. Codimension is an exact characterization of every possible direction off of the data manifold, and is always equal to d - k. Unlike the LID, which characterizes the local dimensionality of a single point with respect to a data set, the normal space is a linear subspace of dimension d - k which captures all of the normal directions off of the manifold. When the codimension is high, there are many directions off of the data manifold in which to construct adversarial examples. 
We show empirically that in high codimension settings, standard optimization procedures and adversarial training have difficulty learning a decision boundary that is far away from the data manifold in every normal direction. Thus we conclude that high codimension increases vulnerability.\"}",
"{\"comment\": \"I'm not sure if the added text comparing with Wang et al. (ICML, 2018) captures the connection between the two works accurately enough. The main message of Wang et al. (2018), as I read it, is that there's a fundamental difference between adversarial robustness characteristics of k-nn classifiers with small k and those with large k. It seems to me that Theorems 5-6 in the current paper can be significantly improved for a k-nn with large k as well (instead of the current k=1 case). In fact, for large k, it seems to me that delta can be made arbitrarily large. I would encourage the authors to consider this case in a future revision.\\n\\nI think overall this paper provides useful insights. I particularly appreciate the results on the nearest neighbor classifiers. I think the robustness of nearest neighbor type models is underappreciated in the current literature on adversarial examples. Finally, I would like to point out a few papers that came out recently empirically demonstrating the superior adversarial robustness properties of these kinds of models. It may be useful for the authors to know about these more empirically motivated papers (disclosure: I'm the author of the last one listed below): \\n\\n1. Zhao J, Cho K (2018) Retrieval-augmented convolutional neural networks for improved robustness against adversarial examples. arXiv:1802.09502.\\n\\n2. Papernot N, McDaniel P (2018) Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv:1803.04765.\\n\\n3. Orhan AE (2018) A simple cache model for image recognition. NeurIPS 2018. arxiv:1805.08709.\", \"title\": \"connection to Wang et al. (ICML, 2018)\"}",
"{\"comment\": \"In paper [1], they found that adversarial examples escape to submanifold/subspace of higher intrinsic dimensionality, which seems equivalent to the findings proposed here. Given a fixed embedding/representation dimension, the lower the intrinsic dimension (of the underlying manifold), the higher the codimension (higher vulnerability), and also the easier escape to \\\"higher\\\" intrinsic dimensionality. Suppose embedding dimension d=10, intrinsic dimension k=2 vs k=5:\\n1) findings in this paper: codimension=10 - 2 (8, higher vulnerability) vs 10 - 5 (5);\\n2) findings in [1]: the intrinsic dimensionality of adversarial submanifold should have k>2 (also indicating higher vulnerability, as it is much easier to escape to k>2 than k>5) vs k>5. I am wondering, what makes codimension based analysis different to intrinsic dimensionality based analysis, as for a given dataset, its embedding dimension is fixed. Sorry if I misunderstood the idea.\\n\\n[1] Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality. ICLR 2018\", \"title\": \"Missing discussion?\"}",
"{\"comment\": \"Got it. So the BIM adversarial samples are crafted from the natural model and Madry's robust model (For NN, BIM is like a black-box attack).\\n\\nThanks for your reply!\", \"title\": \"Thanks for your reply\"}",
"{\"title\": \"Response\", \"comment\": \"In Figure 6 Left and Center, we compute adversarial examples using BIM for the natural (left) and robust (center) models and classify those adversarial examples using both the model and a nearest neighbor classifier. In the paper we state \\\"At eps = 0.5, nearest neighbors maintains accuracy of 78% to adversarial perturbations that cause the accuracy of the robust model to drop to 0%\\\". This is what we expect if the adversarial perturbations are in directions nearly normal to the data distribution where nearest neighbors naturally excels due to the geometric properties of its decision boundary.\\n\\nHowever NN has its own failure modes. In Figure 6 Right, we consider a custom iterative attack on a nearest neighbor classifier, as described in the second paragraph of Section 7.2. In this case the robust model is more successful at classifying the adversarial examples generated for nearest neighbors using this custom attack, implying that their failure modes are distinct. In Appendix J, Figure 20 we provide a qualitative comparison of the adversarial examples generated for both the robust model (using BIM) and nearest neighbors (using our custom attack). Figure 20 shows immediate qualitative differences between the two.\\n\\nWe hope this answers your question. Let us know if you have any others.\"}",
"{\"comment\": \"Just a minor question: what I refer to is the result in figure 6. BIM attack is tested on K-NN (1-NN). I was wondering how did you implement BIM on K-NN? Is K-NN differentiable?\", \"title\": \"Clarification\"}",
"{\"title\": \"Response\", \"comment\": \"Hi Tianhang. Thank you, we're glad you found the paper enlightening. Could you clarify to which result you're referring to specifically?\"}",
"{\"comment\": \"Very interesting results. Although the theory is only proved for some simple topologies, still got a lot of insights from this paper.\", \"just_one_small_question\": \"how did you implement BIM on KNN classifiers? Maybe it is already introduced in the paper, but I did not find it.\", \"title\": \"Very interesting results\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review. Please see our new post for common comments. Below we respond to your individual concerns.\", \"re\": \"Subfigure in Figure 4\\n\\nThank you for pointing out that we should have been more clear with our explanation. We can imagine two settings, one where we hold d fixed and increase k, and another where we hold k fixed and increase d. In the first setting, Figure 4 shows that lower dimensional problems are generally easier. This aligns well with results and intuition in the machine learning community. We are trying to draw attention to the second setting, that if we hold k fixed and increase d (and thus increase the codimension) the problem becomes more difficult. \\n\\n[1] Dimension Detection by Local Homology\\n[2] Maximum Likelihood Estimation of Intrinsic Dimension\\n[3] Estimating Local Intrinsic Dimension with k-Nearest Neighbor Graphs\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your review. Please see our new post for common comments. Below we respond to your individual concerns.\", \"re\": \"Concerns on experimental validation\\n\\nOur primary contribution is our theoretical results detailed in the summary above. Our experiments complement our theoretical results. Our synthetic training data is intended to explore the predictive power of our model of learners for real algorithms. Our results in Fig. 2 show that our theory for changing the norm predicts when real adversarial approaches fail. The CIRCLES and PLANES datasets show that real algorithms do, in fact, show this vulnerability to codimension. Our experiment on MNIST provides an example of a dataset with non-uniform sampling where nearest neighbor classifiers have fundamentally different performance than an adversarial training approach. We will update the paper to emphasize ways our experiments complement and support our other results. We have considered additional experiments that modify co-dimension for MNIST or the big-MNIST domain from [1] and would be happy to run them if requested.\\n\\nWe would like to highlight the fact that we made careful effort to use state-of-the-art attacks and defenses and followed best practices when running the experiments (e.g. averaging over multiple retrainings).\\n\\n[1] Shafahi etal, Are adversarial examples inevitable?\\n[2] Adversarial spheres\\n[3] Adversarially robust generalization requires more data\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your review. Please see our new post for common comments. Below we respond to your individual concerns.\", \"re\": \"Significance of robustness of nearest neighbors versus \\u2018x^\\\\epsilon based\\u2019 algorithms.\\n\\nWe apologize for not making the point of the results in Section 6 clear. The importance of Theorem 1 is to show that different classification algorithms have different sampling requirements with respect to robustness. In particular nearest neighbor classifiers require fewer samples to achieve the same level of robustness for a fixed codimension. The ball-based learner is a theoretical model of the adversarial training used in state-of-the-art defenses for adversarial examples [3,4]. We have updated this section to make the importance of our results more clear. \\n\\n[1] Towards the first adversarially robust neural network model on MNIST. \\n[2] Adversarially robust generalization requires more data. NIPS\\n[3] Explaining and harnessing adversarial examples. ICLR\\n[4] Towards deep learning models resistant to adversarial attacks. ICLR\"}",
"{\"title\": \"Common Response to All Reviewers\", \"comment\": \"We would like to thank all the reviewers for their helpful comments. To avoid repetition, we present general comments here and individual comments below.\\n\\nIn reading the reviews we realized that our introduction was unclear with respect to the contributions of the paper. We have restructured the introduction to appropriately highlight our contributions. The primary contributions of the paper are as follows. First we introduce a geometric framework, where we model classes of data as lying on distinct manifolds. Second we use this framework to show that there exists a tradeoff in robustness under different norms. Third, we show that in theory high codimension plays a role in vulnerability to adversarial examples. Vulnerability to adversarial examples is often attributed to high dimensional input spaces. To our knowledge this is the first work that investigates the role codimension plays in adversarial examples. We give theoretical results that show that even under ideal sampling conditions, state of the art methods, like adversarial training, fail in simple settings. Interestingly we find that different classification algorithms are less sensitive to changes in codimension. In preliminary experiments on synthetic data and on MNIST we provide empirical evidence to support this point. \\n\\nRegarding the related work of Wang et al. (ICML 2018) on kNN. We were unaware of the work of Wang et al. and we would like to thank the R2 for bringing this important related work to our attention. In developing the paper we only turned to nearest neighbor as an example of a classification algorithm that is robust to high-codimension. We apologize for the lack of clarity. The work of Wang et.al. is related and we have updated the paper to appropriately contextualize our results with respect to this work. Specifically we have added the following passage to the related work.\\n\\n\\u201cWang et al. 
(2018) explore the robustness of k-nearest neighbor classifiers to adversarial examples. In the setting where the Bayes optimal classifier is uncertain about the true label of each point, they show that k-nearest neighbors is not robust if k is a small constant. They also show that if k is asymptotically large, then k-nearest neighbors is robust. Using our geometric framework we show a complementary result: in the setting where each point is certain of its label, 1-nearest neighbors is robust to adversarial examples.\\u201d\\n\\nApproaching the problem from a geometric perspective, we reach the complementary result that 1-nearest neighbors is robust in the setting where each sample is certain of its true label.\"}",
"{\"title\": \"interesting work, but the theory is not very deep\", \"review\": \"This paper studies the geometry of adversarial examples under the assumption that dataset encountered in practice exhibit lower dimensional structure despite being embedded in very high dimensional input spaces. Under the proposed framework, the authors analyze several interesting phenomena and give theoretical results related to the necessary number of samples needed to achieves robustness. However, the theory in this paper is not very deep.\", \"pros\": \"The logic of this paper is very clear and easy to follow. Definitions and theories are illustrated with well-designed figures.\\n\\nThis paper shows the tradeoff between robustness under two norm and infinity norm for the case when the manifolds of two classes of data are concentric spheres.\\n\\nWhen data are distributed on a hypercube in a k dimensional subspace, the authors show that balls with radius \\\\delta centered at data samples only covers a small part of the \\u2018\\\\delta neighborhood\\u2019 of the manifold. \\n\\nGeneral theoretical results on robustness and minimum training set to guarantee robustness are given for nearest neighbor classifiers and other classifiers.\", \"cons\": \"Most of the theoretical results in this paper are not very general. The tradeoff between robustness in different norms are only shown for concentric spheres; the \\u2018X^\\\\epsilon is a poor model of \\\\mathcal{M}^\\\\epsilon\\u2019 section is only shown for hypercubes in low dimensional subspaces. \\n\\nSection 5 is not very convincing. As is discussed later in the paper, although $X^\\\\delta$ only covers a small part of \\\\mathcal{M}^\\\\delta, robustness can be achieved by using balls centered at samples with larger radius.\\n\\nMost of the analysis is based on the assumption that samples are perfectly distributed to achieve the best possible robustness result. 
A more interesting case is probably when samples are generated on the manifold following some probabilistic distributions. \\n\\nTheorems given in Section 6 are reasonable, but not very significant. It is not very surprising that nearest neighbor classifier is more robust than \\u2018x^\\\\epsilon based\\u2019 algorithms, especially when the samples are perfectly distributed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Synthetic examples and weak analysis of nearest neighbor classifier\", \"review\": \"This paper gives a theoretical analysis of adversarial examples, showing that (i) there exists a tradeoff between robustness in different norms, (ii) adversarial training is sample inefficient, and (iii) the nearest neighbor classifier can be robust under certain conditions. The biggest weakness of the paper is that theoretical analysis is done on a very synthetic dataset, whereas real datasets can hardly be conceived to exhibit similar properties. Furthermore, the authors do not give a bound on the probability that the sampling conditions for the robust nearest neighbor classifier (Theorem 1) will be satisfied, leading to potentially vacuous results.\\n\\nWhile I certainly agree that theoretical analysis of the adversarial example phenomenon is challenging, there have been prior work on both analyzing the robustness of k-NN classifiers (Wang et al., 2018 - http://proceedings.mlr.press/v80/wang18c/wang18c.pdf) and on demonstrating the curse of dimensionality as a major contributing factor to adversarial examples (Shafahi et al., 2018 - https://arxiv.org/abs/1809.02104, concurrent submission to ICLR). I am very much in favor of the field moving in these directions, but I do not think this submission is demonstrating any meaningful progress.\", \"pros\": [\"Rigorous theoretical analysis.\"], \"cons\": [\"Results are proven for particular settings rather than relying on realistic data distribution assumptions.\", \"Paper is poorly written. The authors use unnecessarily complicated jargon to explain simple concepts and the proofs are written to confuse the reader. This is especially a problem since the paper exceeds the suggested page limit of 8 pages.\", \"While it is certain that nearest neighbor classifiers are robust to adversarial examples, their application is limited to only very simple datasets. 
This makes the robustness result lacking in applicability.\", \"Weak experimental validation. The authors make repeat use of synthetic datasets and only validate their claim on MNIST as a real dataset.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting paper on adversarial examples, but with certain concerns\", \"review\": \"This paper tried to analyze the high-dimensional geometry of adversarial examples from a geometric framework. The authors explained that there exists a tradeoff between being robust to different norms. They further proved that it is insufficient to learn robust decision boundaries by training against adversarial examples drawn from balls around the training set. Moreover, this paper showed that nearest neighbor classifiers do not suffer from this insufficiency.\\n \\nIn general, I think this paper is very interesting and enlightening. The authors analyzed the most robust boundary of norm 2 and norm infinity in different dimensions through a simple example and concluded that the single decision boundary cannot be robust in different norms. In addition, the author started from a special manifold and proposed a bound (ratio of two volumes) to prove the insufficiency of the traditional adversarial training methods and then extended to arbitrary manifold. It is good that this might provide a new way to evaluate the robustness of adversarial training method. However, I have some concerns: 1) Is it rigorous to define the bound by vol_X/vol_pi? In my opinion, the ratio of the volume of intersection (X^\\\\del and \\\\pi^\\\\del) and vol \\\\pi^\\\\del may be more rigorous? 2) I don't know if such bound can be useful or easily applied in other work? In my opinion, it might be difficult, since the volume itself appears difficult to calculate. \\nI think the paper is a bit complicated or heavy in mathematics, and not easy to follow (though I believe I have well understood it). Some typos and minor issues are also listed as below.\", \"minor_concerns\": \"1. At the end of the introduction, 3 attacking methods, FGSM, BIM, and PGD, should be given their full names and also citations are necessary.\\n2. Could you provide a specific example to illustrate the bound in Eq. 
(3), e.g. in the case of d=3, k=1.\\n3. In Page 7, \\u201cFigure 4 (left) shows that this expression approaches 1 as the codimension (d-k) of Pi increases.\\u201d I think, the subfigure shows that the ratio approaches 1 when d and k are all increased.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByldlhAqYQ | Transfer Learning for Sequences via Learning to Collocate | [
"Wanyun Cui",
"Guangyu Zheng",
"Zhiqiang Shen",
"Sihang Jiang",
"Wei Wang"
] | Transfer learning aims to solve the data sparsity for a specific domain by applying information of another domain. Given a sequence (e.g. a natural language sentence), the transfer learning, usually enabled by recurrent neural network (RNN), represent the sequential information transfer. RNN uses a chain of repeating cells to model the sequence data. However, previous studies of neural network based transfer learning simply transfer the information across the whole layers, which are unfeasible for seq2seq and sequence labeling. Meanwhile, such layer-wise transfer learning mechanisms also lose the fine-grained cell-level information from the source domain.
In this paper, we proposed the aligned recurrent transfer, ART, to achieve cell-level information transfer. ART is in a recurrent manner that different cells share the same parameters. Besides transferring the corresponding information at the same position, ART transfers information from all collocated words in the source domain. This strategy enables ART to capture the word collocation across domains in a more flexible way. We conducted extensive experiments on both sequence labeling tasks (POS tagging, NER) and sentence classification (sentiment analysis). ART outperforms the state-of-the-arts over all experiments.
| [
"transfer learning",
"recurrent neural network",
"attention",
"natural language processing"
] | https://openreview.net/pdf?id=ByldlhAqYQ | https://openreview.net/forum?id=ByldlhAqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeT3H_ox4",
"rJgMuxuieN",
"S1gw3tDil4",
"ryg9EAIjx4",
"BkxHEW8jl4",
"rkgAfEjge4",
"rJgs-aD6Am",
"SyeoJ4gjR7",
"S1g0uXesAX",
"ByeMKYAtCX",
"rkxdELsKRm",
"H1ltRziK0X",
"rJeHRZoYC7",
"Hye7v16q3m",
"r1xwngaY3Q",
"SklbRJ0_37"
],
"note_type": [
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545467316767,
1545465962131,
1545464238555,
1545461298362,
1545457965431,
1544758293975,
1543499011044,
1543336931421,
1543336822287,
1543264634374,
1543251504220,
1543250641084,
1543250380619,
1541226331432,
1541161135294,
1541099464887
],
"note_signatures": [
[
"~zheng_li4"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"~zheng_li4"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"~zheng_li4"
],
[
"ICLR.cc/2019/Conference/Paper1089/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1089/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1089/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1089/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1089/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1089/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"I think the first concern has not been addressed by AnonReviewer3. You still use 1400 target domain labeled data for training. Do your think it is minimally supervised domain adaptation in a small-scale setting? You can check some supervised domain methods (e.g., http://aclweb.org/anthology/P18-1233), they only use 50 target domain labeled data.\\n\\nI have to say, even without the aid of the combination of rest three domains, the hierarchical attention network (with MLP, not GRU Yang et.al) can achieve better results based on 1400 target domain labeled data than the reported results in Table 3.\", \"title\": \"Response\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your concerns.\\n\\nYour first concern is already addressed by AnonReviewer3. Please refer to our response to AnonReviewer3 about minimally supervised domain adaptation.\\n\\nFor your second concern, we will try to fine-tune the hyperparameters and see if it changes a lot before the camera ready. We will also release our source code and datasets later.\"}",
"{\"comment\": \"Hi Cui,\\n\\nThanks for your reply.\\n\\nI think you may ignore some critical points.\\n\\nFirst, the experiment setting may be unfair. According to the description of your paper \\\"We use the training data and development data from both domains for training and validating. And we use the testing data of the target domain for testing\\\", your method should be supervised domain adaptation method. However, you have compared with many unsupervised domain adaptation methods in a supervised setting. \\n\\nSecond, I have mentioned that if the setting has been changed, you should tune the hyper-parameters, not only just use the original setup in a different setting.\\n\\nYou can send me your original raw data split ([email protected]) such that I can give you updated results in the next few days.\", \"title\": \"Missing some points\"}",
"{\"title\": \"detailed experimental settings for HATN\", \"comment\": \"Hi Zheng, let's try to make it clearer and reach an accurate agreement for HATN.\\n\\nAs described in the response above, we used the source code and keeped hyper parameters in https://github.com/hsqmlzno1/HATN. More specifically, we initialize HATN by 300d skip-gram vectors. The dimensions of the word attention layer and the sentence attention layer are both 300. We train the model with batch_size=50, learning rate=1e-4. We use the same early-stopping policy. Besides, we use the same unlabeled data provided from https://github.com/hsqmlzno1/HATN. These settings are all from your github repository.\\n\\nPlease provide more details of your implementation and you results. We will consider updating the results in the camera ready version if we find the results change a lot in your settings.\"}",
"{\"comment\": \"I have got this information from my github issues (https://github.com/hsqmlzno1/HATN/issues/5). I'm the author of the hatn model. I think the author of this paper may have reported inaccurate results about the hatn model in the rebuttal. I have also verified the hatn model in the small-scale setting, the results still remain to be superior, which is largely better than the reported results. So I think the author need to check the experiments and tune the hyper-parameters if the setting has been changed, which could be more promising. Thanks!\", \"title\": \"Poor results about the HATN model\"}",
"{\"metareview\": \"This paper presents a method for transferring source information via the hidden states of recurrent networks. The transfer happens via an attention mechanism that operates between the target and the source. Results on two tasks are strong.\\n\\nI found this paper similar in spirit to Hypernetworks (David Ha, Andrew Dai, Quoc V Le, ICLR 2016) since there too there is a dynamic weight generation for network given another network, although this method did not use an attention mechanism.\\n\\nHowever, reviewers thought that there is merit in this paper (albeit pointed the authors to other related work) and the empirical results are solid.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta Review\"}",
"{\"title\": \"Detailed rewritings\", \"comment\": \"For your detailed writing advices, we have rewritten the two sentences accordingly.\\n\\n1.\\tWe rewrote the sentence \\n\\u201cART discriminates between information of the corresponding position and that of all positions with collocated words.\\u201d \\nto \\n\\u201cFor each word in the target domain, ART learns to incorporate two types of information from the source domain: (a) the hidden state corresponding to the same word, and (b) the hidden states for all words in the sequence.\\u201d\\n\\n2.\\tWe rewrote the sentence \\n\\u201cBy using the attention mechanism (Bahdanau et al., 2015), we compute the correlation for each word pair.\\u201d \\nto \\n\\u201cART learns to incorporate information (b) based on the attention scores (Bahdanau et al., 2015) of all words from the source domain.\\u201d\\n\\nFor more writing improvements, please refer to the previous comment or the paper.\"}",
"{\"title\": \"Writing and Minimally Supervised Domain Adaptation\", \"comment\": \"Thank you for your response.\\n\\n== Writing ==\\nWe agree that there is room for writing of the original submission. We have been improving the writing quality. We believe that the latest version is much clearer now.\", \"we_made_the_following_revisions_to_improve_the_writing\": \"1. We gave more descriptions of how ART works.\\ni. [Learn to Collocate and Transfer] In section 1, we rewrote paragraph of \\u201clearn to collocate and transfer\\u201d. We highlighted how ART incorporates two types of information and uses the attention mechanism to capture the long-term cross-domain dependency.\\nii. [Architecture] In section 2, we added a paragraph to describe the architecture of ART. We elaborated how it incorporates the information of the source domain from the pre-trained model.\\niii. [Model training] In section 2, we rewrote the paragraph of model training. We highlighted the model pre-training procedure and fine-tuning procedure of ART.\\n2. We added the interpretations and examples for some confusing notions, such as \\u201clevel-wise transfer learning\\u201d, \\u201ccell-level transfer learning\\u201d, and \\u201ccollocate\\u201d.\\n3. We abandoned or reduced some vague words or phrases, such as \\u201cword correlation\\u201d, \\u201ccollocate\\u201d. The revised version uses more precise expressions, such as \\u201cdependencies between two words\\u201d, \\u201cincorporate information by their attention score\\u201d.\\n4. We rewrote the related work section. We compared ART with BERT and ELMo. The latter two approaches also use pre-trained models for downstream tasks.\\n5. We fixed some typos.\\n\\n== Minimally Supervised Domain Adaptation ==\\n1. For merging three domains as one source domain, we try to evaluate the effectiveness of ART when the source domain corpus is rich and the target domain corpus is scarce. 
We merge the rest three domains to enrich the source domain corpus. We do not differentiate samples from the three domains, which is different from standard multi-source domain adaptation. \\n2. We will highlight that we use annotated target data, which is different from some baselines. The annotated target data is also used by HRN in Table 5 and other pre-training approaches.\"}",
"{\"title\": \"Writing\", \"comment\": \"Thank you for your encouraging comments.\\nWe agree that there is room for writing of the original submission. We have been improving the writing quality. We believe that the latest version is much clearer now.\", \"we_made_the_following_revisions_to_improve_the_writing\": \"1. We gave more descriptions of how ART works.\\ni. [Learn to Collocate and Transfer] In section 1, we rewrote paragraph of \\u201clearn to collocate and transfer\\u201d. We highlighted how ART incorporates two types of information and uses the attention mechanism to capture the long-term cross-domain dependency.\\nii. [Architecture] In section 2, we added a paragraph to describe the architecture of ART. We elaborated how it incorporates the information of the source domain from the pre-trained model.\\niii. [Model training] In section 2, we rewrote the paragraph of model training. We highlighted the model pre-training procedure and fine-tuning procedure of ART.\\n2. We added the interpretations and examples for some confusing notions, such as \\u201clevel-wise transfer learning\\u201d, \\u201ccell-level transfer learning\\u201d, and \\u201ccollocate\\u201d.\\n3. We abandoned or reduced some vague words or phrases, such as \\u201cword correlation\\u201d, \\u201ccollocate\\u201d. The revised version uses more precise expressions, such as \\u201cdependencies between two words\\u201d, \\u201cincorporate information by their attention score\\u201d.\\n4. We rewrote the related work section. We compared ART with BERT and ELMo. The latter two approaches also use pre-trained models for downstream tasks.\\n5. We fixed some typos.\"}",
"{\"title\": \"Solids results; still on the fence\", \"comment\": \"Thanks for providing the latest set of results. Your experimental results are quite solid and so I am improving my score. However not giving it very high scores because I still feel a little hesitant about the writing quality in this paper. The technical writing is still subpar. E.g.\\n1) \\\"ART discriminates between information of the corresponding position and that of all positions with collocated words.\\\" => you probably want to say \\\"ART incorporates the hidden state representation corresponding to the same position and a function of the hidden states for all other words weighted by their attention scores\\\"\\n2) \\\"By using the attention mechanism (Bahdanau et al., 2015), we compute the correlation for each word pair\\\" => correlation has a very specific meaning and it makes it confusing if you use here.\\n\\nThere are several such examples.\"}",
"{\"title\": \"Response to AnonReviewer3 [new baselines, experimental settings, and clarifications]\", \"comment\": \"Thank you for your insightful and supportive comments. We have made the following revisions: (1) We added two baselines according to your comments. The results further justify the effectiveness of ART. (2) We added a new experiment for minimally supervised domain adaptation in Table 3. ART still outperforms all the competitors by a large margin. (3) We clarified the ART model and model training process in the revised paper. We will give more details below:\\n\\n== Writing ==\\n1. High level description of the ART model. \\nWe have added the following description of ART model in section 2.\\n\\u201cThe source domain and the target domain share an RNN layer, from which the common information is transferred. We pre-train the neural network of the source domain. Therefore the shared RNN layer represents the semantics of the source domain. The target domain has an additional RNN layer. Each cell in it accepts transferred information through the shared RNN layer. Such information consists of (1) the information of the same word in the source domain (the red edge in figure 2); and (2) the information of all its collocated words (the blue edges in figure 2). ART uses attention to decide the weights of all candidate collocations. The RNN cell controls the weights between (1) and (2) by an update gate.\\u201d\\n\\n2. Model training. We add more details of the model training part in section 2.\\nWe first pre-train the parameters of the source domain by its training samples. Then we fine-tune the pre-trained model with additional layers of the target domain. The fine-tuning uses the training samples of the target domain. All parameters are jointly fine-tuned.\\n\\n3. Related work.\\nWe have rewritten the related work section. 
We compare with other cell-level transfer learning approaches and pre-trained models.\\n\\n== Innovation of cell-level transfer ==\\nWe agree that some previous transfer learning approaches also consider cell-level transfer. But none of them considers the word collocations. As a pre-trained model, ELMo uses bidirectional LSTMs to generate contextual features. Instead, ART uses attention mechanism in RNN that each cell in the target domain directly access information of all cells in the source domain. We added more details in the related work section.\\n\\n== Baselines ==\\nWe added two baselines, LSTM-u and FLORS, according to your comments. LSTM-u uses a standard LSTM and is trained by the union data of the source and the target domain. FLORS is a domain adaptation model for POS tagging (http://www.aclweb.org/anthology/Q14-1002). Their results are shown in Table 2 and Table 5. ART outperforms LSTM-u in almost all settings by a large margin. Note that FLORS is independent of the target domain. If the training corpus of the target domain is quite rare (Twitter/0.01), FLORS performs better. 
But with richer training data of the target domain (Twitter/0.1), ART outperforms FLORS by a large margin.\", \"table_2\": \"Classi\\ufb01cation accuracy on the Amazon review dataset.\\nSource\\t\\tTarget\\t\\tLSTM-u\\tART\\nBooks\\t\\tDVD\\t\\t0.770 \\t0.870 \\nBooks\\t\\tElectronics\\t0.805 \\t0.848 \\nBooks\\t\\tKitchen\\t\\t0.845 \\t0.863 \\nDVD\\t\\tBooks\\t\\t0.788 \\t0.855 \\nDVD\\t\\tElectronics\\t0.788 \\t0.845 \\nDVD\\t\\tKitchen\\t\\t0.823 \\t0.853 \\nElectronics\\tBooks\\t\\t0.740 \\t0.868 \\nElectronics\\tDVD\\t\\t0.753 \\t0.855 \\nElectronics\\tKitchen\\t\\t0.863 \\t0.890 \\nKitchen\\t\\tBooks\\t\\t0.760 \\t0.845 \\nKitchen\\t\\tDVD\\t\\t0.758 \\t0.858 \\nKitchen\\t\\tElectronics\\t0.815 \\t0.853 \\n Average\\t\\t\\t\\t0.792 \\t0.858\", \"table_5\": \"Performance over POS tagging.\\nTask\\t\\t\\tSource\\tTarget\\t\\tFLORS\\tART\\nPOS Tagging\\t PTB\\t\\tTwitter/0.1\\t0.763\\t0.859\\nPOS Tagging\\t PTB\\t\\tTwitter/0.01\\t0.763\\t0.658\\n\\n== Experimental settings ==\\nBased on your comment, we added a new experiment for minimally supervised domain adaptation in sentence classification. For each target domain in the Amazon review dataset, we combined the training/development data of rest three domains as the source domain. We show the results in Table 3. ART outperforms the competitors by a large margin. This verifies its effectiveness in the setting of minimally supervised domain adaptation.\", \"table_3\": \"Classification accuracy with scarce training samples of the target domain.\\nTarget\\t\\tLSTM\\tLSTM-u\\tCCT\\t\\tLWT\\tHATN\\tART\\nBooks\\t\\t0.745 \\t0.813 \\t0.848 \\t0.808 \\t0.820 \\t0.895 \\nDVD\\t\\t0.695 \\t0.748 \\t0.870 \\t0.770 \\t0.828 \\t0.875 \\nElectronics\\t0.733 \\t0.823 \\t0.848 \\t0.818 \\t0.863 \\t0.865 \\nKitchen\\t\\t0.798 \\t0.840 \\t0.860 \\t0.840 \\t0.833 \\t0.870 \\nAverage\\t\\t0.743 \\t0.806 \\t0.856 \\t0.809 \\t0.836 \\t0.876\"}",
"{\"title\": \"Response to AnonReviewer2 [new baselines and clarifications]\", \"comment\": \"Thank you for your insightful and supportive comments. We have made the following revisions: (1) We added two baselines according to your comments. The results further justify the effectiveness of ART. (2) We clarified \\u201ccollocate\\u201d, \\u201clayer-wise transfer learning\\u201d, \\u201cmodel training\\u201d, and their related issues. We give more details below:\\n\\n1. Regarding computational cost:\\nThe network depth only increases by 2 if we ignore the detailed operations (e.g. gates). One is caused by collocating and transferring. Another one is caused by merging the original input, the previous cell\\u2019s hidden state, and the transferred information. So the time cost does not increase much.\\n\\n2. Regarding Con1: why does 'f' not overfit to only selecting information from the target domain? \\nYour understanding is correct. Function 'f' will overfit to the target domain. All parameters will be jointly fine-tuned by the training samples of the target domain. Nevertheless, the pre-training for the source domain still helps because it provides representations of the source domain. Another recent successful example of using pre-trained models is BERT (Devlin et al., 2018), which also fine-tunes all the parameters to specific tasks. \\nAnd we rewrite the model training part in section 2 to make it clearer.\\n\\u201cWe first pre-train the parameters of the source domain by its training samples. Then we fine-tune the pre-trained model with additional layers of the target domain. The fine-tuning uses the training samples of the target domain. All parameters are jointly fine-tuned.\\u201d\\n\\nRegarding Con2. More simple baselines.\\nFirst, we added a baseline model, LSTM-s, which directly uses parameters from the source domain to the target domain. The results are shown in Table 2. 
ART outperforms the baseline by a large margin.\", \"table_2\": \"Classification accuracy on the Amazon review dataset.\\nSource\\t\\tTarget\\t\\tLSTM-s\\tHATN\\tART\\nBooks\\t\\tDVD\\t\\t0.718\\t0.813\\t0.870\\nBooks\\t\\tElectronics\\t0.678\\t0.790\\t0.848\\nBooks\\t\\tKitchen\\t\\t0.678\\t0.738\\t0.863\\nDVD\\t\\tBooks\\t\\t0.730\\t0.798\\t0.855\\nDVD\\t\\tElectronics\\t0.663\\t0.805\\t0.845\\nDVD\\t\\tKitchen\\t\\t0.708\\t0.765\\t0.853\\nElectronics\\tBooks\\t\\t0.648\\t0.763\\t0.868\\nElectronics\\tDVD\\t\\t0.648\\t0.788\\t0.855\\nElectronics\\tKitchen\\t\\t0.785\\t0.808\\t0.890\\nKitchen\\t\\tBooks\\t\\t0.653\\t0.740\\t0.845\\nKitchen\\t\\tDVD\\t\\t0.678\\t0.738\\t0.858\\nKitchen\\t\\tElectronics\\t0.758\\t0.850\\t0.853\\n Average\\t\\t\\t\\t0.695\\t0.783\\t0.858\\n\\nSecond, you suggest directly concatenating the hidden states of the source and the target domains. In fact, we already proposed a very similar baseline CCT. The only difference is that CCT uses a gate to merge the two values, instead of concatenation. ART outperforms CCT in all cases.\\nThird, we already used 100d GloVe vectors to initialize ART and all its ablations we proposed in this paper. The pre-trained word embeddings are also widely used by its competitors (e.g. AMN and HATN). We have added the description in section 4.\\n\\n\\nRegarding Con 3. Experiments: the hierarchical attention transfer work of Li et al.\\nWe added the comparison with HATN (Li et al 2018). The results are shown in Table 2 and Table 3. We use the source code and hyper parameters of (Li et al 2018) from the authors\\u2019 Github. We changed its labeled training samples from 5600 to 1400 as with ART.\\n\\nThe results are shown in Table 2 above. ART still beats the baseline by a large margin. This verifies its effectiveness.\\n\\n== Writing ==\\nRegarding Writing1.\\nFirst, for the meaning of \\u201ccollocate\\u201d, we added more explanations and take figure 1 as an example in section 1. 
\\n\\u201cHere \\u201ccollocate\\u201d indicates that a word's semantics can have long-term dependency on other words. To understand a word in the target domain, we need to precisely represent its collocated words from the source domain. We learn from the collocated words via the attention mechanism. For example, in figure 1, \\u201chate\\u201d is modified by the adverb \\u201csometimes\\u201d, which implies the act of hating is not serious. But the \\u201csometimes\\u201d in the target domain is trained insufficiently. We need to transfer the semantics of \\u201csometimes\\u201d in the source domain to understand the implication.\\u201d\\nSecond, to avoid the ambiguity of \\u201csentence pair\\u201d, we rewrote the description in the revised version.\\n\\u201cThe model needs to be evaluated O(n^2) times for each sentence, due to the enumeration of n indexes for the source domain and n indexes for the target domain. Here n denotes the sentence length.\\u201d\\n\\nRegarding Writing2.\\n\\u201cLayer-wise transfer learning\\u201d indicates that the approach represents the whole sentence by a single vector. So the transfer mechanism is only applied to the vector. We cannot apply layer-wise transfer learning algorithms to sequence labeling tasks.\\nWe added the descriptions in section 1.\"}",
"{\"title\": \"Response to AnonReviewer1 [new baselines and clarifications]\", \"comment\": \"Thank you for your insightful and supportive comments. We have made the following revisions: (1) We added two baselines based on your comments. The results further justified the effectiveness of ART. (2) We added the clarification of \\u201clayer-wise transfer learning\\u201d, \\u201ccell-level transfer learning\\u201d, and \\u201ccollocate\\u201d in section 1. We will give more details below:\\n\\n==Experiments==\\nWe added two baselines, LSTM-u and HATN, according to your comments. LSTM-u uses a standard LSTM and is trained on the union of the source and target domain data. The HATN model is from the paper \\\"Hierarchical Attention Transfer Network for Cross-domain Sentiment Classification\\\" (Li et al 2018). We use the source code and hyper parameters of (Li et al 2018) from the authors\\u2019 Github. We changed its labeled training samples from 5600 to 1400 as with ART.\\n\\nThe results are shown in Table 2. ART still beats the baselines by a large margin. This verifies its effectiveness.\", \"table_2\": \"Classification accuracy on the Amazon review dataset.\\nSource\\t\\tTarget\\t\\tLSTM-u\\tHATN\\tART\\nBooks\\t\\tDVD\\t\\t0.770\\t0.813\\t0.870\\nBooks\\t\\tElectronics\\t0.805\\t0.790\\t0.848\\nBooks\\t\\tKitchen\\t\\t0.845\\t0.738\\t0.863\\nDVD\\t\\tBooks\\t\\t0.788\\t0.798\\t0.855\\nDVD\\t\\tElectronics\\t0.788\\t0.805\\t0.845\\nDVD\\t\\tKitchen\\t\\t0.823\\t0.765\\t0.853\\nElectronics\\tBooks\\t\\t0.740\\t0.763\\t0.868\\nElectronics\\tDVD\\t\\t0.753\\t0.788\\t0.855\\nElectronics\\tKitchen\\t\\t0.863\\t0.808\\t0.890\\nKitchen\\t\\tBooks\\t\\t0.760\\t0.740\\t0.845\\nKitchen\\t\\tDVD\\t\\t0.758\\t0.738\\t0.858\\nKitchen\\t\\tElectronics\\t0.815\\t0.850\\t0.853\\n Average\\t\\t\\t0.792\\t0.783\\t0.858\\n\\n\\n== Writing ==\\nWe added more detailed explanations and took figure 1 as an example to clarify the confusing parts in section 1.\\n\\n1. 
Layer-wise transfer learning: \\n\\u201cLayer-wise transfer learning\\u201d indicates that the approach represents the whole sentence by a single vector. So the transfer mechanism is only applied to the vector. \\n\\n2. Cell-level transfer learning:\\nART uses cell-level information transfer, which means each cell is affected by the transferred information. For example, in figure 1, the state of \\u201chate\\u201d in the target domain is affected by \\u201csometimes\\u201d and \\u201dhate\\u201d in the source domain. \\n\\n3. Collocate: \\nWe use the term \\u201ccollocate\\u201d to indicate that a word's semantics can have long-term dependency on another word. To understand a word in the target domain, we need to precisely capture and represent its collocated words from the source domain. We learn from the collocated words via the attention mechanism. For example, in figure 1, \\u201chate\\u201d is modified by the adverb \\u201csometimes\\u201d, which implies the act of hating is not serious. But \\u201csometimes\\u201d in the target domain is trained insufficiently. We need to transfer the semantics of \\u201csometimes\\u201d.\"}",
"{\"title\": \"Good empirical results on transfer learning; writing could be clearer\", \"review\": \"== Quality of results ==\\nThis paper's empirical results are its main strength. They evaluate on a well-known benchmark for transfer learning in text classification (the Amazon reviews dataset of Blitzer et al 2007), and improve by a significant margin over recent state-of-the-art methods. They also evaluate on several sequence tagging tasks and achieve good results.\\n\\nOne weakness of the empirical results is that they do not compare against training a model on the union of the source and target domain. I think this is very important to compare against.\", \"note\": \"the authors cite a paper in the introduction \\\"Hierarchical Attention Transfer Network for Cross-domain Sentiment\\nClassification\\\" (Li et al 2018) which also achieves state of the art results on the Amazon reviews dataset, but do not compare against it. At first glance, Li et al 2018 appear to get better results. However, they appear to be training on a larger amount of data for each domain (5600 examples, rather than 1400). It is unclear to me why their evaluation setup is different, but some clarification about this would be helpful.\\n\\n== Originality ==\", \"a_high_level_description_of_their_approach\": \"1. Train an RNN encoder (\\\"source domain encoder\\\") on the source domain\\n2. On the target domain, encode text using the following strategy:\\n - First, encode the text using the source domain encoder\\n - Then, encode the text using a new encoder (a \\\"target domain encoder\\\") which has the ability to attend over the hidden states of the source domain encoder at each time step of encoding.\\n\\nThey also structure the target domain encoder such that at each time step, it has a bias toward attending to the hidden state in the source encoder at the same position.\\n\\nThis has a similar flavor to greedy layer-wise training and model stacking approaches. 
In that regard, the idea is not brand new, but feels well-applied in this setting.\\n\\n== Clarity ==\\nI felt that the paper could have been written more clearly. The authors set up a comparison between \\\"transfer information across the whole layers\\\" vs \\\"transfer information from each cell\\\" in both the abstract and the intro, but it was unclear what this distinction was referring to until I reached Section 4.1 and saw the definition of Layer-Wise Transfer.\\n\\nThroughout the abstract and intro, it was also unclear what was meant by \\\"learning to collocate cross domain words\\\". After reading the full approach, I see now that this simply refers to the attention mechanism which attends over the hidden states of the source domain encoder.\\n\\n== Summary ==\\nThis paper has good empirical results, but I would really like to see a comparison against training a model on the union of the source and target domain. I think superior results against that baseline would increase my rating for this paper.\\n\\nI think the paper's main weakness is that the abstract and intro are written in a way that is somewhat confusing, due to the use of unconventional terminology that could be replaced with simpler terms.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper proposed to use RNN/LSTM with collocation alignment as a representation learning method for transfer learning/domain adaptation in NLP.\", \"review\": \"The proposed method is suitable for many NLP tasks, since it can handle sequence data.\\n\\nI find it difficult to follow the model descriptions. Perhaps more descriptive figures would make this easier to follow. I feel that the ART model is very straightforward and can be described in a much simpler and less exhausting (sorry for the strong word) way; while there is nothing wrong with being as elaborate as you are, I feel that all those details belong in an appendix. \\nCan you please explain the exact learning process?\\nI didn\\u2019t fully understand the exact way of collocations: you first train on the source domain and then use the trained source network when training in the target domain with all the collocated words for each training example? I deeply encourage you to improve the model section for future readers. 
\\nIn contrast to the model section, the related work and the experimental settings sections are very thin.\\nThe experimental setup for the sentiment analysis experiments is quite unusual in the transfer learning/domain adaptation landscape; having an equal amount of labeled data from both the source and target domains is not very realistic in my humble opinion.\\nA more realistic setup is unsupervised domain adaptation (like in the DANN and MSDA-DAN papers) or minimally supervised domain adaptation (like you did in your POS and NER experiments).\\n\\nIn addition to the LSTM baseline (which is trained with target data only), I think that an LSTM which is trained on data from both the source and target domains is required to truly understand ART's gains \\u2013 this goes for the POS and NER tasks as well.\\nThe POS and NER experiments can use some additional baselines for further comparison, for example:\", \"http\": \"//www.aclweb.org/anthology/N18-1112\", \"https\": \"//openreview.net/pdf?id=rk9eAFcxg\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reasonable idea but the technical details are quite unclear\", \"review\": \"This paper presents the following approach to domain adaptation. Train a source domain RNN. While doing inference on the target domain, first you run the source domain RNN on the sequence. Then while running the target domain RNN, set the hidden state at time step i, h^t_i, to be a function 'f' of h^t_{i-1} and information from the source domain \\\\psi_i; \\\\psi_i is computed as a convex combination of the state of the source domain RNN, h^s_{i}, and an attention-weighted average of all the states h^s_{1...n}. So in effect, the paper transfers information from each of the source domain cells -- the cell at time step i and all the \\\"collocated\\\" cells (collocation being defined in terms of attention). This idea is then extended in a straightforward way to LSTMs as well.\\n \\nDoing \\\"cell-level\\\" transfer enables more information to be transferred according to the authors, but it comes at a higher computational cost since we need to do O(n^2) computations for each cell.\\n\\nThe authors show that this beats a variety of baselines for classification tasks (sentiment), and for a sequence tagging task (POS tagging over Twitter).\", \"pros\": \"1. The idea makes sense and the experimental results are solid\", \"cons\": \"1. Some questions around generalization are not clearly answered. E.g. how are the transfer parameters of function 'f' (that controls how much source information is transferred to the target) trained? If the function 'f' and the target RNN are trained on target data, why does 'f' not overfit to only selecting information from the target domain? Would something like dropping information from the target domain help?\\n\\n2. Why not also compare with a simple algorithm of transferring parameters from the source to the target domain? Another simple baseline is to just train the final prediction function (softmax or sigmoid) on the concatenated source and target hidden states. 
Why are these not compared with? Also, including the performance of simple baselines like word2vec/bow is always a good idea, especially on the sentiment data which is very commonly used and widely cited. \\n\\n3. Experiments: the authors cite the hierarchical attention transfer work of Li et al (https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16873/16149) and claim their approach is better, but do not compare with them in the experiments. Why?\", \"writing\": \"The writing is quite confusing at places and is the biggest problem with this paper. E.g.\\n\\n1. The authors use the word \\\"collocated\\\" everywhere, but it is not clear at all what they mean. This makes the introduction quite confusing to understand. I assumed it to mean words in the target sentences that are strongly attended to. Is this correct? However, on page 4, they claim \\\"The model needs to be evaluated O(n^2) times for each sentence pair.\\\" -- what is meant by sentence pair here? It almost leads me to think that they consider all source sentence and target sentences? This is quite confusing. \\n\\n2. The authors keep claiming that \\\"layer-wise transfer learning mechanisms lose the fine-grained cell-level information from the source domain\\\", but it is not clear exactly what do they mean by layer-wise here. Do they mean transferring the information from source cell i to target cell i as it is? In the experiments section on LWT, the authors claim that \\\"More specifically, only the last cell of the RNN layer transfers information. This cell works as in ART. LWT only works for sentence classification.\\\" Why is it not possible to train a softmax over both the source hidden state and the target hidden state for POS tagging?\", \"nits\": \"\", \"page_4_line_1\": \"\\\"i'th cell in the source domain\\\" -> \\\"i'th cell in the target domain\\\". 
\\\"j'th cell in target\\\" -> \\\"j'th cell in source\\\".\", \"revised\": \"increased score after author response.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJgOl3AqY7 | Modulated Variational Auto-Encoders for Many-to-Many Musical Timbre Transfer | [
"Adrien Bitton",
"Philippe Esling",
"Axel Chemla-Romeu-Santos"
] | Generative models have been successfully applied to image style transfer and domain translation. However, there is still a wide gap in the quality of results when learning such tasks on musical audio. Furthermore, most translation models only enable one-to-one or one-to-many transfer by relying on separate encoders or decoders and complex, computationally-heavy models. In this paper, we introduce the Modulated Variational auto-Encoders (MoVE) to perform musical timbre transfer. First, we define timbre transfer as applying parts of the auditory properties of a musical instrument onto another. We show that we can achieve and improve this task by conditioning existing domain translation techniques with Feature-wise Linear Modulation (FiLM). Then, by replacing the usual adversarial translation criterion by a Maximum Mean Discrepancy (MMD) objective, we alleviate the need for an auxiliary pair of discriminative networks. This allows a faster and more stable training, along with a controllable latent space encoder. By further conditioning our system on several different instruments, we can generalize to many-to-many transfer within a single variational architecture able to perform multi-domain transfers. Our models map inputs to 3-dimensional representations, successfully translating timbre from one instrument to another and supporting sound synthesis on a reduced set of control parameters. We evaluate our method in reconstruction and generation tasks while analyzing the auditory descriptor distributions across transferred domains. We show that this architecture incorporates generative controls in multi-domain transfer, yet remaining rather light, fast to train and effective on small datasets. | [
"Musical Timbre",
"Instrument Translation",
"Domain Translation",
"Style Transfer",
"Sound Synthesis",
"Musical Information",
"Deep Learning",
"Variational Auto-Encoder",
"Generative Models",
"Network Conditioning"
] | https://openreview.net/pdf?id=HJgOl3AqY7 | https://openreview.net/forum?id=HJgOl3AqY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1l3l3CmeE",
"SyxRA2-1JV",
"r1gzlZCTCm",
"SkgfU-gw0m",
"rklb2xgmAX",
"BJeCRTJ707",
"SyltxKsTpm",
"ryxdk0Yaa7",
"SJxnP8_eam",
"HJgaRiJcnm",
"Ske_2A9d3m",
"HyeLrre8jQ",
"rJxESNauq7",
"BJlgYBaQ9m",
"rkg6uDbMqQ",
"rJll5K8W97"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544969203993,
1543605461669,
1543524585732,
1543074122096,
1542811817318,
1542811093900,
1542465776662,
1542458847753,
1541600868268,
1541172181362,
1541086896323,
1539863870380,
1538999356173,
1538671991661,
1538557812742,
1538513287799
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1088/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1088/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1088/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1088/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1088/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1088/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1088/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a VAE-based model which is able to perform musical timbre transfer.\\n \\nThe reviewers generally find the approach well-motivated. The idea to perform many-to-many transfer within a single architecture is found to be promising. However, there have been some unaddressed concerns, as detailed below. \\n\\nR3 has some methodological concerns regarding negative transfer and asks for a more extended experimental section. R1 and R2 ask for more interpretable results and, ultimately, a more conclusive study. R2 specifically finds the results to be insufficient.\\n\\nThe authors have agreed with some of the reviewers' feedback but have left most of it unaddressed in a new revision. That could be because some of the recommendations require significant extra work.\\n\\nGiven the above, it seems that this paper needs more work before being accepted at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good motivation but important reviewers' concerns remain unaddressed\"}",
"{\"title\": \"answer to the review\", \"comment\": \"Thank you for the constructive remarks.\\n\\nAccordingly, we are reconsidering how we compute scores, and in particular scaling the transfer objectives (MMD, KNN, Energy Statistics) with scores on different partitions of the source/target domains against reference batches of the target domain.\\n\\nWe may also consider some sample-to-sample evaluations restricted to a common tessitura, e.g. transferring a note from one domain to another and comparing it with the target domain sample that has the same note class.\\n\\nYou are also correct in pointing out that the formula describes a mixture of RBF kernels with different bandwidths.\"}",
"{\"title\": \"Re: author response\", \"comment\": \"Re: Scores and bolding\\n\\nIt's clear that you intend to bold the \\\"best\\\" score, but sorting by mean RMSE (or whatever metric you choose) isn't particularly robust, since it ignores the variance of each method. I'm skeptical that differences in the third decimal place are really significant, and some kind of statistical test for significance in distinguishing from the \\\"best\\\" seems warranted here (paired t-test, Wilcoxon signed-rank, something).\\n\\nFor the MMD and KNN metrics, as I suggested initially, you might look at the scores produced by generating random partitions of samples from the target distribution. This would at least give a lower bound on the scores you could hope to achieve with a good approximation of the target distribution.\", \"re\": \"RBF kernel, your definition seems to be a sum of RBF kernels with different bandwidths, not an RBF kernel properly. This should be clarified in the text.\"}",
"{\"title\": \"answer to the reviewer comment\", \"comment\": \"Thank you for the further discussion.\\n\\nSo far our models were either non-conditional or conditional on both encoder and decoder.\\nModels without the pitch information could train well but did not generalize as well.\\nThey tended to reconstruct with some transposition or on the wrong octave.\\n\\nI am considering experimenting with having a non-conditional state inside the conditional models.\\nI am also considering conditioning only the decoder, if that is what you refer to as only having a pitch-conditional generative model. It would indeed offer a larger application potential at encoding unlabelled audio while offering the generative control.\\n\\nRegarding the latent space dimensionality, I will add more details on the impact of lowering it down to 3. I have also worked on an alternative WAE-MMD regularization instead of KLD, which would be more flexible; the lowered performance of the 3D latent space may be balanced with a larger prior variance or a more efficient prior choice.\\n\\nI will improve my experiment, based on the constructive remarks I received from you and from AnonReviewer1, and will submit the newer version to later conferences.\"}",
"{\"title\": \"answer to the review\", \"comment\": \"Thank you for your review; below we answer the points that were questioned.\\n\\n* Missing implementation steps and optimization details:\\nIn addition to implementation details, the appendix has a rather detailed table of the architecture parameters. Moreover, we will ultimately release the code on GitHub.\\n\\n* Non-matched experiment to practice environment:\\nThe evaluation of generative models and unsupervised domain translations remains an open question, even less covered in the field of sound. We have not yet applied our models to datasets previously covered in the related works, such as Nsynth; this is planned and would give some more direct comparisons.\\n\\n* How to avoid the negative knowledge transfer:\\nAs we defined our purpose, the resulting generation is a blending of both domains that renders a target timbre while retaining some of the input features. It amounts to note class (that is explicitly controlled for the note-conditional model states) together with timbre. We plan experiments on controlling the amount of timbre transfer between the input and target domains.\"}",
"{\"title\": \"response to authors' comments\", \"comment\": \"Thank you. I have a few more comments:\", \"regarding_pitch_label_extraction\": \"if this is necessary anyway, then why not just train a pitch-conditional generative model? What is the benefit of additionally conditioning the model on the original audio, in this case? I still think it defeats the purpose of the \\\"transfer\\\" aspect a bit. Have you assessed at all how much the model actually relies on being conditioned on the original audio? Perhaps it is already largely ignoring it, and just using the pitch label to know what to generate.\", \"regarding_the_3d_latent_space\": \"this is a reasonable argumentation and it would be good to make this more clear in the paper itself. Still, some comparison experiments with different latent space dimensionalities would be useful to demonstrate that 3 is enough.\", \"regarding_claims_about_running_time\": \"it is definitely worth stating that the results you report can be achieved in less than a day on a single Tesla V100 GPU. But the claim in the paper was specifically about other strategies than the chosen one taking much longer, and this is not corroborated, so it would be good to reformulate this.\"}",
"{\"title\": \"answer to the review\", \"comment\": \"Thank you for your detailed review and the constructive comments on our work. We note the remarks on the paper writing, which we will correct, and answer below the main points that were commented on.\\n\\n* In-depth evaluation of MoVE and comparison of with/without conditioning:\\nWe agree, and this was also pointed out by 'AnonReviewer2'; we are working on new incremental benchmarks, more detailed on both one-to-one and many-to-many models. Moreover, the need for pitch/octave conditioning limits the applicability of our model to transfer only on audio carrying such note features. Hence we trained models without a conditioning mechanism and, as answered to 'AnonReviewer2', we are planning experiments on models which are conditional but integrate an unconditioned state to be trained in parallel with the note-conditional state.\\n\\n*** Interpretability of the generative scores:\\nWe agree on this remark; the idea of scaling scores is right and would improve the interpretability of our benchmarks. For that purpose, we should define a set of reference scores as you recommended.\\n\\n* Incomplete definition of the metrics:\\nWe gave references to the papers that introduced such metrics. Discussing a set of reference scores should also come with a better explanation of these.\\n\\n* Criteria for bolding: we intended to highlight the best scores\\n\\n*** Pairing generated and real examples by instrument and note to compare:\\nIn addition to the spectral descriptor distribution plots, we used sample-specific scatter plots to visualize how the transfer maps them individually. On the overlap of each instrument tessitura, we can make such a pairing. We can also transfer and transpose to the target instrument tessitura if needed. 
The question remains of which metric can be used here to evaluate generation at the sample level, as our model does not aim at reconstructing a hypothetical corresponding sample in the target domain but rather at blending in features from the other domain so that it sounds like the input note (pitch, octave but also some dynamics/style qualities relative to the input instrument) played by the target instrument. We later aim at experimenting on mechanisms to control the amount of target feature blending in the process of transfer.\\n\\n* Invertible ? Decodable ? Approximate inversion ?\\nWe agree that the current state of the research should be stated as using approximate spectrogram inversion.\\nWe plan on replacing the slow iterative spectrogram inversion with Griffin-Lim by faster decoding with Multi-head Convolutional Neural Networks, arXiv:1808.06719, Sercan O. Arik et al.\\n\\n*** Definition of the RBF kernel:\\nThe summation is over the alpha parameter, which can be a list of n values (or a single float value). The trainings were done with n=3 and alpha=[1. , 0.1 , 0.05]. Depending on the kernel and bandwidth definitions, we may link both as\\nalpha = 1 / (2 x bandwidth**2).\\n\\n* Calculation of reconstruction errors:\\nAll scores are computed on NSGT magnitude spectrogram slices. No evaluation (except listening) is done on the time-domain waveforms.\\n\\nThe points marked with *** are highlighted as we would gratefully receive further remarks from your review.\\nHow would you recommend making reference scores for the MMD/kNN evaluations ?\\nHow would you recommend comparing pairs of generated and ~ corresponding target domain samples ? (at the sample level)\\nIs the definition of the RBF kernel correct to you given that clarification (that should be added to the paper) ?\\n\\nThanks again for the interesting feedback !\"}",
"{\"title\": \"answer to the review\", \"comment\": \"Thank you for the detailed review and constructive remarks.\\nBelow are answers to the main points that were commented on, as well as updates on the current work.\\n\\n* Sound quality is disappointing and with artifacts:\\nWe are working on Fast Spectrogram Inversion using Multi-head Convolutional Neural Networks, arXiv:1808.06719, Sercan O. Arik et al. to replace Griffin-Lim inversion ; two possible improvements we expect are much faster (towards real-time) sound rendering and better audio quality.\\nWe are also working on mini-batch MMD latent regularization (Wasserstein-AE) instead of per-sample KLD regularization (VAE) which may result in improved generalization power and generative quality.\\n\\n* Not suited to transfer from audio without label:\\nIf the audio carries note information, it can be easily/automatically extracted in the form of pitch tracks, as we did for transferring on instrument solos. Some audio data do not have note qualities, which are out of the current training setting. For that we have been training unconditioned one-to-one models or solely instrument-conditional many-to-many models that do not require any note information. But we are working on models which incorporate an unconditioned processing option (eg. training while zeroing the one-hot conditioning or adding an entry in the input embedding of FiLM which is the unconditional state) to be trained on a dataset that mixes conditional and non-conditional audio (eg. 
adding instrument solo sections which in parts have a clear pitch track and in others none).\\n\\n* A fully convolutional model would process arbitrary lengths of audio:\\nWe use the linear layers to set the latent space dimensionality; when processing variable-length audio sequences, each encoding amounts to about 120ms of context and we resynthesize with overlap-add that mirrors the short-term input analysis ; this process was used when transferring on the instrument solos (a task that was beyond the training setting).\\n\\n* Insufficient justification of the 3D latent space:\\nAt first we validated that our models could perform well in terms of training/test spectrogram reconstructions with only 3 latent dimensions; some reasons that we found interesting for enforcing this are more related to a possible music/creative application of the model: fewer synthesis/control parameters for the user (and controls which may then be more expressive), direct visualization of the latent space, which is turned into a 3D synthesis space from which users may draw and decode sound paths or create other interaction schemes, and a denser latent space that may be better suited for random sampling/interpolations. The direct interaction with the 3D latent space becomes even more interesting when we pipeline our model with fast-spectrogram inversion.\\n\\n* Interesting incremental comparison in one-to-one transfers:\\nWe keep working on more detailed benchmarks/comparisons that would equally cover one-to-one and many-to-many model variations and that would integrate the new features we are testing.\\n\\n* All claims about running time should be corroborated by controlled experiments:\\nIndeed we didn\\u2019t yet benchmark our models on NSynth, and our approach differs from others such as Mor et al. that report using \\u00ab\\u00a0eight Tesla V100 GPUs for a total of 6 days\\u00a0\\u00bb. From the beginning of our experiment we aim at a much lighter-weight system that could be trained/used more broadly (eg. 
with a single mid-range GPU). The computational cost difference is not rigorously estimated on the same given dataset/task to learn, but still we think it is relevant to point out that the results we report can be achieved in less than a day on a single Tesla V100 GPU.\\n\\n* Why does the MMD version constitute an improvement? Or is it simply more stable to train?\\nIt is more stable to train, it does not require the extra \\u2018cost\\u2019 of training an auxiliary network and it can generalize to many-to-many transfer without requiring as many adversarial networks. About the significance of score differences, we agree that it needs more details and comparisons; it was also noted by \\\"AnonReviewer1\\\" and we should make alternative tests to scale or give a few more references to the benchmark.\\n\\n* \\\"FILM-poi\\\" .. is this a typo ?\\nThank you for pointing this out, as well as for your other remarks on the writing and use of precise terms/phrases. Indeed this is right, we mixed poi/pod, but both refer to many-to-many conditioning on pitch+octave+instrument/domain classes.\\n\\nWe also thank you for pointing out more literature to improve our references and discussions of related works.\"}
"{\"title\": \"Interesting but hard to read due to the confusing introduction\", \"review\": \"The authors proposed Modulated Variational auto-Encoders (MoVE) to perform musical timbre transfer. The authors define timbre transfer as applying parts of the auditory properties of a musical instrument onto another. It replaces the usual adversarial translation criterion by a Maximum Mean Discrepancy (MMD) objective. By further conditioning the system on several different instruments, the proposed method can generalize to many-to-many transfer within a single variational architecture able to perform multi-domain transfers.\\nSome detailed comments are listed as follows:\\n1 The implementation steps of the proposed method (MoVE) are not clear. Some details are missing, which makes the work hard for other researchers to reproduce.\\n2 The experimental settings are not reasonable. The current experimental settings do not match the practical environment. \\n3 The proposed method can transfer positive knowledge. However, some negative knowledge can also be transferred. So how can negative transfer be avoided? \\n4 For the model, the optimization or inference details are missing, which are important for the proposed model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}
"{\"title\": \"Interesting and well written, but the evaluation is difficult to interpret\", \"review\": \"Summary\\n-------\\nThis paper describes a model for musical timbre transfer which builds on recent developments in domain- and style transfer.\\nThe proposed method is designed to be many-to-many, and uses a single pair of encoders and decoders with additional conditioning inputs to select the source and target domains (timbres).\\nThe method is evaluated on a collection of individual note-level recordings from 12 instruments, grouped into four families which are used as domains.\\nThe method is compared against the UNIT model under a variety of training conditions, and evaluated for within-domain reconstruction and transfer accuracy as measured by maximum mean discrepancy.\\nThe proposed model seems to improve on the transfer accuracy, with a slight hit to reconstruction accuracy.\\nQualitative investigation demonstrates that the learned representation can approximate several coarse spectral descriptors of the target domains.\\n\\n\\nHigh-level comments\\n-------------------\\nOverall, this paper is well written, and the various design choices seem well-motivated.\\n\\nThe empirical comparisons to UNIT are reasonably thorough, though I would have preferred more in-depth evaluation of the MoVE model as well. Specifically, the authors introduced an extra input (control) to encode the pitch class and octave information during encoding. I infer that this was necessary to achieve good performance, but it would be instructive to see the results without this additional input, since it does in a sense constitute a form of supervision, and therefore limits the types of training data which can be used.\\n\\nWhile I understand that quantifying performance in this application is difficult, I do find the results difficult to interpret. 
Some of this comes down to incomplete definition of the metrics (see detailed comments below).\\nHowever, the more pressing issue is that evaluation is done either sample-wise within-domain (reconstruction), or distribution-wise across domains (transfer). The transfer metrics (MMD and kNN) are opaque to the reader: for instance, in table 1, is a knn score of 43173 qualitatively different from 43180? What is the criterion for bolding here? It would be helpful if these scores could be calibrated in some way, e.g., with reference to\\nMMD/KNN scores of random partitions of the target domain samples.\\n\\nSince the authors do have additional information here for each sample (notes), it would be possible to pair generated and real examples by instrument and note, rather than (in addition to) unsupervised, feature-space pairing by MMD. This could provide a slightly stronger version of the comparison in Figure 3, which shows that the overall distribution of spectral centroids is approximated by transfer, but does not demonstrate per-sample correspondence.\\n\\n\\n\\nDetailed comments\\n-----------------\\nAt several points in the manuscript, the authors refer to \\\"invertible\\\" representations (e.g., page 4, just after eq. 1), but it seems like what they mean is approximately invertible or decodable. It would be better if the authors were a little more careful in their use of terminology here.\\n\\nIn the definition of the RBF kernel (page 4), why is there a summation? \\n What does this index? How are the kernel bandwidths defined?\", \"how_exactly_are_reconstruction_errors_calculated\": \"using the NSGT magnitude representation, or after resynthesis in the time domain?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}
"{\"title\": \"Nice idea but falls short of what it promises\", \"review\": \"This work proposes a hybrid VAE-based model (combined with an adversarial or maximum mean discrepancy (MMD) based loss) to perform timbre transfer on recordings of musical instruments. Contrary to previous work, a single (conditioned) decoder is used for all instrument domains, which means a single model can be used to convert any source domain to any target domain.\\n\\nUnfortunately, the results are quite disappointing in terms of sound quality, and feature many artifacts. The instruments are often unrecognisable, although with knowledge of the target domain, some of its characteristics can be identified. The many-to-many results are clearly better than the pairwise results in this regard, but in the context of musical timbre transfer, I don't feel that this model successfully achieves its goal -- the results of Mor et al. (2018), although not perfect either, were better in this regard.\", \"i_have_several_further_concerns_about_this_work\": [\"The fact that the model makes use of pitch class and octave labels also raises questions about applicability -- if I understood correctly, transfer can only be done when this information is present. I think the main point of transfer over a regular generative model that goes from labels to audio is precisely that it can be done without label information.\", \"The use of fully connected layers also implies that it requires fixed-length input, so windowing and stitching are necessary for it to be applied to recordings of arbitrary length. Why not train a convolutional model instead?\", \"I think the choice of a 3-dimensional latent space is poorly justified. Why not use more dimensions and project them down to 3 for visualisation and interpretation purposes with e.g. PCA or t-SNE? 
This seems like an unnecessary bottleneck in the model, and could partly explain the relatively poor quality of the results.\", \"I appreciated that the one-to-one transfer experiments are incremental comparisons, which provides valuable information about how much each idea contributes to the final performance.\", \"Overall, I feel that this paper falls short of what it promises, so I cannot recommend acceptance at this time.\"], \"other_comments\": [\"In the introduction, an adversarial criterion is referred to as a \\\"discriminative objective\\\", but \\\"adversarial\\\" (i.e. featuring a discriminator) and \\\"discriminative\\\" mean different things. I don't think it is correct to refer to an adversarial criterion as discriminative.\", \"Also in the introduction, it is implied that style transfer constitutes an advance in generative models, but style transfer does not make use of / does not equate to any generative model.\", \"Some turns of phrase like \\\"recently gained a flourishing interest\\\", \\\"there is still a wide gap in quality of results\\\", \\\"which implies a variety of underlying factors\\\", ... are vague / do not make much sense and should probably be reformulated to enhance readability.\", \"Introduction, top of page 2: should read \\\"does not learn\\\" instead of \\\"do not learns\\\".\", \"Mor et al. (2018) do actually make use of an adversarial training criterion (referred to as a \\\"domain confusion loss\\\"), contrary to what is claimed in the introduction.\", \"The claim that training a separate decoder for each domain necessarily leads to prohibitive training times is dubious -- a single conditional decoder would arguably need more capacity than each individual separate decoder model. 
I think all claims about running time should be corroborated by controlled experiments.\", \"I think Figure 1 is great and helps a lot to distinguish the different domain translation paradigms.\", \"I found the description in Section 3.1 a bit confusing as it initially seems that the approach requires paired data (e.g. \\\"matching samples\\\").\", \"Section 3.1, \\\"amounts to optimizing\\\" instead of \\\"amounts to optimize\\\"\", \"Higgins et al. (2016) specifically discuss the case where beta in formula (1) is larger than one. As far as I can tell, beta is annealed from 0 to 1 here, which is an idea that goes back to \\\"Generating Sentences from a Continuous Space\\\" by Bowman et al. (2016). This should probably be cited instead.\", \"\\\"circle-consistency\\\" should read \\\"cycle-consistency\\\" everywhere.\", \"MMD losses in the context of GANs have also been studied in the following papers:\", \"\\\"Training generative neural networks via Maximum Mean Discrepancy optimization\\\", Dziugaite et al. (2015)\", \"\\\"Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy\\\", Sutherland et al. (2016)\", \"\\\"MMD GAN: Towards Deeper Understanding of Moment Matching Network\\\", Li et al. (2017)\", \"The model name \\\"FILM-poi\\\" is only used in the \\\"implementation details\\\" section, it doesn't seem to be referred to anywhere else. Is this a typo?\", \"The differences between UNIT (GAN; C-po) and UNIT (MMD; C-po) in Table 1 seem very small and I'm not convinced that they are significant. Why does the MMD version constitute an improvement? Or is it simply more stable to train?\", \"The descriptor distributions in Figure 3 don't look like an \\\"almost exact match\\\" to me (as claimed in the text). There are some clearly visible differences. 
I think the wording is a bit too strong here.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Repository progress\", \"comment\": \"Hello everyone,\\nSince the submission, the repository has been developed.\\nAudio examples, visualisations and animations are detailed here:\", \"https\": \"//github.com/anonymous124/iclr2019MoVE/blob/master/docs/index.md\\n\\nThank you for your interest; work is still ongoing and more will be uploaded throughout the next weeks.\"}
"{\"title\": \"thanks\", \"comment\": \"Thank you for the positive and constructive feedback !\\nIndeed, WAEs are of interest, even though I have only worked with KLD latent regularization so far (here the MMD is applied to data distributions rather than latents as in WAEs).\\nI didn't mention them but I agree that it would be better to at least add them to the related works and references.\\n\\nThe main advantage reported compared to VAEs is their less blurry outputs in the case of image generation.\\nIt might also benefit sound synthesis, but might be less crucial as spectrogram inversion is applied to the raw network outputs, which itself already introduces some posterior approximations (and audio artifacts).\\n\\nAbout this, for later experiments, I may consider replacing the use of Griffin-Lim by neural network spectrogram inversion, possibly using Wavenet Vocoder [1] or MCNN [2].\\n\\n[1] Wei Ping et al. \\\"DEEP VOICE 3: SCALING TEXT-TO-SPEECH WITH CONVOLUTIONAL SEQUENCE LEARNING\\\"\\n[2] Sercan Arik et al. \\\"Fast Spectrogram Inversion using Multi-head Convolutional Neural Networks\\\"\\n\\nI wonder if this should also be added to the final paper (if selected).\"}
"{\"comment\": \"Very detailed and impressive work !\\n\\nI would just like to point out the previous series of work from Ilya Tolstikhin, Olivier Bousquet et al. on the use of MMD penalizations for both GAN or auto-encoders approaches, which could probably be added after the reference to Gretton et al.'s work, or rather as a comparative alternative to GANs (although your use differs greatly from theirs), see [1], [2] or [3] for instance.\\n\\n[1] From optimal transport to generative modeling: the VEGAN cookbook (Bousquet et al., 2017)\\n[2] Wasserstein auto-encoders (Tolstikhin et al., 2018) - presented at ICLR 2018\\n[3] On the Latent Space of Wasserstein Auto-Encoders (Rubenstein et al., 2018)\", \"title\": \"extra bibliography\"}",
"{\"title\": \"Github repo under construction\", \"comment\": \"(EDIT: some examples of transfer have been uploaded in the meantime; you may have a look at the solo_transfers directory in the repository and hear a Violin solo transferred to Alto-Saxophone and, conversely, an Alto-Saxophone solo transferred to Violin)\\n\\nThat is right, the Github repo is empty at the moment.\\n\\nWe are currently working on the content and code; by the end of next week we will have prepared and put online most of the audio examples, demonstrations and visualisations. Code and new results will follow.\\n\\nThank you for pointing it out; we invite you to visit the repo again later.\"}
"{\"comment\": \"It seems that the github repo is empty.\", \"title\": \"The github repo is empty\"}"
]
} |
|
rJ4vlh0qtm | SSoC: Learning Spontaneous and Self-Organizing Communication for Multi-Agent Collaboration | [
"Xiangyu Kong",
"Jing Li",
"Bo Xin",
"Yizhou Wang"
] | Multi-agent collaboration is required by numerous real-world problems. Although distributed setting is usually adopted by practical systems, local range communication and information aggregation still matter in fulfilling complex tasks. For multi-agent reinforcement learning, many previous studies have been dedicated to design an effective communication architecture. However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity. Such design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available. Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme. By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way. Particularly, it enables each agent to spontaneously decide when and who to send messages based on its observed states. In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner. The agents also learn how to adaptively aggregate the received messages and its own hidden states to execute actions. Various experiments have been conducted to demonstrate that SSoC really learns intelligent message passing among agents located far apart. With such agile communications, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines. | [
"reinforcement learning",
"multi-agent learning",
"multi-agent communication",
"deep learning"
] | https://openreview.net/pdf?id=rJ4vlh0qtm | https://openreview.net/forum?id=rJ4vlh0qtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skekr57teV",
"Skl9IBL9A7",
"SJgGnWw92m",
"SkxIWUbt2Q",
"BJgUO5sOn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545316919031,
1543296337780,
1541202346397,
1541113342389,
1541089901958
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1087/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1087/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1087/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1087/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1087/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers raised a number of major concerns, including the incremental novelty of the proposed approach and the poor readability of the presented material (lack of sufficient explanations and discussions). The authors decided to withdraw the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}
"{\"title\": \"A Revised Version\", \"comment\": \"In the revised paper, we have updated a complete appendix which is missing in the original submission. Thanks.\"}",
"{\"title\": \"not clear about the originality\", \"review\": \"This paper proposes a spontaneous and self-organizing communication learning scheme in a multi-agent RL setup. The problem is interesting. I mainly have one concern regarding its originality.\\n\\nFrom a technical perspective, it's not clear to me that there's much novelty in this approach. I guess it might be the case that the focus of the paper is to propose a framework or scheme. However, almost all the ingredients/components are standard.\\n\\nRegarding clarity, it's not clear to me:\\n* how the structure from Figure 2 can be reproduced. \\n* how statistically significant the evaluation results are.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}
"{\"title\": \"Review\", \"review\": \"The paper presents a study on multi-agent communication. The main innovation from previous work is the introduction of an explicit Speak binary action that controls whether or not an agent will emit a message. The proposed model is tested on two tasks: multi-camera surveillance and battle tasks with a large population of agents.\\n\\nOverall, this paper is clear (although model details are missing, the authors point at the Appendix, but the Appendix is missing) and the authors compare to a number of baselines. I appreciate the use of multi-agent communication in cases where the number of agents is very large, as this is a very good stress test for current algorithms and can potentially help identify novel challenges. \\n\\nMy main concern is that in a collaborative setting I don't see why we should expect that occluding information is better than revealing information? Isn't always revealing everything the best strategy? With only 3 cameras, I really cannot think of why a model would get better performance by choosing to not reveal information. Figure 4b somewhat confirms that, as there seems to be a lot of variance in the Ssoc results. Have you checked how stable your results are across runs? I could believe that occluding information can be beneficial if the number of agents is very big and there is redundancy. But in the limit, not revealing information should only facilitate training -- and indeed this seems to be happening in 6a as Ssoc is learning faster but meanfield is catching up. Could there be an ablation experiment in which everything stays the same in the model but the agents always activate the Speak action? This would answer the question of how crucial this main Speak feature is for Ssoc.\\n\\nCan you elaborate on this? 
Moreover, since the Appendix appears to be missing, can you comment on how stable results were across runs?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}
"{\"title\": \"Paper can be improved by adding more ablation studies around the architecture and doing a more thorough empirical evaluation.\", \"review\": \"Summary:\\nThis paper expands on the work on 'emergent communication' with 2 innovations: \\n- The architecture has a separate 'message channel' that processes the incoming and outgoing messages mostly independently of the hidden state of the agent. There are also dedicated architecture elements for the interaction between the hidden state and the message stream. \\n- The outgoing message is gated with a 'speak' action: only when the agent takes the speak-action at time step t is a message sent out at timestep t+1.\", \"comments_for_improvement\": \"-The paper proposes a rather complicated architecture, with many moving parts. In the paper's current form it is extremely hard to see which parts of this architecture contribute to the success of the method. A set of ablation studies on the different components would indeed be very helpful. \\n-Using the word 'thought' to describe the hidden state of the agent is rather distracting.\\n-Equation (1): This just seems to be the policy gradient term for a factorised action space across 'environment action' and 'communication action'. The only obvious difference is that the policy here is shown to condition on the state representation s_t, rather than on the input. Is that intended?\\n-The paper suffers from a lot of undefined notation, e.g. the s_t above. Please clarify.\\n-In Figure 2b) the MCU is shown to produce the action a_t as an output. That seems like a mistake. \\n-Figure 4): The results seem to be extremely unstable, which is a well-known issue for independent learning. Recent work (MADDPG, COMA) has shown that centralised critics can drastically avoid these instabilities and improve final performance. Did you compare against using a centralised critic, V(central state), rather than V(observation)? 
Also, using a single seed on this kind of unstable learning process renders the results highly inconclusive. \\n-In Figure (5), what are the red arrows? Do these correspond to the actual actions taken by the agents or are they simply annotations? It would be good to see how far the communication range is by comparison. Also, why is there a blob of 'communicating' agents far from the enemy? \\n-Are different methods in the large scale battle task trained in self-play and then pitted against other methods in a round-robin tournament after training has finished, or are they trained against each other? \\n-In Figure 6 (a), why are average rewards changing over the course of training? I would expect this to be a zero-sum setting in self-play. \\n-I couldn't find any supplementary material referenced in the text for the details. Instead the paper seems to have another copy of the paper itself attached in the pdf. This makes it hard to evaluate the paper given that few details around training are provided in the main text. \\n\\nOverall I am concerned that the learning method used in the paper (independent baseline) is known to be unstable and to produce poor results in the multi-agent setting (see COMA and MADDPG). This raises the concern that the communication channel is mostly useful for overcoming the issues introduced by having a decentralised critic.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}
]
} |
|
HJGven05Y7 | How to train your MAML | [
"Antreas Antoniou",
"Harrison Edwards",
"Amos Storkey"
] | The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem.Model Agnostic Meta Learning or MAML is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant and very powerful, however, it has a variety of issues, such as being very sensitive to neural network architectures, often leading to instability during training, requiring arduous hyperparameter searches to stabilize training and achieve high generalization and being very computationally expensive at both training and inference times. In this paper, we propose various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML, which we call MAML++. | [
"meta-learning",
"deep-learning",
"few-shot learning",
"supervised learning",
"neural-networks",
"stochastic optimization"
] | https://openreview.net/pdf?id=HJGven05Y7 | https://openreview.net/forum?id=HJGven05Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylATrFZeN",
"Byg5xz2rCX",
"Hkg2spUSRQ",
"H1eHVoEaaX",
"H1ewhME6TQ",
"BklkMRGpam",
"Skg14bX92X",
"rJg-0D-927",
"rJxOlkMPh7",
"HJgieSXZ5X",
"rJlF4Lf-97",
"HkgRQ8EAtQ",
"rke46mAatQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544816070212,
1542992369799,
1542970788288,
1542437676581,
1542435502656,
1542430214560,
1541185831233,
1541179337165,
1540984560090,
1538499826746,
1538496049253,
1538307622390,
1538282428344
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1086/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1086/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1086/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1086/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1086/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1086/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1086/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1086/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1086/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1086/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1086/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes several improvements for the MAML algorithm that improve its stability and performance.\", \"strengths\": \"The improvements are useful for future researchers building upon the MAML algorithm. The results demonstrate a significant improvement over MAML. The authors revised the paper to address concerns about overstatements.\", \"weaknesses\": \"The paper does not present a major conceptual advance. It would also be very helpful to present a more careful ablation study of the six individual techniques.\\nOverall, the significance of the results outweighs the weaknesses. However, the authors are strongly encouraged to perform and include a more detailed ablation study in the final paper. I recommend accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}
"{\"title\": \"2nd Response to Reviewer 1\", \"comment\": \"Thanks for your prompt response. I think your point about the _automation_ of things is correct. I will amend the paper to be more precise in that claim as per your request. Regarding the automation of additional parts of the system, I am currently working on that, but it felt like it exceeded the scope of this paper, hence breaking the work into smaller, easier-to-digest papers that tackle one thing at a time. In my experience, papers that try to do too many things at once are often incredibly hard to write, and even harder to read.\\n\\nI will modify the particular claim shortly. Thanks for your time.\"}
"{\"title\": \"response to authors\", \"comment\": \"< The alpha also includes a sign. >\\nOk that makes sense. It might be worth adding a sentence that says this (if there isn't one already).\\n\\n< Thus, random initialization suffices for that aspect, which does reduce the need for explicitly choosing a learning rate. >\\nOk, so you're saying that one of maml's hyperparameters is now a set of less-sensitive hyperparameters. That sounds useful, but it's very different from from the claim you make in the paper that maml++ gives \\\"automatic learning for most of the system\\u2019s hyperparameters\\\". There are two problems with this claim\\n\\n1.) As far as I see, the only thing you've _automated_ is the setting of the inner loop learning rate, and in so doing you added more hyperparameters that need to be set. It's good they're not so sensitive, but they still have to be set. It's also good that your settings make the system overall easier to optimize, but that's not the same as automation.\\n2.) You haven't gotten rid of \\\"most\\\" of the hyperparameters. There's still the outer loop learning rate and the other optimizer hyperparameters (e.g. \\\\beta_1 and \\\\beta_2 in Adam). In the most generous interpretation, you've made half of the hyperparameters less sensitive. Additionally, all of the architecture hyperparameters e.g. number of layers, number of units per layer, etc etc still need to be set by the user.\\n\\nOverall, this seems to me like a significant over-claiming issue. Replacing the language about \\\"automating most hyperparameters\\\" with something about \\\"reducing inner loop hyperparameter sensitivity\\\" would be sufficient.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for taking the time to review our paper. Before I start delving into the technical aspects of this response. To address your concerns, I will use an enumeration that matches the indexes of your concerns.\\nThe paper is indeed targeted towards a particular class of algorithms. That class being end-to-end differentiable gradient-based meta-learning. MAML and Meta-learner LSTM [1] are two instances of that particular class of algorithms. Our proposed techniques can be applied to any algorithm of that class, given that they utilize inner-loop optimization processes as part of their learning. So, even though this work is indeed targeted towards a particular class of models, that class is general enough and applicable to enough domains that we felt that an investigation of the type presented in this paper was necessary. In fact, the work in this paper was the result of the first author\\u2019s attempts to build systems that learn various other components (i.e. instead of just learning a highly adaptable parameter initialization, he was attempting to learn loss functions/update functions and dynamic generation of parameter initializations given a task among others). What he realized, however, was that MAML was really hard to actually work with, being very inflexible to architecture configuration, causing gradient degradation problems, instability in training and requiring lots of manual inner loop learning rate tuning. In attempting to fix those problems, so he could build on top more complicated systems, this paper came to be. \\nIn MAML, the resulting inference model is effectively an unrolled 5 layer network over N steps. If that N=5, then the resulting model has a depth of effectively 25 layers. In standard deep networks, gradient degradation can be greatly reduced or altogether removed via the usage of skip-connections. Since in MAML we can\\u2019t really apply skip-connections from a subsequent model to a previous one (because that would further complicate the gradients), we decided that the best way to inject clean/stable gradients to all iterations of the network would be to use 2 losses for each step-wise network. One loss, providing an implicit gradient, coming from subsequent iterations of the network (i.e. the original MAML loss), and another per-step loss, providing an explicit gradient, coming directly from evaluating the model on the target set. This way, every network iteration receives stable gradients which keep the network stable during the early epoch training. Eventually, the importance of earlier steps becomes 0, which means that the original MAML loss is used instead. However, since the network has already learned a stable parameterization, the stability remains throughout training (we empirically confirmed this).\\nWe conducted an ablation study on 20-way 1-shot Omniglot, as shown in table 2. We did want to conduct even more exhaustive ablation studies across all Omniglot and Mini-Imagenet tasks, however, due to computing constraints we had to restrict ourselves. Using the \\u201chardest\\u201d Omniglot 20-way 1-shot task as the ablation study\\u2019s subject seemed like a sensible thing to do since it was cheaper computationally, but \\u201chard\\u201d enough for the results to generalize well in other tasks.\\nIndeed, annealing various components is not as novel as some of the other proposals in the paper. However, since this paper was essentially an engineer\\u2019s handbook on how to train MAML-like models, we felt that people should be aware of the effect those techniques have on the system\\u2019s performance.\\nIndeed, there is other literature on meta-learning learning rates. Our approach\\u2019s novelty lies in learning \\u201cper-step\\u201d and \\u201cper-layer\\u201d learning rates. By being able to learn per step learning rates, we allow the network to choose to decrease or increase it\\u2019s learning rates at each step, to minimize overfitting. Another interesting phenomenon, that we will address in a future blog post, is the fact that across all networks, we noticed that particular layers choose to \\u201cun-learn\\u201d (flipping the direction of the learning rate) at particular steps. We theorize that the network might be attempting to remove some existing knowledge to replace it with new knowledge, or using forgetting as a way to steer gradients for more efficient learning.\\n\\nRegarding the minor concerns, yes, we will fix the referencing inconsistencies and the batch size indexing problem.\\n\\nOnce again, I want to thank you for taking the time to review our work.\\n\\n1. Ravi, S. and Larochelle, H. (2016). Optimization as a model for few-shot learning.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for taking the time to review our paper. Further thanks for your very detailed, useful and constructive comments. We will now address your concerns below in the same order they were made:\\n\\nWe claim that we reduce the hyperparameter choices needed because once our methodologies are applied exactly as proposed, the resulting system will achieve very high generalization and fast convergence without any additional tuning. We have attempted to initialize the learning rates from a random uniform distribution (ranging from 0.1 to 0.01) in addition to initializing manually. Both methods, interestingly, converge to very similar learning rates. Thus, random initialization suffices for that aspect, which does reduce the need for explicitly choosing a learning rate.\\nRegarding the gradient directions. The alpha also includes a sign. So, in other words, the alpha also learns the direction of the learning rate, hence our claim. In fact, an interesting finding is that, in specific steps and layers, the network chooses to \\u201cunlearn\\u201d or flip the sign of the learning rate. Further investigation is required to understand this behavior, but a current working hypothesis is that the network is trying to \\u201cforget\\u201d particular parts of its weights, which somehow produces more efficient learning, in subsequent steps. We will further expand on this in a future blog post. \\n\\nAll of your suggestions and typo-locations are spot-on and we will take care to address all of those in the final version of the paper. Again, we really thank you for providing such a detailed and constructive review.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your review.\\n\\nRegarding the conceptual and technical novelty concerns.\\n\\nTo clarify, our main contribution comes in the form of carrying an investigation on how MAML can be stabilized and how the model can be modified such that it can consistently achieve faster convergence and strong generalization results without any hyperparameter tuning required. Then, once the investigation is completed and key problem-areas isolated, we use our investigation insights to improve the system. In fact, the whole reason for doing this was because we attempted to built new research ideas on top of MAML only to find out just how sensitive and unstable the system was. Therefore, we decided that finding the issues and fixing them would enable researchers working on gradient-based end-to-end meta-learning, such as MAML or Meta Learner LSTM [1] to concentrate on the new approach they want to build rather than trying to overcome instability issues of the base methodology. Furthermore, the industry would also benefit from this, as they would have an easier time training MAML based models. \\n\\nMost of the proposed approaches are novel and non-obvious (i.e. LSLR, BNWB+BNRS, and multi-step loss optimization). Overcoming gradient degradation issues by utilizing multi-step target-loss optimization which is annealed over time, is in our knowledge, done for the first time in this work. Furthermore, we provide novel contributions in the form of learning things \\u201cstep-by-step\\u201d.\\n\\nFor example, we propose that learning per-layer, per-step learning rates would benefit the system, more so than just learning per-layer learning rates and sharing them. The reason is that the model would be free to choose to decrease its learning rate or otherwise change it from step to step to reduce overfitting. This technique is both novel and non-obvious. Furthermore, LSLR is not something that is possible in standard deep learning, as learning the learning rates would require an additional level of abstraction (thus entering the meta-learning arena). \\n\\nAnother contribution with significant novelty comes in the form of proposing a step-by-step batch norm variant, designed for meta-learning systems that require inner loop optimization. Learning batch norm parameters for every step, as well as collecting per-step running statistics speeds up the system and allows batch normalization to truly work in this setting, whereas the previous variant of batch norm used, constrained things further, instead of achieving the improved convergence and generalization that batch norm can achieve in standard deep learning training setups. \\n\\nThe rest of the contributions, such as annealing the derivative order and using cosine scheduling for Adam are less novel, but nonetheless important to investigate. We show from our experiments that those approaches can improve the system, something which was previously unconfirmed. \\n\\nThe comparative performance (between MAML and MAML++) both in convergence speed and final generalization is significant and produces state of the art results. Furthermore, that performance is achieved far more consistently and with more stability across architectures. We hold the belief that the community would really benefit from this work, hence why we submitted it.\\n\\n1. Ravi, S. and Larochelle, H. (2016). Optimization as a model for few-shot learning.\"}",
"{\"title\": \"A paper with marginal novelty over an established framework.\", \"review\": \"[Summary]\\nThis work presents several enhancements to the established Model-Agnostic Meta-Learning (MAML) framework. Specifically, the paper starts by analyzing the issues in the original implementations of MAML, including instability during training, costly second order derivatives evaluation, missing/shared batch normalization statistics accumulation/bias, and learning rate setting, which causes unstable or slow convergence, and weak generalization. The paper then proposes solutions corresponding to each of these issues, and reports improved performance on benchmark datasets. \\n\\nPros\\nGood technical enhancements that fix some issues of a popular meta-learning framework\\nCons\\nLittle conceptual and technical novelty \\n\\n[Originality]\\nThe major problem I found in this work is the lack of conceptual and technical novelty. The paper basically picks up some issues of the well-established MAML framework, and applies some common practices or off-the-shelf technical treatments to fix these drawbacks and improve the training stability, convergence, or generalization, etc. E.g., it seems to me that the most effective enhancement comes from the use of adoption of learning rate setting (LSLR), or variant version of batch normalization (BNWB+BNRS) in Table 1, which have been the standard tricks to improve performance in the deep learning literature. Overall, the conceptual originality is little. \\n\\n[Quality]\\nThe paper does get most things well executed from the technical point of view. There does not seem any major errors to me. The results reported are also reasonable within the meta-learning context, despite lack of originality. \\n\\n[Clarity]\\nThe paper is generally well written and I did not have much difficulty to follow. \\n\\n[Significance]\\nThe significance of this work is marginal, given the lack of originality. The technical enhancements presented in the paper, however, may be of interest to people working in this area.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"In-depth discussions and improvements on MAML\", \"review\": \"In the work, the authors improve a simple yet effective meta-learning algorithm called Model Agnostic meta-learning (MAML) from various aspects including training instability, batch normalization etc. The authors firstly point out the issues in MAML training and tackle each of the issue with a practical alternative approach respectfully. The few-shot classification results show convincing evidence.\", \"some_major_concerns\": \"1. The paper is too specific about improving one algorithm, the scope of the research is quite narrow and I'm afraid that some of the observations and proposed solutions might not generalize into other algorithms;\\n2. Section 4, \\\"Gradient Instability \\u2192 Multi-Step Loss Optimization.\\\" I don't see clearly why the multi-step loss would lead to stable gradients. It causes much more gradient paths than the original version. I do see the point of weighting the losses from different step;\\n3. The authors should have conducted careful ablation study of each of the issues and solutions. The six ways of proposed improvements may make the the performance boost hard to understand. It would help to see which way of the proposed improvement contribute more than others;\\n4. Many of the proposed improvements are essentially utilizing annealing mechanisms to stabilize the training, including 1) anneals the weighting of the losses from different step; 2) anneal the second derivative to the first derivative;\\n5. For the last two improvements about the learning rate, there are dozens of literature on meta-learning learning rate and the proposed approach does not seem to be novel; \\n \\nMinors\\n1. The reference style is inconsistent across the paper, sometimes it feels quite messy. For example, \\\"Batch Stochastic Gradient Descent Krizhevsky et al. (2012)\\\" \\\"Another notable advancement was the gradient-conditional meta-learner LSTM Ravi & Larochelle (2016)\\\";\\n2. Equation (2) (3) the index b should start from 1, size of B should be 1 to B;\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Improving MAML\", \"review\": \"Paper summary - This paper provides a bag of sensible tricks for making MAML more stable, faster to learn, and better in final performance.\", \"quality___the_quality_of_the_work_is_strong\": [\"the results demonstrate that tweaks to MAML produce significant improvements in performance. However, I have some concern that certain portions of the text overclaim (see concerns section below).\", \"Clarity - The paper is reasonably clear, with some exceptions (see concerns section).\", \"Originality - The techniques described in the paper range from only mildly novel (e.g. MSL, DA), to very obvious (e.g. CA). Additionally, the paper's contributions amount to tweaks to a previously existing algorithm.\", \"Significance - The quality of the results make this a significant contribution in my view.\", \"Pros - Good results on a problem/algorithm of great current interest.\", \"Cons - Only presents (in some cases obvious) tweaks to a previous algorithm; clarity and overclaiming issues in the writeup.\", \"Concerns (please address in author response)\", \"The paper says \\\"we \\u2026 propose multiple ways to automate most of the hyperparameter searching required\\\". I'm not sure that this is true. The only technique that arguably removes a hyperparameter is LSLR. Even in this case, you still have to initialize the inner loop learning rates, so I'm not convinced that even this reduces hyperparameters. Perhaps I've missed something, please clarify.\", \"Section 4's paragraph on LSLR seems to say that you have a single alpha for each layer of the network. If this is right, then saying your method has a \\\"per layer gradient direction\\\" is very confusing. Each layer's alpha modulates the magnitude of that layer's update vector, but not its direction. The per-layer alphas together modify the direction of the global update vector. Perhaps I've misunderstood; equations describing exactly what LSLR does would be helpful. In any case, this should be clarified in the text.\", \"Suggestions (less essential than the concerns above)\", \"The write-up is redundant and carries unnecessary content. The paper would be better shorter (8 pages is not a minimum :)\", \"Section 1 covers a lot of background on the basics of meta-learning background that could be skipped. Other papers you cite (e.g. the MAML paper cover this).\", \"Section 2 goes into more detail about e.g. matching nets than is necessary.\", \"Section 2 explains MAML, which is then covered in much more detail in Section 3; better to leave out the Section 2 MAML paragraph.\", \"Sections 3 and 4 are very redundant. Combine them for a shorter (i.e., better!) paper.\", \"The paper says, \\\"Furthermore, for each learning rate learned, there will be N instances of that learning rate, one for each step to be taken. By doing this, the parameters are free to learn to decrease the learning rates at each step which may help alleviate overfitting.\\\" Does this happen empirically? Space could be freed up (see above) to have a figure showing whether or not this happens.\", \"The paper says, \\\"we propose MAML++, an improved meta-learning framework\\\" -- it's a little too far to call this a new framework. it's still MAML, with improvements.\", \"Typos\", \"\\\"4) increase the system\\u2019s computational overheads\\\" -> overhead\", \"\\\"composed by\\\" -> composed of\", \"\\\"Santurkar et al. (2018).\\\", \\\"Krizhevsky et al. (2012),\\\", \\\"Finn et al. (2017) \\\" -> misplaced citation parens\", \"\\\"a method that reduce\\\" -> reduces\", \"\\\"An evaluation ran consisted\\\" -> evaluation consisted\", \"The Loshchlikov and Hutter citation in the bibliography isn't right. It should be \\\"Sgdr: Stochastic gradient descent with restarts.\\\" (2016) instead of \\\"Fixing weight decay regularization in adam\\\" (2017).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Method of Learning Gradient Directions Clarification\", \"comment\": \"Meta-SGD learns alphas of dimensionality equal to the network parameters. Instead with LSLR we propose learning one alpha for each layer of the network. A component qualifies as a layer if it has learnable weights or biases in it. In addition, instead of just learning a learning rate and direction (alpha) for each layer to be used across all inner loop steps, we instead propose to learn different alphas for each inner loop step. This allows the network to choose to decay its alphas or otherwise change them, to maximize generalization performance (in some cases we noticed the network choosing to unlearn for some inner loop steps by using a negative learning rate and learn in others). So, to summarise, we learn one learning rate and direction for each layer for any given inner loop step. The network we used had 4 CNN layers along with a final softmax. That's a total of 5 layers, but since we learn learning rates for weights and biases separately, this means that the model learns a total of 10 learning rates and directions for any given step. For example, in the case where the model takes 5 inner loop steps, we have a total of 5 x 10 = 50 learning rates and directions, which is represented by 50 learnable parameters in the system.\"}",
"{\"comment\": \"When describing LSLR you likened your method to Meta-SGD, but in Meta-SGD the gradient direction is represented by the optimizer parameters \\\\alpha which has the same dimensionality as the learner parameters \\\\theta. In your method you claim that you reduce computational costs by learning \\\"per layer per step\\\" learning rates and directions. Can you please clarify how are your directions represented if not with the same number of parameters as used in Meta-SGD?\", \"title\": \"Method of Learning Gradient Directions Not Clear\"}",
"{\"title\": \"Re: Related Works with better results\", \"comment\": \"Thanks for your comment. Firstly, I'll reiterate that the main point of the paper is to improve MAML as a model itself. Furthermore, we did a very thorough literature review but missed out on the papers you have stated. The work in our paper had already taken full shape in May thus meaning that works 1 and 3 (that came later) escaped our radar. The second paper you mentioned, \\\"Neural Attentive Meta Learner\\\" was not included in many of the latest few-shot learning papers that came out in June 2018, thus making it harder for us to be aware of it. We did try to cover everything in the literature prior to starting our work, however as is often the case, one or two papers might escape ones review. Especially in this field, where papers keep coming out on a daily basis on arxiv. We shall add the approaches you mentioned in our result tables when editing is allowed again. Thank you for informing us of some literature we were previously unaware of.\"}",
"{\"comment\": \"https://arxiv.org/pdf/1805.08311.pdf has better Omniglot 5-way results and better Mini-Imagenet 5-way results\\nhttps://arxiv.org/pdf/1807.02872.pdf has better Mini-Imagenet 5-way 5-shot results\", \"title\": \"Related works that have better results are missing?\"}"
]
} |
|
Hygvln09K7 | Meta Learning with Fast/Slow Learners | [
"[email protected]"
] | Meta-learning has recently achieved success in many optimization problems. In general, a meta learner g(.) could be learned for a base model f(.) on a variety of tasks, such that it can be more efficient on a new task. In this paper, we make some key modifications to enhance the performance of meta-learning models. (1) we leverage different meta-strategies for different modules to optimize them separately: we use conservative “slow learners” on low-level basic feature representation layers and “fast learners” on high-level task-specific layers; (2) Furthermore, we provide theoretical analysis on why the proposed approach works, based on a case study on a two-layer MLP. We evaluate our model on synthetic MLP regression, as well as low-shot learning tasks on Omniglot and ImageNet benchmarks. We demonstrate that our approach is able to achieve state-of-the-art performance. | [
"computer vision",
"meta learning"
] | https://openreview.net/pdf?id=Hygvln09K7 | https://openreview.net/forum?id=Hygvln09K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkgtE3dgxN",
"Bke71ZD5nX",
"S1gXHkH5hm",
"SJlSVKRY2Q"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544748081513,
1541202138780,
1541193530675,
1541167405073
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1085/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1085/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1085/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1085/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper introduces an interesting idea of using different rates of learning for low level vs high level computation for meta learning. However, the experiments lack the thoroughness needed to justify the basic intuition of the approach and design choices like which layers to learn fast or slow need to be further ablated.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Useful idea, requires more thorough experiments\"}",
"{\"title\": \"The paper addresses fast/slow learning modules in deep networks. Good paper. Needs more clarity/work.\", \"review\": \"The overall contribution makes sense. Consider solving a linear system i.e., learning an unknown matrix. Splitting it into two components (like in NMF or MMF) and learning each separately gives more control on the conditioning of the matrices. This is the basis of residual networks (at least the theory for linear resnets). Within this, the technical/theoretical results presented in the paper are sensible. Couple of issues:\\n1) Where are we breaking the slow/fast learners in terms of the depth of the network? I.e., How many of the layers are slow? Does this break point influence the overall convergence? \\n2) It is unclear what the aim of simulations is? The reported figures are not conveying useful information. It makes sense to do a repeatability experiment here with multiple sets of simulated datasets. \\n3) Put confidence intervals on the results (table/figure). \\n4) What is the nature and choice of g()? The evaluations uses LSTM but will the structure of g() influence the rate of learning? \\n5) The authors should choose a better reference than miracle for the\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting treatment to meta-learning with fast/slow learning components.\", \"review\": \"[Summary]\\nThe paper presents a novel learning framework for meta-learning that is motivated by neural learning process of human over long periods. Specifically, the process of meta-learning is divided into a slow and a fast learning modules, where the slowly-learnt component accounts for low-level representation that is progressively optimized over all data seen so far to achieve generalization power, and the fastly-learnt component is supposed to pick up the target in a new task for quick adaptation. It is proposed that meta-learning should focus on capturing the meta-information for the fast learning module, and leave the slow module being updated steadily without task-specific adaptation. Theoretical analysis is presented on a linear MLP examples to shed some light on the properties of the proposed algorithm. Results on both synthetic dataset and benchmarks justify the theoretical observation and advantages. \\n\\nPros\\nNovel treatment and formulation of meta-learning from the perspective of fast and slow learning process\\nCons\\nSome interesting cases not tested\\nPresentation could be improved \\n\\n[Originality]\\nThe paper approaches the recently popular meta-learning from a novel perspective by decomposing the learning process into slow and fast ones. \\n\\n[Quality]\\nOverall, the paper is well motivated and implemented with both theoretical study and empirical justification. There are a few questions / areas for further improvements, though:\\n- It seems that to initialize the slow module, another set of data is needed to pretrain it before the actual meta-learning takes place to learn to optimize the fast learner (as opposed to other meta-learning methods where all parameters in a base model were meta-learnt over the meta training set). How does this affect the performance? E.g., what if the slow module is only updated over the meta-training set (still without reinitialization across different batches) without pre-training?\\n- In the current formulation, the base model is decomposed into two distinct (slow and fast) modules. What is the rule to decide which layers should belong to slow or fast modules? How does different choice affect the performance? Can we decompose the base model into finer granularities for different learning behaviors? E.g., a third module module in-between the fast and slow ones that follows medium learning pace. \\n- The theoretical study can be better organized. The proofs can be left in appendix to make room for more discussion on conclusions, non-linear and / or non-Gaussian cases. \\n- The write-up can be improved too at some places: proper reference at line 4 of section 1 is missing; \\\\phi in (1) is not well defined, as well as \\u201cSOA\\u201d in section 2;\\n\\n[Clarity]\\nThe paper is generally clearly written, with a few places to improve (see comments above).\\n\\n[Significance]\\nThe paper brings in an interesting perspective to meta-learning. It can also inspire more follow-up work to better understand the problem.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Training slow and fast learners using different strategies is an interesting idea.\", \"review\": \"[Summary:]\\nThis paper presents a meta-learning architecture where the slow learner is trained by SGD and the fast learner is trained according to what the meta-learner guides. CNN is split into two parts: (1) bottom conv layers devoted to learn meaningful representation, which is referred to as slow learner; (2) top-fully connected layers involving task-specific fast learners. As in [Andrychowicz et al., 2016], the meta-learner guides the training of task-specific learners. In addition, slow learners are trained by SGD. The motivation is that low-level features should be meaningful everywhere while high-level features should vary wildly. They introduce \\u201cmiracle representations\\u201d and prove that fast/slow learning on a two-layer linear network should converge to somewhere near this miracle representation. They evaluate on few-shot classification benchmarks to evaluate how well this fast/slow meta-learning approach works.\\n\\n[Strengths:]\\nThe paper has a clear motivation. It is easy to read. Training slow/fast learners using different strategies is an interesting idea. \\n\\n[Weaknesses:]\\n- The technique used in this work is a mix of SGD and [Andrychowicz et al., 2016].\\n- The analysis is limited to a simple two-layer linear network. It is not clear whether this analysis is carried over to the proposed deep nets. \\n- Quantitative results did not compare to recent results such as Reptile[1] or MT-Nets[2].\\n\\n[Specific comments:]\\n- The current work is an improvement over [Andrychowicz et al., 2016], claiming that training conv layers and fully-connected layers with different strategies improves the generalization. I am wondering why the comparison to [Andrychowicz et al., 2016] is missing. You can use (fully) pre-trained CNN (which already learns meaningful representation using a huge amount of data) in the framework of [Andrychowicz et al., 2016]. \\n-As one of the points of the paper is that this meta-learning strategy enables life-long learning, it would have been nice to see an experiment using this, where the distribution of tasks changes as time goes on.\\n-The paper says SOA(State Of the Art); I think the term SOTA(State Of The Art) is more commonly used.\\n-The use of the term \\u201cmiracle\\u201d keeps changing(miracle solution, miracle representation, miracle W, miracle knowledge); the paper would be clearer if only one \\u201cmiracle X\\u201d was defined and used as these are all essentially saying the same thing.\\n\\nReferences\\n[1]https://arxiv.org/abs/1803.02999\\n[2]https://arxiv.org/abs/1801.05558\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1gDgn0qY7 | A Study of Robustness of Neural Nets Using Approximate Feature Collisions | [
"Ke Li*",
"Tianhao Zhang*",
"Jitendra Malik"
] | In recent years, various studies have focused on the robustness of neural nets. While it is known that neural nets are not robust to examples with adversarially chosen perturbations as a result of linear operations on the input data, we show in this paper there could be a convex polytope within which all examples are misclassified by neural nets due to the properties of ReLU activation functions. We propose a way to find such polytopes empirically and demonstrate that such polytopes exist in practice. Furthermore, we show that such polytopes exist even after constraining the examples to be a composition of image patches, resulting in perceptibly different examples at different locations in the polytope that are all misclassified. | [
"neural nets",
"robustness",
"examples",
"polytopes",
"study",
"approximate feature collisions",
"recent years",
"various studies",
"robust",
"perturbations"
] | https://openreview.net/pdf?id=H1gDgn0qY7 | https://openreview.net/forum?id=H1gDgn0qY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1lyuhKxxV",
"BJlUHIpjC7",
"S1g9bI6j0Q",
"rJeVuraoC7",
"rygTSrpi0X",
"ByeriKLT27",
"Hylxvzg9nQ",
"HklJbSZFnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544752231431,
1543390781847,
1543390721878,
1543390571945,
1543390533203,
1541396892970,
1541173847928,
1541113078535
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1084/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1084/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1084/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1084/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1084/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1084/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1084/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1084/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a novel view on adversarial examples, where models using\\nReLU are inherently sensitive to adversarial examples because ReLU activations\\nyield a polytope of examples with exactly the same activation. Reviewers\\nfound the finding interesting and novel but argue it is limited in impact.\\nI also found the idea interesting but the paper could probably be improved\\nas all reviewers have remarked. Overall, I found it borderline but probably not enough for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Response to your review\", \"comment\": \"Thank you for your review. Below are our responses to your questions and concerns:\", \"q1\": \"The perturbation set is generally a high-dimensional polytope. Although it has a compact representation in terms of intersection of hyperplanes, it may have many more vertices, so the endeavor of attempting to characterize all the vertices of this polytope may be infeasible.\", \"a1\": \"This is true; because the purpose of the paper is to demonstrate the existence of the polytope, our method only finds a subset of the vertices and therefore a subset of the polytope. Existence of this subset implies existence of a polytope that is as large as this subset, and so is an interesting finding. The mere existence of this subset demonstrates that an arbitrary number of examples with feature collisions can be generated.\", \"q2\": \"This technique of generating adversarial examples from combinations of image patches seems generally applicable, but it does not seems to produce good results here. The perturbations are still unnatural looking (eg. the images in Figure 7 are not exactly natural looking).\", \"a2\": \"The purpose of our paper is *not* to generate adversarial examples, but rather to demonstrate the existence of polytopes within which all examples share similar feature activations and therefore classifications. Our goal is to find a *large* polytope that has *similar* feature activations, whereas the goal of adversarial examples is to find an image with *small* perturbation that results in a *different* classification. So, the aim is to *maximize* perturbations, not to *minimize* perturbations. The purpose of generating examples from combinations of image patches is to find corners of the polytope that are perceptually distinct, rather than to generate natural-looking images.\"}",
"{\"title\": \"Response to your review\", \"comment\": \"Thank you for your review. First we would like to clarify some misunderstandings:\\n\\nThe purpose of the paper is *not* to generate adversarial examples, but rather to demonstrate the existence of polytopes within which all examples share similar feature activations and therefore classifications. The paper demonstrates that such polytopes can be found in the neighbourhood of *any* example (that we tested) and can be made to be classified as *any* target class by the neural net. Now, the question is: why is this interesting? \\n\\n- This shows that a neural net can map an infinite number of examples to the same or similar feature activations. This may not be desirable, because the different examples in the polytope are visually different (especially under the macro-level setting), but the neural net is essentially \\u201cblind\\u201d to these changes in visual appearance because the feature activations and therefore the classifications don\\u2019t change. \\n- This cannot be easily fixed by augmenting the training set because the polytopes arise from the properties of ReLU activation functions; training the neural net on a different training set can only make the weights of the neural net different and so cannot in general eliminate the existence of the polytopes.\", \"this_is_different_from_the_adversarial_example_setting_in_two_important_ways\": [\"The goal of adversarial examples is to find an example that is close to the initial image. Our goal is to find corners of the polytope that are as *far* from the initial image as possible. (In Tables 1, 2, 3 and 4, we show that the corners of the polytope are fairly far apart compared to the average pairwise distances between images from the dataset, which is desirable in our setting because this means the polytope is large.)\", \"The goal of adversarial examples is to find an example that is classified as another category. 
Our goal is to find a polytope that results in feature activations that are the *same* as the feature activations of some target image. The latter is in some sense stronger, since matching feature activations will guarantee matching classifications, but not the other way around. It is also different from the goal of adversarial examples, since the target image could be chosen to be in the same class as the initial image. (We show such an example in Figure 8.) This would not be a problem from the perspective of adversarial examples, but is still undesirable, because the target image can be visually very different from the initial image, but the feature activations are very similar.\"], \"below_are_our_responses_to_your_questions_and_concerns\": \"\", \"q1\": \"The experiments are very limited and show just 5 examples of generated images on MNIST and ImageNet.\", \"a1\": \"We have to show a limited number of images for reasons of space - showing more images would be somewhat boring and would not add much value to the paper, since the point of the paper was to show the existence of polytopes where features collide. The algorithm for finding such polytopes is simple enough for any reader to implement on their own and verify the existence of polytopes around images of their own choosing.\", \"q2\": \"In Sect 3.2 it is observed that it is hard for human eyes to notice the difference but that is clearly not the case for the figure reported. The same for Fig. 7 on the macro-level which are even more distorted.\", \"a2\": \"As explained above, our goal is different from that of adversarial examples - the point is to *maximize* distortion, not to *minimize* distortion. Our goal is to find a *large* polytope that has *similar* feature activations, whereas the goal of adversarial examples is to find an image with *small* distortion that results in a *different* classification.\", \"q3\": \"No comparison with other methods to generate adversarial examples are reported (e.g. 
Shafani et al 2018, Szegedy et al. 2013).\", \"a3\": \"As explained above, since our goal is *not* to generate adversarial examples, it would not be possible to compare to methods for generating adversarial examples.\", \"q4\": \"Figure 2, Figure 3 show the results, but it would also be interesting to observe what happens from the starting image to the final generated images.\", \"a4\": \"We have added intermediate images in Appendix C for all three macro-level difference experiments.\", \"q5\": \"The observation is only applicable to ReLU activations (but other activation functions may be in the last layer), limiting the impact of the paper.\", \"a5\": \"The ReLU activations do not have to be in the last layer for our method to work - as long as there are ReLU activations in *some* layer, then our method is applicable. In fact, in our paper, we used the first fully-connected layer as opposed to the last layer for finding feature collisions. The idea is that once the feature activations of an earlier layer collide, the feature activations of the following layers will collide as well. Because ReLU activations are quite common in neural net architectures, our method is broadly applicable.\"}",
"{\"title\": \"Response to your review (2/2)\", \"comment\": \"Q3: I'm not convinced that we can't come up with linear combinations of these patches that produce highly non-natural images with \\\"micro-level\\\" adversarial patterns ... Section 4.1: Why do you need a total variation penalty at all if you have constructed a patch-based drawing method that is supposed to be unable to produce unnatural high-frequency patterns?\", \"a3\": \"We did not make either of these claims in our paper - specifically, we made no claims about the impossibility of finding a linear combinations of patches to produce an arbitrary image, or about the utility of a patch-based parameterization without a total variation penalty. So, we are not sure why this is a criticism. In fact, it is obvious that it *is* possible to find a linear combination of patches to produce an arbitrary image (if given enough patches), which is a simple consequence of basic linear algebra. This is precisely why we are constraining the space of control parameters when performing optimization - we always enforce a *convex* combination of patches and use a regularizer to encourage spatial smoothness in the coefficients on the patches. We did not claim that having this regularizer is somehow undesirable or unnecessary; in fact, we very much designed this regularizer to go hand-in-hand with the patch-based parameterization.\", \"q4\": \"The examples actually look more suspicious than regular adversarial examples, since it looks like the original image has simply been blurred, which means the adversarial perturbations are more clear.\", \"a4\": \"As explained above, the goal of the paper is not to find adversarial examples - rather it is to find corners of a polytope that causes feature collisions with a target example, so that we can show such polytopes exist. 
Our goal is to find a *large* polytope that has *similar* feature activations, whereas the goal of adversarial examples is to find an image with *small* distortion that results in a *different* classification.\", \"q5\": \"Isn't a bounded polytope called a \\\"simplex\\\"? Perhaps there is a distinction that I'm not aware of, but the absence of the word \\\"simplex\\\" throughout the whole paper surprised me a bit. Perhaps this is a perfectly correct omission due to differences that I'm not aware of.\", \"a5\": \"A simplex in d-dimensional space can only have d+1 vertices (since they all have to be affinely independent), whereas a convex polytope doesn\\u2019t have this requirement. This means that a simplex is a bounded convex polytope, but not all bounded convex polytopes are simplices. Because any bounded convex polytope can be decomposed into simplices, the existence of a polytope implies the existence of a simplex, but the former is a stronger statement, which is why we talk about polytopes rather than simplices.\\n\\nThanks for the fixes; we\\u2019ve updated our paper and incorporated them.\"}",
"{\"title\": \"Response to your review (1/2)\", \"comment\": \"Thank you for your review. First, we would like to clarify some misunderstandings:\\n\\nThe purpose of the paper is *not* to generate adversarial examples, but rather to demonstrate the existence of polytopes within which all examples share similar feature activations and therefore classifications. The paper demonstrates that such polytopes can be found in the neighbourhood of *any* example (that we tested) and can be made to be classified as *any* target class by the neural net. Now, the question is: why is this interesting? \\n\\n- This shows that a neural net can map an infinite number of examples to the same or similar feature activations. This may not be desirable, because the different examples in the polytope are visually different (especially under the macro-level setting), but the neural net is essentially \\u201cblind\\u201d to these changes in visual appearance because the feature activations and therefore the classifications don\\u2019t change. \\n- This cannot be easily fixed by augmenting the training set because the polytopes arise from the properties of ReLU activation functions; training the neural net on a different training set can only make the weights of the neural net different and so cannot in general eliminate the existence of the polytopes.\", \"this_is_different_from_the_adversarial_example_setting_in_two_important_ways\": [\"The goal of adversarial examples is to find an example that is close to the initial image. Our goal is to find corners of the polytope that are as *far* from the initial image as possible. (In Tables 1, 2, 3 and 4, we show that the corners of the polytope are fairly far apart compared to the average pairwise distances between images from the dataset, which is desirable in our setting because this means the polytope is large.)\", \"The goal of adversarial examples is to find an example that is classified as another category. 
Our goal is to find a polytope that results in feature activations that are the *same* as the feature activations of some target image. The latter is in some sense stronger, since matching feature activations will guarantee matching classifications, but not the other way around. It is also different from the goal of adversarial examples, since the target image could be chosen to be in the same class as the initial image. (We show such an example in Figure 8.) This would not be a problem from the perspective of adversarial examples, but is still undesirable, because the target image can be visually very different from the initial image, but the feature activations are very similar.\"], \"below_are_our_responses_to_your_questions_and_concerns\": \"\", \"q1\": \"First of all, the 5 corners of the polytope all look the same to me ... this means the polytope is not that interesting and has only found an extremely small pocket of adversarial examples.\", \"a1\": \"It\\u2019s important to distinguish between two distinct questions: whether the polytope is small and whether the different examples in the polytope are perceptibly different. In Figure 3 for example, the average pairwise distance between the corners of the polytope is 1/4 to 1/8 of the average pairwise distance between images from ImageNet (as shown in Table 2), and so the polytope is not small. One can argue, however, that the corners in Figure 3 are not perceptibly different, which is why we introduced a method for finding polytope corners that are visually different at the macro-level.\", \"q2\": \"If you use a regular method of finding a single adversarial example, I'm sure the outcome wouldn't change within some ball around the sample (perhaps with very small radius, but nonetheless). 
In fact, a comparison between that ball's volume and the volume of the polytope would be interesting.\", \"a2\": \"As explained above, because our goal is *not* to find adversarial examples, we cannot compare to a regular adversarial example method. However, we have updated our paper to include a comparison in the appendix to a baseline that is similar to what you are suggesting in spirit, which is to use a ball centred at the centroid of the polytope that we find, whose radius is similar to the radius of the polytope. Specifically, we randomly select 2000 examples inside a ball centred at the centroid of our polytope whose radius is the minimum distance between the centroid and a corner of the polytope and compare the percentage of these examples that are classified as (or in other words, collide with) the target class. As shown in Table 5, only a small fraction of examples collide with the target class, compared to 100% success rate when drawing samples from the polytope. This demonstrates that the polytope we find is interesting and cannot be trivially replaced with a ball.\\n\\n(continued below)\"}",
"{\"title\": \"interesting observation and techniques, but results leave something to be desired\", \"review\": \"This paper studies a non-local form of adversarial perturbation, which, to my limited knowledge is new. The form of the perturbation is specific to ReLU activations, but it may be a large set. The authors also devise an algorithm to generate natural-looking perturbations in this set. Instead of updating a seed example through gradient descent, they propose to generate perturbations by combinations of image patches from multiple seed images. The weights of the combination are optimized by a gradient descent like algorithm in a similar manner as standard gradient-based approaches to generating adversarial examples. This produces perturbations that look like ```in-paintings'' or transplants of one seed image onto another. Here are a few comments:\\n\\n1. The perturbation set is generally a high-dimensional polytope. Although it has a compact representation in terms of intersection of hyperplanes, it may have many more verticies, so the endeavor of attempting to characterize all the verticies of this polytope may be infeasible. \\n\\n2. This technique of generating adversarial examples from combinations of image patches seems generally applicable, but it does not seems to produce good results here. The perturbations are still unnatural looking (eg. the images in Figure 7 are not exactly natural looking).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting but limited study\", \"review\": [\"This paper follow recent trend of adversarial examples which is on generating images with small differences in the input space, but that are misclassified by a large margin by a neural net. The key idea of the paper is that any negative component before a ReLU activation share the same zero feature after the ReLU. Thus, any neural network that has ReLU activations have a polytope in the input space that will have identical activations in the later layers. Based on this observation, the paper assert that such polytope always exist and describe how to find its corners with a gradient descent based method. Two simple experiments on MNIST and ImageNet datasets are carried to show the feasibility of the method in practice and the existence of images with feature collision, together with their average L2 distance from real images. Since the images are clearly not \\\"natural\\\" images, a further method based on selecting patches of real images is reported and tested on ImageNet. This shows that the approach can be further applied on macro-level differences.\", \"Strengths\", \"The observation of the existence of the polytope in presence of ReLU activation is interesting and can probably be used to further refine attacks for generating adversarial examples.\", \"The paper is clear and is comprehensive of all the basic steps.\", \"Examplar experiments show the possibility of using the key idea to generate adversarial examples\"], \"weaknesses\": [\"The experiments are very limited and show just 5 examples of generated images on MNIST and ImageNet. In Sect 3.2 it is observed that it is hard for human eyes to notice the difference but that is clearly not the case for the figure reported. The same for Fig. 7 on the macro-level which are even more distorted. Although this is minor, since the method is still shown to be working, the statements on the similarity of images seem incorrect. 
Beside the qualitative examples, the measurement of average similarity based on L2 is not so indicative at the perception level, but still interesting to see.\", \"No comparison with other methods to generate adversarial examples are reported (e.g. Shafani et al 2018, Szegedy et al. 2013).\"], \"minor_issues\": [\"Figure 2, Figure 3 show the results, but it would also be interesting to observe what happens from the starting image to the final generated images.\", \"Personally, I prefer to see related work after the introduction section. Reading it at the end breaks the flux of the paper.\", \"The observation is only applicable to ReLU activations (but other activation functions may be in the last layer), limiting the impact of the paper.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting premise of adversarial polytopes, but fall on implication and attempt to create macro-level different examples.\", \"review\": \"This paper presents an algorithm for finding a polytope of adversarial examples. This means that within a convex hull, you can move around freely and get a new adversarial example at each point, while still maintaining misclassification. It then couples this with a method of generating nearest neighbor patch-based images in an effort to create \\\"macro-level different\\\" examples. The premise is interesting, but the implications are questionable and I do not find the work in macro-level differences to be sound. This could be based in misunderstandings, so please let me know if you think that is the case.\", \"strengths\": [\"The notion of the polytope is interesting and the algorithm for finding such polytope seems perfectly reasonable.\", \"I think the goal of macro-level adversarial examples is interesting.\"], \"weaknesses\": [\"First of all, the 5 corners of the polytope all look the same to me (for instance fig 2). This is not encouraging, because it means that every single point in the polytope will also look exactly like the corners. To be frank, this means the polytope is not that interesting and has only found an extremely small pocket of adversarial examples. If you use a regular method of finding a single adversarial example, I'm sure the outcome wouldn't change within some ball around the sample (perhaps with very small radius, but nonetheless). In fact, a comparison between that ball's volume and the volume of the polytope would be interesting.\", \"The implication of these polytopes is not at all clear if it doesn't really allow us to generate adversarial example of a new flavor. 
The investigation into macro-level differences does not help the case, as I will explain.\", \"I am not at all convinced that there is any meaning to the examples with \\\"macro-level differences.\\\" It's a bit unclear to me how many patches are used per image, but assuming that a patch is centered over each pixel, it would mean that we have as many control parameters as we have pixels, which assuming the pixels each have three color values, is just 1/3 of the original degrees of freedoms. Now, the patches probably do constrain what we can paint a bit, but since the patches are applied with a pyramid, it means the center pixel will contribute more than any other for a given patch, so I'm not so sure. I'm not convinced that we can't come up with linear combinations of these patches that produce highly non-natural images with \\\"micro-level\\\" adversarial patterns. In fact, I think section 4.1 and figure 7 provide evidence to the contrary. Let me explain:\", \"Section 4.1: Why do you need a total variation penalty at all if you have constructed a patch-based drawing method that is supposed to be unable to produce unnatural high-frequency patterns? If you only had a handful of patches and they were all non-overlapping, then this would be impressive and.\", \"Figure 7: We can clearly see high-frequency patterns that create the shadow of an obelisk in 7(a). I think the same is true for \\\"erase\\\", although the pattern is not as recognizable. The examples actually look more suspicious than regular adversarial examples, since it looks like the original image has simply been blurred, which means the adversarial perturbations are more clear. I understand that these patterns were created using a complicated scheme of natural patches, but I think you made this method too powerful. 
The one interesting quality is the bottom right of the trimaran which looks like a shark - however, that is a singular occurrence in your examples and it certainly feels like the high-frequency patterns will contribute much more to class than the shark itself.\", \"Please let me know if I am misinterpreting the importance of the results in Figure 7, since this is an important culmination of this work.\"], \"other_comments\": [\"Some of notation is a bit confusing. In (1), why is p not bold but x and t are bold? They are all vectors. In Algorithm 1, x is not bold anymore.\", \"Algorithm 1 also seems quite unnecessary to include so explicitly.\", \"Isn't a bounded polytope called a \\\"simplex\\\"? Perhaps there is a distinction that I'm not aware of, but the absence of the word \\\"simplex\\\" throughout the whole paper surprised me a bit. Perhaps this is a perfectly correct omission due to differences that I'm not aware of.\"], \"minor_comments\": [\"abstract, \\\"We propose a way to finding\\\" -> either \\\"to->\\\"of\\\" or \\\"find\\\"\", \"page 3, \\\"and we can generate new colliding example\\\" -> \\\"a new colliding example\\\"\", \"page 3, \\\"taking arbitrary an convex combinations\\\" -> \\\"combination\\\"\", \"page 3, \\\"Given a target x\\\", I think you mean \\\"Given a target t\\\"\", \"page 5, \\\"As many gradient-based method\\\" -> \\\"methods\\\"\", \"page 8, \\\"carton\\\"? \\\"rubber\\\"? Those are not in figure 7(b).\", \"page 10, \\\"are crucial to less non-robust\\\" ? This sentence (which is the final sentence of the conclusion and thus has a certain level of importance) is not something that is novel to your paper. The impact of non-linearities on adversarial examples have been well-studied.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyxPx3R9tm | Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow | [
"Xue Bin Peng",
"Angjoo Kanazawa",
"Sam Toyer",
"Pieter Abbeel",
"Sergey Levine"
] | Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. | [
"reinforcement learning",
"generative adversarial networks",
"imitation learning",
"inverse reinforcement learning",
"information bottleneck"
] | https://openreview.net/pdf?id=HyxPx3R9tm | https://openreview.net/forum?id=HyxPx3R9tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJewlYGWgV",
"S1ljRntcT7",
"BJeNdhYq67",
"B1eIr2t5Tm",
"ByxhshHL6Q",
"BJxmQSxU6Q",
"SylvFMGra7",
"Byl41tz9nX",
"Bkx6mnnK3Q",
"rJx9PNrv3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544788207388,
1542261970978,
1542261868103,
1542261821707,
1541983395776,
1541960986827,
1541902975396,
1541183707927,
1541159973060,
1540998241794
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1083/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1083/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1083/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1083/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1083/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1083/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1083/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1083/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a simple and general technique based on the information bottleneck to constrain the information flow in the discriminator of adversarial models. It helps to train by maintaining informative gradients. While the information bottleneck is not novel, its application in adversarial learning to my knowledge is, and the empirical evaluation demonstrates impressive performance on a broad range of applications. Therefore, the paper should clearly be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Intuitive idea that leads to impressive results!\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Thank you for the insight and feedback. We have included additional experiments to further compare with previous techniques, along with some additional clarifications.\", \"re\": \"saliency maps\\nWe have added a colormap to Figure 5. The colors on the saliency map represent the magnitude of the discriminator\\u2019s gradient with respect to each pixel and color channel in the input image. The gradients are visualized for each color channel, which results in the different colors. The same procedure is used to compute the gradients for GAIL.\"}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Thank you for the insight and feedback, we have included new experiments in the paper, along with some additional clarifications.\", \"re\": \"Adapt beta based on gradient magnitudes\\nYes, it might be possible to formulate a similar constraint for adaptively updating beta according to the gradient magnitudes. A constraint on the gradient norm can be added, then a Lagrangian can be constructed in a similar manner to yield an adaptive update for beta.\"}",
"{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thank you for the insight and suggestions. We have added additional experiments and clarifications to the paper that aim to address each of your concerns -- we would really appreciate it if you could revisit your review in light of these additions and clarifications.\", \"re\": \"Spectral norm\\nWe have included additional image generation experiments with spectral normalization [Figure 8]. Spectral normalization does show significant improvement over the vanilla GAN on CIFAR-10 (FID: 23.9), but our method still achieves a better score (FID: 18.1). The original spectral normalization paper [Miyato et al., 2018] reported an FID of 21.7 on CIFAR-10.\"}",
"{\"comment\": \"Thanks for your response clarifying one part of the comment.\\n\\nWith respect to all the \\\"We never claimed ...\\\", the writing did not have factually false claims. However, isn't it normal to interpret that a statement like \\\"previous approaches used larger batch sizes and multiple GPUs and our approach did not\\\" is intended to \\\"sound\\\" as a contribution in comparison to prior work? 24 is larger than 8. 256 is also larger than 8. 2048 is also larger than 8. But it's not the same \\\"larger\\\". One is doable with a single V100. Another is doable with 32 V100s. Third is doable only on TPU. Wouldn't it make sense to say \\\"We used smaller batch size (8 instead of 24 as in Mescheder et al) on a single V100 and trained for fewer iterations because of resource constraints. We also generate at full resolution directly as in Mescheder et al instead of progressive growing done in Karras et al\\\"? Thanks for agreeing to refine the writing.\", \"title\": \"Response to Clarification\"}",
"{\"title\": \"Clarification\", \"comment\": \"Thank you for your comment.\\n\\nThe authors of the paper are not active on reddit and we do not have control over what reddit users post about our paper.\\n\\nWe used a batch size of 8 in our work, and we mention this in the paper for completeness, and since this is a bit different from Mescheder et al., who used a batch size of 24 with 4 GPUs. We do not state that the batch size from Mescheder et al. is \\u201cextremely large\\u201d in our paper, we state that it is \\\"larger\\\" than 8, which is factually true (it\\u2019s not clear how to state this in any other way\\u2026). We did not claim that the smaller batch size of 8 is a contribution of our work, and we did not claim that our paper is the first to train high-resolution GANs without progressive growing of resolution. We do have results for a network trained for 300k iterations and we will add these results to the paper.\\n\\nWe will refine the wording for the image generation experiments to further avoid these misinterpretations.\"}",
"{\"comment\": \"\\\"CelebAHQ: VGAN can also be trained on on CelebAHQ Karras et al. (2018) at 1024 by 1024 resolution directly, without progressive growing (Karras et al., 2018). We use Ic = 0.1 and train with VGAN-GP. We train on a single Tesla V100, which fits a batch size of 8 in our experiments. Previous approaches (Karras et al., 2018; Mescheder et al., 2018) use a larger batch size and train over multiple GPUs. While previous approaches have trained this for 300k iterations or more, our results are shown at 100k iterations.\\\"\\n\\nEven though the authors don't intend to, this statement is likely to be misinterpreted that VGAN is the first GAN paper to show high resolution GAN samples without progressive growing of resolution or large batch sizes. \\n\\nThe batch size used in Mescheder et al is 24 while the authors use 8. Why would you call 24 \\\"large\\\" and 8 \\\"small\\\"? Secondly, 100k iterations is sufficient to start seeing good samples with most GAN architectures when the architecture uses residual connections and more iterations are needed to get more modes and sharper samples. You have shown a total of 8 samples. It is hard to say whether or not they were carefully picked. \\n\\nAs evidence for why this is likely to be misleading, I am quoting a comment from reddit: \\\"Also of note: training 1024px image GANs without extremely large minibatches, progressive growing, or self-attention, just a fairly vanilla-sounding CNN and their discriminator penalization.\\\" Not providing the link because that breaks the anonymity of the paper. \\n\\nNeither is it claimed or shown by the authors that Mescheder et al's model wouldn't produce good samples with a lower batch size or fewer (100K) iterations. 
The benefit of getting it to work at large resolution comes from the careful architecture designed by Mescheder et al and not from the bottleneck.\", \"two_more_issues_with_the_claims_made_in_the_cifar_10_fid_metrics_section\": \"(a) \\\"VGAN is competitive with WGAN-GP and GP\\\": The gap between VGAN and WGAN-GP is larger than that between WGAN-GP and VGAN-GP. But the improvement over WGAN-GP is considered \\\"significant\\\" whereas the other gap is considered \\\"competitive\\\"? (b) Is there any reason to show the metrics at the end of 750K iterations specifically? The plot shows that the WGAN-GP training curve has a bigger negative slope at the cutoff point (750k) while VGAN-GP has flattened by then. It is worth showing the readers what happens when you train even a bit more, i.e. 1 million iterations, when the difference isn't even that significant. Even though \\\"VDB and GP are complementary techniques\\\" in spirit, empirical conclusions may often not turn out to be the case.\", \"title\": \"GAN experiments writing indicating incorrect interpretations?\"}",
"{\"title\": \"A constraint on the discriminator of the GAN model to maintain informative gradients\", \"review\": \"This paper proposes a constraint on the discriminator of the GAN model to maintain informative gradients. This is achieved by constraining the mutual information between the observations and the discriminator\\u2019s internal representation to be no larger than a predefined value. The idea is interesting and the discussions of applications in different areas are useful. However, I still have some concerns about the work:\\n1.\\tIn the experiments on image generation, it seems that the proposed method does not obviously enhance performance when compared to GP and WGAN-GP. Why does the combination of VGAN and GP enhance the performance greatly (how are they complementary to each other), and what is the performance when combining VGAN with WGAN-GP?\\n2.\\tHow do you combine VGAN and GP? Is there any parameter to balance their effect?\\n3.\\tThe authors state on page 2 that \\u201cthe proposed information bottleneck encourages the discriminator to ignore irrelevant cues, which then allows the generator to focus on improving the most discerning differences between real and fake samples\\u201d; this statement should be supported by theoretical or experimental evidence.\\n4.\\tIs it possible to apply GP and WGAN-GP to the motion imitation or adversarial inverse reinforcement learning problems? If so, will they perform better than VGAN?\\n5.\\tHow does VGAN compare with spectral norm GAN?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Innovative technique, impressive results\", \"review\": \"The paper \\\"Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow\\\" tackles the problem of discriminator over-fitting in adversarial learning. Balancing the generator and the discriminator is difficult in generative adversarial techniques, as an overly strong discriminator prevents the generator from converging toward effective distributions. The idea is to introduce an information constraint on an intermediate layer, called an information bottleneck, which limits the content of this layer to the most discriminative features of the input. Based on this limited representation of the input, the discriminator is constrained to longer-tailed distributions, maintaining some uncertainty on simulated data distributions. Results show that the proposal outperforms previous research on discriminator over-fitting, such as adding noise to the discriminator inputs.\\n\\nWhile the use of the information bottleneck is not novel, its application in adversarial learning looks innovative and the results are impressive in a broad range of applications. The paper is well-written and easy to follow, though I find that it would be nice to give more insights on the intuition about the information bottleneck in the preliminary section to make the paper self-contained (I had to read the previous work from Alemi et al (2016) to realize what the information bottleneck can bring). My only question is about the setting of the constraint Ic: wouldn't it be possible to consider an adaptive version which could consider the amount of zero gradients returned to the generator?\", \"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good showcase of the application and benefits of the VIB in GANs, minor corrections suggested.\", \"review\": \"Summary:\\nThe authors propose to apply the Deep Variational Information Bottleneck (VIB) method of [1] to discriminator networks in various adversarial-learning-based scenarios. They propose a way to adaptively update the beta hyper-parameter to respect the constraint on I(X,Z). Their technique is shown to stabilize/allow training when P_g and P_data do not overlap, similarly to WGAN and gradient-penalty based approaches, by essentially pushing their representation distributions (p_z) to overlap with the mutual information bottleneck. It can also be considered an adaptive version of instance noise, which serves the same goal. The method is evaluated on different adversarial learning setups (imitation learning, inverse reinforcement learning and GANs), where it compares positively to most related methods. Best results for \\u2018classical\\u2019 adversarial learning for image generation are however obtained when combining the proposed VIB with gradient penalty (which by itself outperforms the VGAN in this case).\", \"pros\": [\"This paper brings a good amount of evidence of the benefits of applying the VIB formulation to adversarial learning by first showing the effect of such an approach on a toy example, and then applying it to more complex scenarios, where it also boosts performance. 
The numerous experiments and analyses have great value and are a necessity as this paper mostly applies the VIB to new learning challenges.\", \"The proposition of a principled way of adaptively varying the value of beta to more closely respect the constraint I(X,Z) < I_c, which to my knowledge [1] does not perform, is definitely appealing and seems to work better than fixed betas, and does also bring the KL divergence to the desired I_c.\", \"The technique is fairly simple to implement and can be combined with other stabilization techniques such as gradient penalties on the discriminator.\"], \"cons\": [\"In my view, the novelty of the approach is somewhat limited, as it seems like a straightforward application of the VIB from [1] for discriminators in adversarial learning, with the difference of using an adaptive beta.\", \"I think the beta-VAE [2] paper is definitely related to this paper and to the paper on which it is based [1] and should thus be cited, as the authors use a similar regularization technique, albeit from a different perspective, that restricts I(X,Z) in an auto-encoding task.\", \"I think the content of batches used to regularize E(z|x) w.r.t. the KL divergence should be clarified, as the description of p^tilde \\u201cbeing a mixture of the target distribution and the generator\\u201d (Section 4) leaves the implementation details ambiguous. I think batches containing samples from both distributions can cause problems, as the expectation of the KL divergence on a batch can be low even if the samples from both distributions are projected into different parts of the manifold. This makes me think batches are separated? Either way, this should be more clearly stated in the text.\", \"The last results for the \\u2018traditional\\u2019 GAN+VIB show that in this case, gradient penalty (GP) alone outperforms the proposed VGAN, and that both can be combined for best results. 
I thus wonder if the results in all other experiments could show similar trends if GP had been tested in these cases as well. In the imitation learning task, authors compare with instance noise, but not with GP, which for me are both related to VIB in what they try to accomplish. Was GP tested in Imitation Learning/Inverse RL ? Was it better? Could it still be combined with VIB for better results?\", \"In the saliency map of Figure 5, I\\u2019m unclear as to what the colors represent (especially on the GAIL side). I doubt that this is simply due to the colormap used, but this colormap should be presented.\", \"Overall, I think this is an interesting and relevant paper that I am very likely to suggest to peers working on adversarial learning, and should therefore be presented. I think the limited novelty is counterbalanced by the quality of empirical analysis. Some clarity issues and missing citations should be easy to correct. I appreciate the comparison and combination with a competitive method (Gradient Penalty) in Section 5.3, but I wish similar results were present in the other experiments, in order to inform readers if, in these cases as well, combining VIB with GP leads to the best performance.\", \"[1] Deep Variational Information Bottleneck, (Alemi et al. 2017)\", \"[2] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework (Higgins et al. 2017)\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJgvg30ctX | Information Regularized Neural Networks | [
"Tianchen Zhao",
"Dejiao Zhang",
"Zeyu Sun",
"Honglak Lee"
] | We formulate an information-based optimization problem for supervised classification. For invertible neural networks, the control of these information terms is passed down to the latent features and parameter matrix in the last fully connected layer, given that mutual information is invariant under invertible maps. We propose an objective function and prove that it solves the optimization problem. Our framework allows us to learn latent features in a more interpretable form while improving the classification performance. We perform extensive quantitative and qualitative experiments in comparison with the existing state-of-the-art classification models. | [
"supervised classification",
"information theory",
"deep learning",
"regularization"
] | https://openreview.net/pdf?id=BJgvg30ctX | https://openreview.net/forum?id=BJgvg30ctX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylDJXSygE",
"SJgNfs_S1V",
"HJlaLcd4kE",
"BylGzSm11E",
"SJe2kr7ykE",
"BJeP6YONRQ",
"r1xz-muEAQ",
"SJgsIed4Rm",
"B1l4KIw9nX",
"rJx5A-7d2Q",
"BygrQ9H8hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544667870939,
1544026892146,
1543961172986,
1543611658445,
1543611619853,
1542912447385,
1542910714045,
1542910035151,
1541203580069,
1541054929828,
1540934172822
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1082/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1082/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1082/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1082/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1082/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1082/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1082/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1082/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1082/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1082/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1082/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes an approach to regularizing classifiers based on invertible networks using concepts from the information bottleneck theory. Because mutual information is invariant under invertible maps, the regularizer only considers the latent representation produced by the last hidden layer in the network and the network parameters that transform that representation into a classification decision. This leads to a combined \\u21131 regularization on the final weights, W, and \\u21132 regularization on W^{T} F(x), where F(x) is the latent representation produced by the last hidden layer. Experiments on CIFAR-100 image classification show that the proposed regularization can improve test performance. The reviewers liked the theoretical analysis, especially proposition 2.1 and its proof, but even after discussion and revision wanted a more careful empirical comparison to established forms of regularization to establish that the proposed approach has practical merit. The authors are encouraged to continue this line of research, building on the fruitful discussions they had with the reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Fascinating perspective with promising initial results, but needs more careful comparison to other regularization methods\"}",
"{\"title\": \"Response to Reviewer 1 Continued\", \"comment\": \"We thank the reviewer for the update!\\n\\nThe main take-away of our work for practitioners should be a design principle for neural networks. We challenge the point of view that the merit of \\\"depth\\\" of neural networks is filtering out irrelevant information layer by layer. The success of ResNet (He et al., 2015) and DenseNet (Huang et al., 2017) clearly conveys the message that neural networks need to preserve more information from the input throughout the layers to perform well.\\n\\nHowever it's somehow counter-intuitive that neural networks that preserve all information (including possibly the irrelevant information) can generalize well. To understand this, we formulate an explicit objective that keeps only the relevant information and take a theoretical approach to draw a connection to the loss objectives that people are familiar with. We see that the product w^TF(X) is crucial for good generalization of invertible neural networks. In addition in Appendix C, we take another point of view to understand why invertibility is beneficial for classification.\\n\\nWe hope our work clarifies some mysteries of the class of invertible neural networks, which we believe is a good design principle for future work.\"}",
"{\"title\": \"Reply\", \"comment\": \"It is not clear to me if the proposed regularizer has practical merit, it rather seems like an alternative approach to a more or less well-studied problem. On the other hand, I thank the authors for the updates, the new results indicate the observed effects are consistent with other regularization techniques and across models.\\n\\nI updated my score, but not to clear acceptance due to the aforementioned reason.\"}",
"{\"title\": \"Response to Reviewer 3 Continued(2)\", \"comment\": \"2. Yes, the variation in outputs also depends on the weights. We would like to argue that the variation has less effect on the inner product w^TF(X). Suppose a random perturbation of the image leads to a random variation \\epsilon in the feature F(X). On one hand, regularizing w yields a smaller perturbation w^T\\epsilon of the product. On the other hand, since the perturbation does not affect w, we can analyze the statistics of the perturbed feature F(X)+\\epsilon. We also plot the standard deviation in Figure 2 for each feature entry of digit 9 and observe that the feature entries retain relatively high values even when the deviation is taken into consideration, which is not the case for InvNet. We conclude that, due to this high contrast of mean values among feature entries, the classifier can still make the correct prediction under variations of F(X).\"}",
"{\"title\": \"Response to Reviewer 3 Continued(1)\", \"comment\": \"1.1. Similarity of feature spaces\\n\\nWe calculate the principal components of 1000 features of each digit. To measure the similarity among the subspaces generated by the top 5 principal components of each digit, we use the following \\\"metric\\\":\\nlet U and V be 100*5 matrices storing the principal components of features of class i and j, and define the projection matrices onto the subspaces as\\nP_U = U*(U^T*U)^{-1}*U^T, P_V = V*(V^T*V)^{-1}*V^T\\nThen if x is a vector lying in the intersection of the spaces generated by U and V, x should be invariant under the projections:\\nx = P_U*x, x = P_V*x\\nIt follows that x is an eigenvector of the matrix P_U*P_V with eigenvalue precisely 1.\\nIn fact the eigenvalues of P_U*P_V range in [0,1].\\n\\nWe use the SUM OF EIGENVALUES of P_U*P_V to measure the similarity between the subspaces generated by the columns of U and V.\\nA larger sum indicates more similarity.\\n\\nWe choose not to use U^T*V as a measure because in high dimensional space, vectors tend to be orthogonal to each other, so the resulting product does not give too much information.\", \"we_show_result_as_follows\": \"InvNet\\n 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Most Similar Digit\\n0| * 0.16 0.93 0.50 0.36 0.85 0.81 0.54 0.66 0.68| 2,5\\n1| 0.16 * 0.64 0.36 0.68 0.33 0.37 0.53 0.55 0.47| 2,4\\n2| 0.93 0.64 * 1.15 0.64 0.31 0.77 0.91 0.87 0.54| 0,3\\n3| 0.50 0.36 1.15 * 0.31 1.44 0.35 0.85 0.93 0.62| 2,5\\n4| 0.35 0.68 0.64 0.32 * 0.47 0.83 0.84 0.78 1.66| 7,9\\n5| 0.85 0.33 0.31 1.44 0.47 * 0.71 0.68 0.94 0.80| 3,8\\n6| 0.81 0.37 0.77 0.35 0.83 0.71 * 0.29 0.45 0.31| 0,4\\n7| 0.54 0.53 0.91 0.85 0.84 0.68 0.29 * 0.71 1.02| 2,9\\n8| 0.66 0.55 0.87 0.93 0.78 0.94 0.45 0.71 * 1.11| 5,9\\n9| 0.68 0.47 0.54 0.62 1.66 0.80 0.31 1.02 1.11 * | 4,8\", \"mean\": \"1.07, Std:1.35\\n\\nOur regularization does not improve the separation among the feature spaces of different digits, as the mean and standard deviation of the 
results do not differ too much between InvNet and RegInvNet. But after a closer inspection, we observe that the features learned by our regularization give information on the similarity between features that is closer to human conception. For example, if we compare the Most Similar Digit results from RegInvNet and InvNet, we find the following, which matches our intuition:\\n- 0 is more similar to 6,8 compared to 2,5\\n- 1 is more similar to 7 compared to 4\\n- 3 is more similar to 8 compared to 5\\n- 6 is more similar to 5 compared to 4\\n- 8 is more similar to 3 compared to 9\\n- 9 is more similar to 7 compared to 8\\n\\nOur explanation for this is that the features learned by the model should encode information about the relationships among digits of different classes, rather than lying in a completely different space away from the others. This idea is similar to the motivation of distillation (Hinton et al., 2015).\\n\\n-------------------------------------------------------------------------------------------------------\\n1.2. 
Difference in predictions\\n\\nAlthough the features learned from our regularizer alone share meaningful information, when they are multiplied by our classifier w, the outputs are clearly distinguished.\", \"we_perform_the_dot_product_between_principle_components_and_weights_and_find_the_follows\": \"-maximum of the dot product between all principal components and the normalized (i+1)th column of w for each digit i\\nInvNet\\n 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |\\n0.37 0.44 0.41 0.47 0.48 0.48 0.62 0.35 0.44 0.35\\n\\nRegInvNet\\n 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |\\n0.93 0.95 0.90 0.92 0.88 0.93 0.95 0.94 0.91 0.93\\n\\n-maximum of the dot product between all principal components of digit 0/9 and the normalized 10th/1st column of w\\nInvNet\\n0.40 / 0.30\\n\\nRegInvNet\\n0.29 / 0.45\\n\\nWe see that our classifier can find (almost) precisely the direction of the important principal component (we observe it is either the first or the second) of the corresponding features. The interaction between the classifier for digit 0/9 and the principal components of the features of digit 9/0 is similar between InvNet and RegInvNet.\\n\\nOur conclusion is that our features are more meaningful and our classifier is sharper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the thoughts and comments! We provide detailed responses to each comment below.\\n\\n1. \\\"comment on AM-GM inequality for L2 loss\\\"\\n\\nThe naive l2 regularization only involves regularizing the parameters w. Our derivation of the mutual information objective shows that feature regularization on F(X) is also necessary for good generalization of deep neural networks. We observe that naive regularization on w results in a blow up of the norm of F(X), meaning the neural network can absorb the regularization effect by upscaling the feature. We show in Figure 4 that our regularization can hold the magnitude of F(X) while compressing w.\\n\\nAs mentioned in (Neyshabur et al., 2015), neural networks are equivalent up to some scaling factors passed among layers. As a consequence, there exist unbalanced neural networks with large l2 weights that are equivalent to those with small l2 weights. So the common belief that l2 regularization can \\\"simplify\\\" a model does not necessarily make sense for deep models.\\n\\n\\n2. \\\"pick a simple, feedforward model or convNet, see the performance and then compare it with the regularizer\\\"\\n\\nWe perform our regularization on a simple structure named \\\"34-layer plain\\\" in (He et al., 2015) and the result on CIFAR10 is shown in Table 2. There is some marginal improvement on this network, but it is less robust to the choice of our hyperparameters.\\n\\nOur theoretical framework is built upon the assumption that the models have a decent invertibility property. We emphasize that invertible neural networks are an important class of deep networks by citing the related work in Section 3 and proving in Appendix C that the lower bound for the classification error is itself lower bounded by a constant, which is attained if the network is invertible. 
We perform our experiments on ResNet because we observe it is composed of blocks with an intrinsically invertible functional form I+L, so we expect it to have a good invertibility property. We adopt the suggestion from Reviewer 1 that we should also reproduce all our experimental results on more theoretically grounded invertible neural networks such as i-RevNet.\\n\\n\\n3. \\\"sec 4.3: How does this claim not apply to all deep learning models, regardless of the penalizations you propose\\\"\\n\\nIn the machine learning literature, l2 regularization only makes sense for shallow structures like logistic regression and SVM. For logistic regression, we can interpret it with heuristics from Occam's razor or, from a probabilistic point of view, a Gaussian prior. For SVM it means a larger margin for the support features. But these interpretations become less intuitive for deep learning as the interactions between parameters become extremely complicated. In our work we reduce the deep structure into a linear one with the MI objective and invertibility of ResNet, and formally justify the use of l2 regularization on both the features F(X) and the parameters w in the last layer; under this setting we interpret the meaning of l1/l2 regularization for deep models from classical perspectives in Section 4.3.\\n\\nWe fix our description in the revised version.\\n\\n\\n4. \\\"I don\\u2019t see how fig (2) (L) is \\u201croughly Gaussian\\u201d\\\"\\n\\\"Also for fig (2, R): the coefficients are not sparse as you claim\\\"\\n\\nWe agree that the use of the term \\\"roughly Gaussian\\\" is imprecise. We have plotted the histogram of the values of the feature entries of digit 9 in Appendix H of the revised version.\\n\\nConsider an image perturbed by some Gaussian noise. If the noise were to make an impact on the output of the model, it must modify the values of the feature entries to which the model assigns high weight values. 
But for our regularized model, the number of weights with high values is smaller compared to that of the normal model, so it is harder for random noise to make a huge impact on the output.\\n\\nWe fix our use of terms in the revised version.\\n\\n\\n5. \\\"\\u201cMutual information is bounded \\u2026 correct them\\u201d: Can you provide some formulas for this and make this concrete\\\"\\n\\nWe explain in detail in Appendix I of the revised version. Neural networks are known to be occasionally over-confident in predictions that are in fact wrong. In particular we want to punish logits with a large absolute value |w^TF(X)| but the wrong \\\"sign\\\" (in the binary case). We show that the information objective does not provide a good gradient to fix this problem.\\n\\n\\n6. \\\"prop 2.1 and 2.2: can you define what you mean by \\u201cempirical version\\u201d\\\"\\n\\nWe have added the necessary definitions to our propositions stated in the main text of the revised version. In general, by empirical version we mean a Monte Carlo approximation of the population quantities. We show in our proof that if the sample size N is large enough, our Monte Carlo approximation is accurate with high probability.\\n\\n\\nWe have fixed the citation and definition of MI as recommended in the revised version.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the comments! We provide the detailed response to questions as below.\\n\\n1. \\\"an entry with small feature mean should still be given high w_10 value if for all other 9 digits the same entry has even smaller feature mean\\\"\\n\\nNeural networks should assign high w_10 values to feature entries of 9 with high values, so their product can contribute to the logits significantly. We agree that neural networks may also assign high w_10 values to feature entries of 9 with small absolute value but relatively large compared with the same feature entries of other digits; in this case we expect w_10 should hold even higher value so the product w^TF(X) contributes significantly to the logits.\\n\\nOur derivation in (6) shows that we should regularize the inner product (w^TF(X))_i for class i, which is the sum of product from each entry. It is possible that for one entry we have small feature value but large classifier value, but if the product of them is relatively small compared to other entry product then we will not consider it as an important feature entry to classification.\\n\\nWe have reproduced the feature statistics plot of all digits for InvNet on MNIST in Appendix F of the revised version. We observe that each digits have their specific entries with high value assigned to both weights and feature means.\\n\\n\\n2. \\\"how the proposed model tends to overlook irrelevant information\\\"\\n\\nConsider an image perturbed by some Gaussian noise, if the noise were to make an impact to the output of the model, it must significantly modify the values of feature entries where the model assigns the corresponding weights with high values. 
But for our regularized model, the number of weights with high values is smaller compared to that of the normal model, so it is harder for random noise to make a huge impact on the output (unless this noise is maliciously designed).\\n\\nOur belief is that under our regularization, the weight w is shaped into a \\\"sparse\\\" form adapted to the particular input data distribution, so it is hard for any irrelevant information induced on the data to make an impact on the model's output.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the insightful suggestions!\\n\\n1. We have implemented i-RevNet (Jacobsen et al., 2018) with our regularization, CP - confidence penalizing with entropy (Pereyra et al., 2017) and LS - label smoothing (Szegedy et al., 2015) over some choices of hyperparameters. The performance results on CIFAR100 over 5 trials are provided below. We also reproduce other experiments on i-RevNet in Appendix G of the revised version.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\n | baseline | Our Regularization | CP | LS |\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\n | | alpha1=0,alpha2=1e-3 | alpha1=1e-6,alpha2=1e-3 | beta=0.1 | beta=0.01 | eps=0.01 | eps=0.1 |\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\n mean | 75.25 | 75.60 | 75.59 | 75.64 | 75.33 | 75.35 | 75.87 |\\n std | 0.45 | 0.33 | 0.42 | 0.29 | 0.42 | 0.43 | 0.22 |\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nThere are improvements in performance for all three methods.\\n\\nWe don't claim that our regularizer is the state of the art that uniformly outperforms all other existing regularizers. Our theoretical result gives justification to any regularizers that effectively control the norm of w^TF(X), which includes our regularizer, weight normalization (Salimans et al., 2016), batch normalization (Ioffe et al., 2015), etc. 
The optimal set of hyperparameters depends on the architecture of the model, but for a reasonable choice of hyperparameters (the additional loss introduced by the regularizer is comparable with the classification loss) we find our regularizer is always effective on large-scale models that tend to overfit.\\n\\nAs mentioned in (Neyshabur et al., 2015), neural networks are equivalent up to some scaling factors passed among layers. As a consequence, there exist unbalanced neural networks with large l2 weights that are equivalent to those with small l2 weights. So the common belief that l2 regularization can \\\"simplify\\\" a model does not necessarily make sense for deep models. One of our main contributions is to interpret the use of l1&l2 regularization in the deep learning setting. We reduce the deep structure into a linear one with the MI objective and the invertibility of ResNet, and formally justify the use of l2 regularization on both the features F(X) and the parameters w in the last layer, with an interpretation of compressing irrelevant information explained in our proposed information optimization problem.\\n\\nWe derive a theoretically grounded regularizer from our proposed information optimization problem. Our regularizer may seem unusual as it involves the feature F(X). We experimentally verify that regularization on F(X) is necessary for deep learning. In addition, we observe that naive regularization on w results in a blow-up of the norm of F(X), meaning the neural network can absorb the regularization effect by upscaling the feature. We show in Figure 4 that our regularization can hold the magnitude of F(X) while compressing w.\\n\\nWe believe we provide a new perspective for understanding regularization of deep models.\\n\\n\\n2. We have fixed the citation format in the revised version.\"}",
"{\"title\": \"Theoretically grounded regularizer that penalizes confident predictions, experimental section needs to be improved\", \"review\": \"The authors propose a regularizer placed on the final linear layer of invertible networks that penalizes confident predictions, leading to better generalization. The algorithm is theoretically grounded and even though SOTA networks do not meet some theoretical requirements in practice, it seems to be effective.\\n\\nThe ideas presented are interesting, but the paper is confusing at times and some motivations seem hand-wavy (see below).\\n\\nEven though penalizing overly confident predictions is an important topic, it has been attacked by various approaches in the past. It is not clear how the proposed method empirically compares to other approaches from the literature. On the theoretical side, proposition 2.1 and its proof are the main contribution. This very interesting observation could potentially be very useful in many tasks and shows once again why invertible neural networks are an important class of deep networks.\", \"main_concerns\": \"The authors do not compare their method to other approaches from the literature with similar goals, such as [1]. Therefore, it is hard to judge the performance of the proposed regularizer.\\n\\nThe authors claim that their InvNet is approximately invertible but there is no guarantee for this, making empirical conclusions unclear. The experiments would be more conclusive if a network that is fully invertible by construction is used. Such networks exist and perform on par with ResNets [2], so there is no reason not to use them. This would remove the need for analysis or discussion of this matter, as this issue clutters the main contribution and makes the claims rather fuzzy right now.\\n\\nMinor\\n\\n- Why are citations displayed in blue? 
This does not seem to be the ICLR formatting standard.\\n\\n[1] Pereyra et al., \\\"Regularizing neural networks by penalizing confident output distributions.\\\"\\n[2] Jacobsen et al., \\\"i-RevNet: Deep Invertible Networks\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper has encouraging experimental results and the formulation is plausible, but I'm confused about how the proposed model tends to overlook irrelevant information.\", \"review\": \"This paper proposed to decompose the parameters into an invertible feature map F and a linear transformation w in the last layer. They aim to maximize mutual information I(Y, \\\\hat{Y}) while constraining irrelevant information, which further transfers to regularization on F and w. The authors also spend pages explaining how the hyper-parameters can be chosen.\", \"comments\": \"1. The experimental results showed a noticeable improvement on CIFAR-100 and are fairly robust to alpha_2.\\n2. The formulation seems plausible. \\n3. For Figure 2 and the discussion in Section 4.2.1, I'm less convinced that the entries with high feature mean are 'relevant' and the others are not by looking at just digit 9 samples. For example, an entry with a small feature mean should still be given a high w_10 value if for all other 9 digits the same entry has an even smaller feature mean. \\n\\n--------UPDATE AFTER READING THE AUTHORS' COMMENTS-----------\\n1. Appendix F lacks explanation. So I'm going to say what I meant in detail. \\n\\nIn order to achieve high accuracy the model must assign high values to some entries of the weights to separate the different classes. w_10 is a linear separator, not necessarily entry-wise (unless the features are independent). \\nI would take 1k features of each class and compute their principal components. Check if these components are different from class to class and plot the dot product of components and weights. If the following happens I would be more convinced:\\n1) principal components of digit 0 and digit 9 differ a lot AND \\n2) w_0 weights components of digit 0 higher but weights those of digit 9 lower\\n\\n2. \\\"But for our regularized model, the number of weights with high values is smaller compared to that of the normal model ...\\\"\\nI'm not convinced.
When perturbed by Gaussian noise, the variance of the output does not necessarily depend on sparsity. In fact, it depends on the norm of the weights.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review (updated after reading other reviews and other responses)\", \"review\": \"In this paper, the authors propose to train a model from the point of view of maximizing mutual information between the predictions and the true outputs, with a regularization term that minimizes irrelevant information while learning. They show that the objective can be minimized by looking to make the final layer vectors be as uncorrelated as possible to the final layer representations, and simplify the same by applying Holder\\u2019s inequality to make the optimization tractable. They also apply an L1 penalty on the final layer. Experiments on CIFAR and MNIST show that using their regularizer to train DNN models yields gains in performance. The presence of the L1 penalty also makes the results more interpretable (to the extent possible by looking at a subset of features in the last layer of a DNN).\", \"comments\": [\"Meta Point: To really see that the regularization framework you\\u2019re proposing is good, why not just pick a simple, feedforward model or convNet, see the performance and then compare it with the regularizer you\\u2019re proposing? That will help hit the point home.\", \"Page 1: before jumping to equations (1) and (2), please formally define Mutual Information. The actual definition is much later in the text, but it\\u2019s better to define it first.\", \"Beyond referring the user to section 3 on Page 1, please also mention a couple of key references in the appropriate locations.\", \"Page 3 paragraph 2: \\u201cMutual information is bounded \\u2026 correct them\\u201d : Can you provide some formulas for this and make this concrete? Or perhaps provide some references? This line is vague.\", \"prop 2.1 and 2.2: can you define what you mean by \\u201cempirical version\\u201d? Again, it\\u2019s probably good to have these terms crisply defined before using them.\", \"eqn (6) is interesting. Holder\\u2019s inequality gives you the product terms.
Then you can also apply the AM-GM inequality, and get a sum. So then at the end of it all, you\\u2019re left with the standard elastic net penalty and not the product form. In that case, aren\\u2019t we back to just the usual regularization strategy? And in which case, should I interpret the results you have in sec 4 as \\u201cusing L1 penalties with L2 is good\\u201d ?\", \"To the point above, I guess one difference after the AM-GM step is that you will not have a squared L2 norm, but just L2. This is reminiscent of linear models where they use L2 loss instead of squared L2 loss. But on the penalty, squaring just adds smoothness. Can you comment on this?\", \"sec 4.2.1: I don\\u2019t see how fig (2) (L) is \\u201croughly Gaussian\\u201d. Can you explain? Maybe plot the histogram? Also for fig (2, R): the coefficients are approximately sparse. It\\u2019s not sparse as you claim since there are almost no zeros in the coefficients.\", \"I don\\u2019t get the point of sec 4.3: How does this claim not apply to all deep learning models, regardless of the penalizations you propose?\"], \"edit\": \"I have read the responses and the other reviews. The authors have addressed the few major points I had. I still think there are a few gaps that need to be addressed (as pointed by the other reviewers)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SJl8gnAqtX | Prob2Vec: Mathematical Semantic Embedding for Problem Retrieval in Adaptive Tutoring | [
"Du Su",
"Ali Yekkehkhany",
"Yi Lu",
"Wenmiao Lu"
] | We propose a new application of embedding techniques to problem retrieval in adaptive tutoring. The objective is to retrieve problems similar in mathematical concepts. There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts. Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships. Second, it is difficult for humans to determine a similarity score consistent across a large enough training set. We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of an abstraction and an embedding step. Prob2Vec achieves 96.88\% accuracy on a problem similarity test, in contrast to 75\% from directly applying state-of-the-art sentence embedding methods. It is surprising that Prob2Vec is able to distinguish very fine-grained differences among problems, an ability humans need time and effort to acquire. In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right. It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate. We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set. | [
"personalized learning",
"e-learning",
"text embedding",
"Skip-gram",
"imbalanced data set",
"data level classification methods"
] | https://openreview.net/pdf?id=SJl8gnAqtX | https://openreview.net/forum?id=SJl8gnAqtX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HyglQStBxV",
"r1eU8lgW67",
"B1l3_3TepQ",
"S1etFWjga7",
"BkeXRoYqhm",
"Syxz3gT_2Q",
"Skxyj9mRom",
"r1lDqoAiiX",
"Skgof-pjiX",
"HkeZu1ho9Q",
"B1gv_Bz_cX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1545078039720,
1541632078336,
1541622900267,
1541611904722,
1541213131122,
1541095594156,
1540401814629,
1540250511103,
1540243731354,
1539190632761,
1538954606977
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1081/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1081/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1081/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1081/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1081/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1081/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1081/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1081/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1081/Authors"
],
[
"~Mohammadamir_Kavousi1"
]
],
"structured_content_str": [
"{\"metareview\": \"I tend to agree with reviewers. This is a bit more of an applied type of work and does not lead to new insights in learning representations.\\nLack of technical novelty\\nDataset too small\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Lack of technical novelty\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"1- We briefly mentioned the way problem embedding with a similarity metric is used in the recommendation system in this work, but here is a more detailed explanation. The most similar problem is not necessarily recommended to a student. On a high level, if a student performs well on problems, we assume he/she performs well on similar problems as well, so we recommend a dissimilar problem and vice versa. More specifically, we project the performance of students on problems they solved onto the problems that they have not solved. This way, we have an evaluation of the performance of students on unseen problems. A problem is recommended that is within the capacity of students, close to their boundary, to help them learn, and at the same time recommendations are made so that students practice all the necessary concepts.\\nAn evaluation on real students is presented in part 2 of the comment titled \\u201cResponse to questions about Prob2Vec\\u201d on this page, and we observed that similar problems are more likely to be solved correctly at the same time or incorrectly at the same time.\\nThe math expressions are not ignored in our proposed Prob2Vec method. In the example given in the last paragraph on page 3, math expressions are used to extract the concept n-choose-k. We use both math expressions and text to label problems with appropriate concepts.\\n\\n2- Prob2Vec only uses expert knowledge for the rule-based concept extractor, but does not use selected informative words. The effort put into the rule-based concept extractor is negligible compared to the effort needed to annotate all problems with their corresponding concepts. We both annotated all problems manually and used the rule-based concept extractor for annotation. With the former method, we observed 100% accuracy in the similarity detection test, and with the latter method we observed 96.88% accuracy.
However, the rule-based concept extractor needs much less manual effort than manual problem annotation and is capable of providing us with the relatively high level of accuracy we need in our application. Note that our method is scalable as long as problems are in the same domain, since the rule-based concept extractor is automated for a single domain; for the case that problems span many different domains, it is the natural complexity of the data set that requires a more sophisticated rule-based concept extractor. Furthermore, in most realistic cases for education purposes, problems span a single domain, not multiple ones.\\n\\nWe would also like to draw your attention to the negative pre-training method proposed for training on imbalanced data sets. You may want to refer to part 2 of the comment titled \\u201cResponse to Question on Negative Pre-Training\\u201d and part 1 of our response to reviewer2.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"1- The idea of using concepts to represent a problem is simple, but using it along with neural-network-based embedding gives us the opportunity to gain concept continuity, as discussed in the last paragraph on page 7 and table 2, which is an active field of research in education.\\n\\nThe focus of this work is on problem embedding and its application in a recommendation system that uses problem embedding to project students\\u2019 performance on the problems they solved onto the problems that they have not solved yet. Using the evaluation on unseen problems, a problem is recommended that is within the capacity of students, close to their boundary, to help them learn, and at the same time we cover all the concepts necessary for them to learn. In the meantime, we came up with the interesting idea of negative pre-training for training with imbalanced data, tested our hypothesis, and included it in the paper. Due to the space limit, we did not include the literature review and the comparison of other methods in terms of memory use and training complexity, but you can find them in the response to a previous comment below titled \\u201cResponse to Question on Negative Pre-Training\\u201d on this page. We can include the literature review for training on imbalanced data sets, as well as the comparison of other methods with negative pre-training in terms of memory use and training complexity, in the final version.
In summary, a) oversampling suffers severely from over-fitting, b) the SMOTE method, which generates synthetic data samples, is not feasible in word space, so the generated synthetic data (which are mathematical problems) are of no use for our training purpose, c) borderline-SMOTE suffers both from the same issue as SMOTE and from the high complexity of finding the pairwise distance between all data samples, which is a burden for high-dimensional data, and d) hybrid methods need m >> 1 weak learners, in contrast to negative pre-training, which uses a single learner. Memory use and training time are an issue for the hybrid method when the weak learners are deep neural networks with too many parameters. We are currently running a broader experiment for negative pre-training on other data sets to gain more insight into it, but for the task proposed in this work, it outperforms one-shot learning, which cannot be said to be the state of the art but is a common practice. There is no notion of state-of-the-art in training on imbalanced data sets since, to the best of our knowledge, there is no method that outperforms all the others, and the performance of different methods depends more on the nature of the data set.\\n\\n2- The data set being small is the nature of the application, since creating mathematical problems is a creative process, so it is hard to have a very big data set. The Prob2Vec method performs well on this not particularly big data set, which is our goal, but if we have a bigger data set (as we have right now, with more than 2400 problems), Prob2Vec may perform even better, since with more data we can have more precise concept and problem embeddings.\\n\\n3- Thanks for your suggestion.\\n\\n4- It is difficult for humans to determine a similarity score consistent across a large enough training set, so it is not feasible to simply apply supervised methods to learn a similarity score for problems.
Even if problem-problem similarity annotation were feasible, a lot of effort would have to go into the annotation, which is not scalable.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"1- There are two reasons that concept and problem embedding are performed in this work. Considering concept continuity is an important matter in education. With concept embedding, concept continuity can be reached, as discussed in the last paragraph on page 7, and some other examples are given in table 2. With just the most sophisticated concept extractor, concept continuity cannot be retrieved. Furthermore, problem embedding is used by the recommender system to project the performance of students on the problems they solved onto other problems that they have not solved. This way, by evaluating their ability to solve unseen problems, we have an idea of which problems should be recommended to them and which should not; we recommend problems at the boundary of their capacity, not way beyond it, and in a way that covers all the concepts necessary for students to learn. We have observed interesting patterns, e.g. similar problems are more likely to be solved correctly at the same time or incorrectly at the same time. Note that with just the concepts of problems, which are not in numerical form, performance projection may not be feasible, and there is a need for other methods like embedding.\\n\\n2- The data size being small is just the nature of the application. Creating new problems is a creative process and is not easy; given the insight we have into the application, the data size seems to suffice. Furthermore, since Prob2Vec performs well on a data set that is not particularly big, it would do well on big data sets, since the more data we have, the more precise the concept and problem embeddings are. The \\\"simple but tough-to-beat\\\" baseline proposed by Arora et al. is the state of the art in unsupervised sentence embedding, and we compared our algorithm with it.
Please let us know if we missed anything.\\n\\nPre-training is a common practice in transfer learning (one-shot learning). The objective function does not differ from the objective function used for post training. Training on only negative samples, with fewer training epochs than in post training, just adjusts the weights of the neural network to a better starting point. If the number of training epochs in pre-training is relatively small compared to post training, then due to the curse of dimensionality, the warm start for post training results in better performance for the NN classifier. To make it clearer what it means to train the neural network on a pure set of negative data samples, think about batch training. It is unlikely, but possible, that a batch has only negative or only positive samples. In the pre-training phase of our method, we intentionally used a pure set of negative samples (with fewer training epochs) to have a warm start for post training. As table 3 shows, our proposed method outperforms one-shot learning. Please look at part 1 of our response to reviewer2 and part 2 of the comment titled \\\"Response to Question on Negative Pre-Training\\\" below.\"}",
"{\"title\": \"small technical contribution\", \"review\": \"The paper proposed a hierarchical framework for problem embedding and intended to apply it to adaptive tutoring. The system first used a rule-based method to extract the concepts for problems and then learned the concept embeddings and used them for problem representation. In addition, the paper further proposed negative pre-training for training with imbalanced data sets to decrease false negatives and positives. The methods are compared with some other word-embedding based methods and showed 100% accuracy in a similarity detection test on a very small dataset.\\n\\nIn sum, the paper has a very good application but is not good enough as a research paper. Some of the problems are listed as follows:\\n1.\\tLack of technical novelty. It seems to me to be just a combination of several mature techniques. I do not see much insight into the problem. For example, if the rule-based concept extractor can already extract concepts very well, the \\u201cproblem retrieval\\u201d should be solved by searching with the concepts as queries. Why should we use embedding to compare the similarity? Also, the title of the paper is about problem retrieval but the experiments are about similarity comparison; there seems to be a gap. \\n2.\\tData size is too small, and the baselines are not state-of-the-art. There are some unsupervised sentence embedding methods other than the word-embedding based models. \\nSome clarity issues. For example, Page 6. \\u201cis pre-trained on a pure set of negative samples\\u201d\\u2014 what is the objective function? How to train on only negative samples?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Proposes a method for mathematical problem embedding but the contribution is not strong\", \"review\": \"This paper proposes a method for mathematical problem embedding, which firstly decomposes problems into concepts by an abstraction step and then trains a skip-gram model to learn concept embeddings. A problem can be represented as the average of the embeddings of the concepts appearing in it. To handle the imbalanced dataset, a negative pre-training method is proposed to decrease false negatives and false positives. Experimental results show that the proposed method works much better than baselines in similar problem detection, on an undergraduate probability data set.\", \"strong_points\": \"(1)\\tThe idea of decomposing problems into concepts is interesting and also makes sense. \\n(2)\\tThe training method for imbalanced datasets is impressive.\", \"concerns_or_suggestions\": \"1.\\tThe main idea of using concepts to represent a problem is quite simple and straightforward. The contribution of this paper seems more on the training method for imbalanced data sets. But there are no comparisons between the proposed training method and previous related works. Actually, imbalanced data sets are common in machine learning problems and there are many related works. The comparisons are also absent in experiments.\\n2.\\tThe experimental data set is too small, with only 635 problems. It is difficult to judge the performance of the proposed model based on such a small data set. \\n3.\\tThe proposed method, which decomposes a problem into multiple concepts, looks general for many problem settings. For example, representing a movie or news article by tags or topics.
In this way, the proposed method can be tested in a broader domain and on larger datasets.\\n4.\\tFor the final purpose, comparing problem similarity, I am wondering what the result would be if we trained a supervised model based on problem-problem similarity labels?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to questions about Prob2Vec\", \"comment\": \"1-\\tWe believe that it is easier to keep consistency in concept labeling than in similarity annotation for a set of training problems. Furthermore, concept labeling in Prob2Vec is automated by a rule-based concept extractor, where the rules for concept extraction are relatively easy for experts to find. However, similarity annotation requires much more expert effort to prepare a relatively large training data set. In general, it is difficult to determine a similarity score consistent across a large enough training set, so it is not feasible to simply apply supervised methods to learn a similarity score for problems.\\n\\n2-\\tWe divided the probability course into 26 modules, where each module is on a specific topic. About 300 students who practiced on our platform were asked about the performance of the recommendation system after they practiced each module (some students practiced a module more than once at will). Hence, we got around 7,000 pieces of feedback, about 76% of which were positive on the performance of the recommender system. Furthermore, we observed that similar problems are likely to be solved correctly at the same time or incorrectly at the same time by students.\"}",
"{\"comment\": \"My background on natural language processing suggests that you could\\u2019ve also annotated similarity among a set of training problems and trained a supervised machine learning model to predict the similarity of the unseen problems in the test set. Do you have any ideas if this can result in a better or comparable performance to Prob2Vec in your similarity detection test?\\n\\nHave you surveyed the performance of your proposed recommendation system based on Prob2Vec and fluency projection on problems based on their similarity scores to see how it works besides having good performance on the similarity detection test?\", \"title\": \"General questions about Prob2Vec\"}",
"{\"title\": \"Review of \\\"Prob2Vec: Mathematical Semantic Embedding for Problem Retrieval in Adaptive Tutoring\\\"\", \"review\": \"This paper proposes a new application of embedding techniques for mathematical problem retrieval in adaptive tutoring. The proposed method performs much better than baseline sentence embedding methods. Another contribution is on using negative pre-training to deal with an imbalanced training dataset.\\n\\nTo me this paper is just not good enough - the method essentially i) uses \\\"a professor and two teaching assistants\\\" to build a \\\"rule-based concept extractor\\\" for problems, then ii) maps problems into this \\\"concept space\\\" and simply treats them as words. There are several problems with this approach. \\n\\nFirst, doing so does not touch the core of the proposed application. For tutoring applications, the most important thing is to select a problem that can help students improve; even if you can indeed select a problem that is the most similar to another problem, is it the best one to show a student? There are no evaluations on real students in the paper. Moreover, the main difference between math problems and other problems is that there are math expressions; I do not think that using words/concept labels only is enough without touching on the math expressions.\\n\\nSecond, the proposed method does not sound scalable - the use of a professor and two teaching assistants to construct the concept extractor, and the use of an expert TA to select a small set of informative words. I am not sure how this will generalize to a larger number of problems spanning many different domains.\\n\\nI also had a hard time going through the paper - there aren't many details. Section 2.1 is where the method is proposed, yet most of the descriptions there are unclear.
Without these details it is impossible to judge the novelty of the \\\"rule-based concept extractor\\\", which is the key technical innovation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to Question on Negative Pre-Training\", \"comment\": \"Thanks for your comment. Here are the responses to your two questions:\\n\\n1- The ratio of the number of training epochs in the first and second phases of the negative pre-training method is a hyper-parameter of this method. In our simulations, the number of training epochs in the first phase is half of that in the second phase. Note that if the number of training epochs in the first phase goes to zero, negative pre-training becomes pure down sampling. On the other hand, if the number of training epochs in the first phase is much larger than that in the second phase, the neural network cannot learn the structure of the data in the second phase. Hence, we believe the ratio should not be large, but relatively small.\\n\\n2- As you mentioned, it is not feasible to rank methods for classification over unbalanced data sets, but their complexity in memory use and training time can definitely be discussed. Based on our extensive literature review on classification on unbalanced data sets, we found the following methods, which are compared in complexity with negative pre-training below:\\n\\na) Under/Over sampling: under sampling (down sampling) has its own benefits of very low complexity and high speed, but as we see in our paper, the cost is low performance. Over sampling usually suffers from over-fitting, especially when the imbalance in the data set is high. In our case, if we want to use over sampling, we need to replicate each positive data sample at least 50 times, which is extremely prone to over-fitting.
Negative pre-training obviously has more training time than under sampling (but gives better performance), but it needs about half the memory and training time compared to over sampling (in the case that over sampling is done to completely balance the training data set).\\n\\nb) SMOTE: this method generates synthetic data in order to balance the negative and positive data samples. For extreme imbalance in the training data, this method can be prone to over-fitting as well. Regarding memory usage and training time, negative pre-training needs about half of those compared to SMOTE (in the case that SMOTE is used to completely balance the training data set).\\n\\nc) Borderline-SMOTE: this method is similar in nature to SMOTE, but adds synthetic data at the border of the negative and positive samples. The method used to find the data samples on the border has high complexity, since the pairwise distances between the positive samples and all other samples must be measured (which can be hard for high-dimensional data). Hence, although this method outperforms SMOTE, it needs strictly two times more memory and training time compared to negative pre-training, and for high-dimensional data, finding the pairwise distances between all data samples can be much worse or even impossible.\\n\\nd) Hybrid method: in this method, different weak learners are trained over the unbalanced training data set, then the AdaBoost method is used to combine the weak learners into a weighted sum that represents the boosted classifier. The comparison of the hybrid method with negative pre-training in terms of memory usage and training time depends on how many weak learners we want to have and train. For m weak learners, we need to train m distinct neural networks, while we only have a single neural network in negative pre-training. 
Hence, this method is more complex than our proposed method and needs to store the weights of m neural networks, which can be infeasible for deep networks (it is usually the case that m >> 1).\"}",
"{\"comment\": \"I have two questions on your proposed negative pre-training algorithm as follows:\\n\\n1- Do you use the same number of training epochs for the first and second phases of negative pre-training? If yes, why; if no, what's the intuition behind it?\\n\\n2- I know it's not feasible to compare the performance of your negative pre-training method with all other existing methods for classification with imbalanced training data sets, and there is no clear notion of a state-of-the-art algorithm for such methods (probably the most prominent one is down sampling, which avoids training complexity and over-fitting), but do you have any comparison of training complexity in terms of memory use and rough training time between negative pre-training and other algorithms for classification with imbalanced training data sets?\", \"title\": \"Negative Pre-Training Details\"}"
]
} |
|
BJgLg3R9KQ | Learning what and where to attend | [
"Drew Linsley",
"Dan Shiebler",
"Sven Eberhardt",
"Thomas Serre"
] | Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived "top-down" attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than "bottom-up" saliency features for rapid image categorization. As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers. | [
"Attention models",
"human feature importance",
"object recognition",
"cognitive science"
] | https://openreview.net/pdf?id=BJgLg3R9KQ | https://openreview.net/forum?id=BJgLg3R9KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByxJXDflx4",
"H1gDHhddR7",
"Sklu7PO_C7",
"Bke5rI__CX",
"ryg0WL__07",
"HyeS0BduCm",
"HJxxoO1ram",
"S1xOmfJSTQ",
"rylRhZkSam",
"S1gAO-ySTQ",
"S1gU756h3X",
"r1eVYyoh3Q",
"HygHbsciim"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544722199210,
1543175230701,
1543173919547,
1543173698464,
1543173638485,
1543173581392,
1541892247995,
1541890592501,
1541890485610,
1541890422034,
1541360157812,
1541349243874,
1540233980798
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1080/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1080/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1080/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1080/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a large-scale annotation of human-derived attention maps for the ImageNet dataset. This annotation can be used for training more accurate and more interpretable attention models (deep neural networks) for object recognition. All reviewers and AC agree that this work is clearly of interest to ICLR and that extensive empirical evaluations show clear advantages of the proposed approach in terms of improved classification accuracy. In the initial review, R3 put this paper below the acceptance bar, requesting a major revision of the manuscript addressing three important weaknesses: (1) no analysis of interpretability; (2) no details about the statistical analysis; (3) design choices of the experiments are not motivated. Pleased to report that, based on the author response, the reviewer was convinced that the most crucial concerns have been addressed in the revision. R3 subsequently increased the assigned score to 6. As a result, the paper is not in the borderline bucket anymore.\\nThe specific recommendation for the authors is therefore to further revise the paper, taking into account a better split of the material between the main paper and its appendix. The additional experiments conducted during the rebuttal (on interpretability) would be better included in the main text, as would the explanation of the statistical analysis.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review\"}",
"{\"title\": \"Revision plus a diff with the submitted version.\", \"comment\": \"We have uploaded two versions of the revision. (1) The most recent version is the revision. (2) The second-most recent version is a diff between the revision and our original ICLR submission. We hope this will help in evaluating our work.\"}",
"{\"title\": \"Details on the new draft\", \"comment\": \"In this newest draft we have overhauled explanations and readability of the entire manuscript. We have also fixed the notation issues you raised and included a clearer description of the operations of GALA. We performed another analysis of participant learning on the ClickMe game, as suggested, and found no difference in performance on the first ten versus the second set of ten trials (49.30% vs. 52.20%; this result is now included in the Appendix). Finally, we have removed the \\u201chuman-in-the-loop\\u201d description of GALA training with ClickMe maps. We have also changed the title of the manuscript to: \\u201cLearning what and where to attend.\\u201d\"}",
"{\"title\": \"Details on the new draft\", \"comment\": \"In this newest draft we have expanded our explanations for experiments and results, detailed all statistical tests that were used, and incorporated a discussion of the computational neuroscience inspiration for GALA into the main text. We have also included a new analysis in which we quantify attention interpretability on images from Microsoft COCO, and emphasized our quantification of interpretability on ClickMe images.\"}",
"{\"title\": \"Details on the new draft\", \"comment\": \"In this newest draft we have reworked our descriptions of methods, changed our model schematic figure, and detailed all statistical tests. Thank you for these suggestions!\"}",
"{\"title\": \"Revision\", \"comment\": \"We have uploaded a revision of the manuscript that addresses each of the points that we outlined in the meta response below. We would like to draw your attention in particular to a new analysis introduced in this draft, in which we quantified the \\u201czero-shot\\u201d model interpretability of the GALA module trained with ClickMe maps on a large set of images from Microsoft COCO with a method inspired by [1]. As we mention in Section 4.4, GALA trained with ClickMe is significantly more interpretable by this metric than GALA trained without ClickMe (significance testing done with randomization tests, as is now described in the manuscript). We have also included Appendix Figure 8, which shows examples of the visual features favored by each model: the difference between the two models is dramatic. In total, we now have quantitative and qualitative evidence that GALA attention is more interpretable when it is co-trained with ClickMe on the ClickMe dataset (it explains a greater fraction of human ClickMe map variability) and on Microsoft COCO (more interpretable attention according to this new analysis).\\n\\nWe believe this version of the manuscript is greatly improved and we thank you all for your comments. We hope the manuscript now answers any remaining questions or concerns you may have.\\n\\n[1] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network Dissection: Quantifying Interpretability of Deep Visual Representations. Computer Vision and Pattern Recognition (CVPR), 2017.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the detailed comments and the very thorough review! Below are our responses to your suggestions on improving the paper.\\n\\n1. We are overhauling Sections 3 and 4 to fix notation issues, improve readability, and clarify the figure. Along these lines and as you suggested, we will include a brief description of the GALA at the beginning of Section 3. The W_expand and W_shrink operations are borrowed from the manuscript of the original Squeeze-and-Excitation [1] module. We will revamp our description of these, which will also incorporate more of the neuroscience motivation. \\n\\n2. The regularization term forces attention maps in the network to be similar to human feature importance maps. We agree that this is why the maps for different layers in Fig. 4 look similar vs. the attention maps from a GALA trained without such constraints, which are distinct. We felt that the improved interpretability, performance, and similarity to human feature maps that fell out of using this attention supervision justified its use at each layer. We also agree that the right pairing of properly supervised attention with a much shallower network could yield a far more parsimonious architecture for problems like object recognition than the very deep and very powerful ResNets.\\n\\n3. We agree that the image dataset we used to compare ClickMe with Clicktionary maps is far from ideal, and we will note this in the manuscript. However, these were the only images available for such an analysis. Although it is underpowered, this analysis is also consistent with the other results we report about how the feature importance maps derived from these games are highly consistent and stereotyped between participants (section 2).\\n\\nAlso, you raise a good point about the split-half comparison we use to demonstrate that participants do not learn CNN strategies in ClickMe. 
However, such a strategy would amount to a sensitivity analysis of the CNN without knowing how much of the image it was looking at: expanded versions of the bubbles placed by human players were used to unveil those regions to the CNN. The average CNN performance of 53.64% in the first half vs. 53.61% in the second half of participants' trials also does not suggest an effective sensitivity analysis. We will perform another analysis of participant performance to see if learning took place within the first tens of trials, and report this in the manuscript.\\n\\n4. This is a good point. How about: \\u201cLearning what and where to attend with human feedback\\u201d\\n\\n[1] Hu J, Shen L, and Sun G. Squeeze-and-excitation networks. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.\"}",
"{\"title\": \"Response\", \"comment\": \"We really appreciate the comments and we are working to correct the issues you raised.\\n\\nWe have devised an analysis that we hope can address your main critique, which involves measuring the similarity of the attention masks from GALA to object instance annotations using intersection-over-union (IOU), similar to [1]. We would like to note, however, that this is another flavor of an analysis that we present in the paper that we believe is an even more direct way of measuring interpretability: the similarity between attention masks and ClickMe maps, which describe visual features important to human observers. Please let us know if you have anything else in mind that would improve our argument of the interpretability of the attention maps from the GALA-ResNet-50 trained with ClickMe.\\n\\nTo address your other comments, as we detailed to Reviewer 2, we will expand our description of the statistical tests used in the manuscript. We will also improve our justification for the experimental design, including a definition and more context for rapid visual recognition experiments. This experimental design has been used extensively in visual neuroscience (e.g., [2-3]), and we apologize for presenting it without appropriate context and motivation for why we chose it and the kinds of constraints that it places on participants to make visual decisions. Along these lines, we will add a discussion of the neuroscience inspiration of the GALA module to the main text. Finally, we chose the verb \\u201cattend\\u201d over one like \\u201cfocus\\u201d because of its meaning in neuroscience and how the GALA module works, but will gladly re-evaluate the usage if you can point to where in the manuscript it does not make sense to you.\\n\\n[1] Bau D, Zhou B, Khosla A, Oliva A, and Torralba A. Network dissection: Quantifying interpretability of deep visual representations. 
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.\\n[2] Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature, 1996.\\n[3] Serre T, Oliva A, Poggio T. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 2006.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the review and comments. We are working on fixing the issues that you raised, and believe that correcting them will greatly improve the quality of the manuscript.\\n\\nWe are fixing the issues with notation, defining the variables that we neglected to in the original draft, overhauling our model figure, and improving the transitions between sections of the manuscripts. We thank you for pointing out that the statistical tests were unclear. We will incorporate the following test descriptions into the manuscript.\\n\\nFor the behavioral experiment, this involved randomization tests, which compared the performance between ClickMe vs. Salicon groups at every \\u201cpercentage of image revealed by feature source\\u201d bin. A null distribution of \\u201cno difference between groups\\u201d was constructed by randomly switching participants\\u2019 group memberships (e.g., a participant who viewed ClickMe mapped images was called a Salicon viewer instead), and calculating a new difference in accuracies between the two groups. This procedure was repeated 10,000 times, and the proportion of these randomized scores that exceeded the actual observed difference was taken as the p-value. This randomization procedure is a common tool in biological sciences [1].\\n\\nA similar procedure was used to derive p-values for the correlations between model features and ClickMe maps. As we mention in the manuscript in our description of calculating the null inter-participant reliability of ClickMe maps: \\u201cWe also derived a null inter-participant reliability by calculating the correlation of ClickMe maps between two randomly selected players on two randomly selected images. 
Across 10,000 randomly paired images, the average null correlation was $\\\\rho_r=0.18$, reinforcing the strength of the observed reliability.\\u201d The p-values of correlations between model features and ClickMe maps are the proportion of per-image correlation coefficients that are less than this value.\\n\\n[1] Edgington, E. Randomization tests. The Journal of Psychology: Interdisciplinary and Applied,1964.\"}",
"{\"title\": \"General response to reviewers\", \"comment\": \"We thank the reviewers for their detailed and constructive comments. In this initial response, we want to acknowledge the raised critiques and present our plan for addressing them. Please let us know if you feel we have omitted anything. We believe that these revisions will greatly improve the manuscript.\\n\\nTo summarize, the revisions will address the following points:\\n\\n1. We will clarify and improve the methods section by replacing our model figure, fixing notational issues, explaining our statistical testing procedures, and defining terms noted by the reviewers.\\n2. We will improve the flow and organization of the manuscript. This includes moving the computational neuroscience background to the related work, and expanding it.\\n3. We will improve our motivation for the experimental design, and take more care to walk the reader through the results as well as the effect of ClickMe-map supervision on attention.\\n4. We will include a link to a GitHub repository with Tensorflow code for the model.\\n5. We will add a new analysis to quantify how co-training a GALA-ResNet with ClickMe maps increases the interpretability of its attention maps.\"}",
"{\"title\": \"Review\", \"review\": [\"The paper presents a new take on attention in which a large attention dataset is collected (crowdsourced) and used to train a NN (with a new module) in a supervised manner to exploit self-reported human attention. The empirical results demonstrate the advantages of this approach.\", \"*Pro*:\", \"Well-written and relatively easily accessible paper (even for a non-expert in attention like myself)\", \"Well-designed crowdsourcing experiment leading to a novel dataset (which is linked to a state-of-the-art benchmark)\", \"An empirical study demonstrates a clear advantage of using human (attention) supervision in a relevant comparison\", \"*Cons*\", \"Some notational confusion/uncertainty in sec 3.1 and Fig 3 (perhaps also Sec 4.1): E.g. $\\\\mathbf{M}$ and $L_{clickmaps}$ are undefined in Sec 3.1.\", \"*Significance:* I believe this work would be of general interest to the image community at ICLR as it provides a new high-quality dataset and an attention module for grounding investigations into attention mechanisms for DNNs (and beyond).\", \"*Further comments/questions:*\", \"The transition between sec 2 and sec 3 seems abrupt; consider providing a smoother transition.\", \"Figure 3: reconsider the logical flow in the figure; it took me a while to figure out what is going on (especially the feedback path to U\\u2019).\", \"It would be beneficial to provide some more insight into the statistical tests casually reported (i.e., where did the p values come from)\", \"The dataset appears to be available online but will the code for the GALA module also be published?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting and relevant paper with poor justification for study design and analysis\", \"review\": \"SUMMARY\\n\\nThis paper argues that most recent gains in visual recognition are due to the use of visual attention mechanisms in deep convolutional networks (DCNs). According to the authors; the networks learn where to focus through a weak form of supervision based on image class labels. This paper introduces a data set that complements ImageNet with circa 500,000 human-derived attention maps, obtained through a large-scale online experiment called ClickMe. These attention maps can be used in conjunction with DCNs to add a human-in-the-loop feature that significantly improves accuracy.\\n\\nREVIEW\", \"this_paper_is_clearly_within_scope_of_the_iclr_conference_and_addresses_a_relevant_and_challenging_problem\": \"that of directing the learning process in visual recognition tasks to focus on interesting or useful regions. This is achieved by leveraging a human-in-the-loop approach.\\n\\nThe paper does a fair job in motivating the research problem and describing what has been done so far in the literature to address the problem. The proposed architecture and the data collection online experiment are also described to a sufficient extent.\\n\\nIn my view, the main issue with this paper is the reporting of the experiment design and the analysis of the results. Many of the design choices of the experiments are simply listed and not motivated at all. The reader has to accept the design choices without any justification. The results for accuracy are simply listed in a table and some results are indicated as \\u201cp<0.01\\u201d but the statistical analysis is never described. Interpretability is highlighted in the abstract and introduction as an important feature of the proposed approach but the evaluation of interpretability is limited to a few anecdotes from the authors\\u2019 review of the results. 
The paper does not present a procedure or measure for evaluating interpretability.\\n\\nOTHER SUGGESTIONS FOR IMPROVEMENT\\n\\n- The verb \\u201cattend\\u201d is used in many places where \\u201cfocus\\u201d seems to be more appropriate.\\n\\n- \\u201cwe ran a rapid experiment\\u201d: what does rapid mean in this context?\\n\\n- \\u201cthe proposed GALA architecture is grounded in visual neuroscience\\u201d : this and many other statements are only elaborated upon in the appendix. I understand that page limit is always an issue but I think it is important to prioritise this and similar motivations and put at least a basic description in the main body\\n\\nUPDATE\\n\\nMy most serious concerns have been addressed in the revised version.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of \\\"Learning what and where to attend with humans in the loop\\\"\", \"review\": \"This paper proposes a new approach to use more informative signals (than only class labels), specifically, regions humans deem important on images, to improve deep convolutional neural networks. They collected a large dataset by implementing a game on clickme.ai and showed that using this information results in both i) improved classification accuracy and ii) more interpretable features.\\n\\nI think this is good work and should be accepted. The main contribution is threefold: i) a publicly available dataset that many researchers can use, ii) a network module to incorporate this human information that might be inserted into many networks to improve performance, and iii) some insights on the effect of such human supervision and the relation between features that humans deem important and those that neural nets deem important.\", \"some_suggestions_on_how_to_improve_the_paper\": \"1. I find Sections 3 & 4 hard to track - some missing details and notation issues. Several variables are introduced without detailing the proper dimensions, e.g., the global feature attention vector g (which is shown in the figure actually). The relation between U and u_k isn't clear. Also, it will help to put a one-sentence summary of what this module does at the beginning of Section 3, like the last half-sentence in the caption of Figure 3. I was quite lost until I saw that. Some more intuition is needed on W_expand and W_shrink; maybe moving some of the \\\"neuroscience motivation\\\" paragraph up into the main text will help. Bold letters are used to denote many different things - in Section 4 as a set of layers, in other places a matrix/tensor, and even an operation (F). \\n\\n2. Is there any explanation on why you add the regularization term to every layer in a network? This setup seems to make it easy to explain what happens in Figure 4. 
One interesting observation is that after your regularization, the GALA features with ClickMe maps exhibit minimal variation across layers (those shown). But without this supervision the features are highly different. What does this mean? Is this caused entirely by the regularization? Or is there something else going on, e.g., is this evidence suggesting that with proper supervision like human attention regions, one might be able to use a much shallower network to achieve the same performance as a very deep one?\\n\\n3. Using a set of 10 images to compute the correlation between ClickMe and Clicktionary maps isn't ideal - this is even fewer than the number of categories among the images. I'm also not entirely convinced that \\\"game outcomes from the first and second half are roughly equal\\\" says much about humans not using a neural net-specific strategy, since you can't rule out the case that they learned to play the game very quickly (in the first 10 of the total 380 rounds). \\n\\n4. Title - this paper sounds more like \\\"human feedback\\\" to me than \\\"humans-in-the-loop\\\", because the loop has only 1 iteration: you are collecting feedback from humans but not yet giving anything back to them. Maybe change the title?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SygLehCqtm | Learning protein sequence embeddings using information from structure | [
"Tristan Bepler",
"Bonnie Berger"
] | Inferring the structural properties of a protein from its amino acid sequence is a challenging yet important problem in biology. Structures are not known for the vast majority of protein sequences, but structure is critical for understanding function. Existing approaches for detecting structural similarity between proteins from sequence are unable to recognize and exploit structural patterns when sequences have diverged too far, limiting our ability to transfer knowledge between structurally related proteins. We newly approach this problem through the lens of representation learning. We introduce a framework that maps any protein sequence to a sequence of vector embeddings --- one per amino acid position --- that encode structural information. We train bidirectional long short-term memory (LSTM) models on protein sequences with a two-part feedback mechanism that incorporates information from (i) global structural similarity between proteins and (ii) pairwise residue contact maps for individual proteins. To enable learning from structural similarity information, we define a novel similarity measure between arbitrary-length sequences of vector embeddings based on a soft symmetric alignment (SSA) between them. Our method is able to learn useful position-specific embeddings despite lacking direct observations of position-level correspondence between sequences. We show empirically that our multi-task framework outperforms other sequence-based methods and even a top-performing structure-based alignment method when predicting structural similarity, our goal. Finally, we demonstrate that our learned embeddings can be transferred to other protein sequence problems, improving the state-of-the-art in transmembrane domain prediction. | [
"sequence embedding",
"sequence alignment",
"RNN",
"LSTM",
"protein structure",
"amino acid sequence",
"contextual embeddings",
"transmembrane prediction"
] | https://openreview.net/pdf?id=SygLehCqtm | https://openreview.net/forum?id=SygLehCqtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skg2zkObxE",
"rJxHUVu7JE",
"B1gMEDYZ1N",
"Byxe0vrhA7",
"ryl0Ewj_C7",
"HygYQvsOR7",
"BJeiRUs_0m",
"SylE1Li_0m",
"BkgcoSjdAQ",
"HkeFnFy3nX",
"rygB3rus3Q",
"BkeQG-Te3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544810260057,
1543894093440,
1543767849928,
1543423944068,
1543186230095,
1543186208693,
1543186131485,
1543185884395,
1543185826060,
1541302704844,
1541273005271,
1540571403318
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1078/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1078/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1078/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1078/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1078/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1078/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1078/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1078/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1078/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1078/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1078/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1078/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers and authors had a productive conversation, leading to an improvement in the paper quality. The strengths of the paper highlighted by reviewers are a novel learning set-up and new loss functions that seem to help in the task of protein contact prediction and protein structural similarity prediction. The reviewers characterize the work as constituting an advance in an exciting application space, as well as containing a new configuration of methods to address the problem.\\n\\nOverall, it is clear the paper should be accepted, based on reviewer comments, which unanimously agreed on the quality of the work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Clear accept\"}",
"{\"title\": \"Response to follow up\", \"comment\": \"Unfortunately, we can no longer submit additional revisions to OpenReview. However, we have made the requested updates to the manuscript for the camera-ready version (assuming it is accepted!). We detail those updates and respond to the reviewer\\u2019s additional comments below.\\n\\n4. Contact map prediction performance\\nWe have included a reference to Appendix A.5 in the manuscript and have updated those results to a table for easier examination. While we appreciate the reviewer\\u2019s interest in seeing state-of-the-art contact prediction performance and we too are interested to see if our embeddings could be used to improve contact prediction models, we want to emphasize that the primary goal of our model is structural comparison, and contact prediction is merely an auxiliary task. Furthermore, due to the nature of our train/test splits, it would be extremely difficult to correctly compare our contact prediction performance with the state-of-the-art methods on those datasets.\\n\\nThat said, to give some idea of the differences in performance, we now include results for contact prediction using our full SSA model on the publicly released protein structures from CASP12 that were used for contact prediction. This is, unfortunately, a very small dataset, but it gives some idea of how our model compares with the best co-evolution based deep convolutional neural network models. We find that our model performs much better than these methods when predicting all contacts but performs worse when predicting only distant contacts (|i-j| > 11). This is, perhaps, expected, because the co-evolution methods focus on distant contact prediction, whereas our model is trained to predict all contacts, of which the vast majority are local. Interestingly, we do achieve similar distant contact prediction performance to GREMLIN, a purely co-evolution based model (no neural network). 
To us, this suggests that our embeddings are likely to have utility as additional inputs to the state-of-the-art models. We have included these results and discussion in Appendix A.5.\\n\\n7. Discussion of limitations:\\nWe have added some discussion of single- vs. multi-domain proteins in the conclusion.\\n\\nThe broken table reference has also been fixed. Thank you for pointing this out.\\n\\nWe plan to release source code with the camera-ready version of the manuscript!\"}",
"{\"title\": \"In favor of accepting\", \"comment\": [\"I suggest accepting the paper as an application paper since\", \"it has a clear methodological component\", \"the evaluation is solid and results promising\", \"the reviewers addressed my comments and revised their manuscript\"]}",
"{\"title\": \"Manuscript clearly improved; only a few minor follow-up comments.\", \"comment\": \"I appreciate that you rigorously addressed all my comments. I have only a few minor follow-up comments. I will increase my rating once you have addressed these outstanding comments.\\n\\n4. Contact map prediction performance\\nThanks for including contact map prediction results in Appendix A5. Please reference Appendix A5 (and all other sections) in the main text. Can you also include state-of-the-art results? You can consider showing performance numbers in a table for clarity. I agree that your model is not tweaked for contact map prediction, but it is informative to know its performance relative to the state of the art for assessing if embeddings can be used as (additional) input to a contact map predictor.\\n\\n7. Discussion of limitations\\nThanks for clarifying that your model can handle variable length sequences, which is certainly an advantage. Can you briefly discuss the problem with multi-domain proteins in the conclusions section?\", \"page_8\": \"The reference \\u2018Appendix Table ??\\u2019 is broken. Please fix it.\\n\\nI encourage you to publish your source code at the end of the review period.\"}",
"{\"title\": \"Response (final part)\", \"comment\": \"11. Distance thresholds anywhere in the range of 6-12A are common. We chose 8A somewhat arbitrarily from this range based on the average size of an amino acid.\\n\\n12. In the contact prediction component, we use the elementwise products and absolute differences between the embeddings as features rather than the concatenated features primarily because we want the feature vectors to be symmetric (h_ij = h_ji). This featurization has shown success for semantic relatedness in NLP in papers like \\u201cImproved Semantic Representations From Tree-Structured Long Short-Term Memory Networks\\u201d Tai et al. 2015. We use a sigmoid activation to give the predicted contact probabilities.\\n\\n13. We have revised this section in order to improve the clarity of table 1.\\n\\n14. This would be an interesting analysis to complement secondary structure prediction for examining whether the embeddings are capturing local structural properties of the sequence. However, space and time constraints make this difficult to include. We are definitely interested in following up on this direction in future work.\\n\\n15. TMalign actually performs slightly worse when using the maximum or geometric mean instead of the arithmetic mean. With the geometric mean we get 0.80729 accuracy, 0.60197 Pearson\\u2019s correlation, and 0.37059 Spearman\\u2019s correlation on the 2.06 test set and 0.8126 accuracy, 0.80102 Pearson\\u2019s correlation, and 0.38548 Spearman\\u2019s correlation on the 2.07 test set. Using the maximum of the two scores performs even worse, getting 0.79255 accuracy, 0.51852 Pearson\\u2019s correlation, and 0.28407 Spearman\\u2019s correlation on the 2.06 test set and 0.80742 accuracy, 0.75921 Pearson\\u2019s correlation, and 0.34023 Spearman\\u2019s correlation on the 2.07 test set.\"}",
"{\"title\": \"Response continued\", \"comment\": \"6. Embedding time is very fast (~0.03 ms per amino acid on an NVIDIA V100) and easily fits on a single GPU (requiring <9GB of RAM to embed 130,000 amino acids) with both time and memory scaling linearly with sequence length. Computing the SSA for sequence comparison scales as the product of the sequence length, O(nm), but is easily parallelized on a GPU. Computing the SSA for all 237,016 pairs of sequences in the 2.07 new test set required on average 0.43 ms per pair (101 seconds total) when performed serially on a single NVIDIA V100 GPU. Training was performed on a single NVIDIA V100 and required ~3 days to train. We now include this description in the manuscript.\\n\\n7. We are afraid there has been a misunderstanding which we hope to have now resolved in the manuscript. The model easily handles variable length sequences. This is not a limitation of our model. During training and prediction, the memory required to embed an amino acid sequence into a sequence of vectors scales linearly with the sequence length. The memory required to form the pairwise similarity matrix between two sequences scales as the product of their lengths (one entry is required for each pair of positions). Contact prediction does require memory scaling quadratically with sequence length (one entry for each pair of positions within the sequence). One real limitation is that the model is trained on protein domains and the encoder gives embeddings that are highly contextualized. This means that if the model is run on multidomain protein sequences it isn\\u2019t clear that the domains will map to the same embedding vectors as the domains in isolation. On the one hand this makes sense and is desirable, because we know structure is highly context dependent. On the other hand, we might think that the structure of domains should be the same regardless of surrounding context and that these sequences should receive the same embeddings as a result. 
There is definitely room for further work in this area.\\t\\n\\n8. We have updated the manuscript to include the requested experiment in table 3. To address the reviewer\\u2019s concern, the biLSTM+CRF with 1-hot encoded amino acids as features performs poorly with an overall score of 0.52, barely matching the performance of MEMSAT-SVM overall. This result indicates that the biLSTM+CRF architecture is not enough to achieve good performance.\", \"minor_comments\": \"9. s\\u2019 is defined by the learned embeddings and in this sense is learned. The reason we do not pass the embeddings of each sequence into a separate model to predict the similarity is two-fold. (1) we want the learned embeddings to have an interpretable correspondence \\u2013 i.e. the distance between two embeddings encodes their semantic similarity. Feeding the embeddings directly into some unconstrained model would not impose this condition. Furthermore, we want all of the \\u201ccomparative structural information\\u201d to be encoded directly into the embeddings rather than potentially emerging from the downstream model. (2) the sequences are of variable length so it isn\\u2019t clear how such a model should be structured, but, in section 4.2, we compare against two alternative methods for comparing between sequences of vector embeddings.\\n\\n10. The motivation behind using ordinal regression is that it explicitly captures ordering information about the similarity labels (similarity 0 is less than similarity 1 is less than similarity 2 etc.). The ordinal regression framework imposes this property directly in the structure of the model and ensures that predicted similarity increases monotonically with the alignment score.\"}",
"{\"title\": \"Response to the reviewer\", \"comment\": \"We thank the reviewer for their detailed critique and helpful suggestions. We apologize for the lack of clarity in various aspects of the manuscript and have made revisions to address these concerns. We have also now included results for the requested additional experiments. Please see below for our responses to the reviewer\\u2019s specific comments.\", \"major_comments\": \"1. We have updated the manuscript to include an explanation (Section 3.4) of how hyperparameters were chosen. We note that performance on a validation set held out from the training set and separate from the test set was used to select the amino acid resampling probability and the loss interpolation parameter, lambda. The amino acid resampling probability was chosen to be 0.05 as we had observed a small improvement in performance on the validation set for models trained with resampling over models trained without resampling. Lambda was chosen from {0.5, 0.33, 0.1} to give the best structural similarity prediction accuracy on the validation set. We observe that decreasing lambda corresponded to increasing accuracy over this range. (See Appendix A.2 for more details)\\n\\nWe wish to emphasize that the rest of the hyperparameters were not tuned based on performance on any data (held-out or otherwise), and we would expect that the results should be insensitive to reasonable settings. In particular, the learning rate was set to 0.001 following common practice with the ADAM optimizer and we observed good convergence on the training set with this setting. The smoothing factor was set to 0.5 in order to slightly upweight pairs of sequences with high similarity that would otherwise be rare during training. In particular, we chose 0.5 in order to have roughly two examples of family level similarity per minibatch. \\n\\n2. Table 2 is an ablation study of the model components. Table 1 is a comparison with other protein similarity scoring methods. 
We now make these points clearer in their corresponding sections and table captions.\\n\\n3. In order to use structure prediction for predicting SCOP co-membership given a pair of protein sequences, we would have to first predict structure for each sequence and then compare the predicted structures to predict co-membership using TMalign. This seems redundant, because we have already compared with TMalign using the actual structures for proteins in our datasets and found that our full SSA model is significantly better at predicting SCOP co-membership. That said, it would be interesting to investigate how predicted structures may perform. However, these methods require significant time per protein (I-TASSER reports 1-2 days per structure), so generating predictions for our ASTRAL 2.07 new test set of 688 sequences is not possible before the end of the review period, let alone predicting structures for all 5,602 sequences in our ASTRAL 2.06 test set.\\n\\n4. Our focus was not to train the best contact map prediction method, but rather to use the contact prediction task to better embed local structure at each position and to improve the structure similarity prediction task. Because the model is predicting contacts directly from raw sequence, we think it is unlikely to perform as well as the best co-evolution based methods. Nonetheless, we have now included the contact prediction performance results in the Appendix.\\n\\n5. Variable length sequences require no special consideration with our model. The biLSTM naturally handles sequences of arbitrary length. During training, no truncation of any kind is performed. Our model is not limited to fixed length sequences. We now make this clearer in the manuscript. In terms of sequences lengths, the mean length in the training set is 175.9 amino acids with a standard deviation of 110.1 amino acids. The minimum and maximum lengths are 20 and 1,449 respectively. 
The mean length in the 2.06 test set is 179.7 with a standard deviation of 114.3 and minimum and maximum sequence lengths of 21 and 1,500. In the 2.07 new test set, the mean length is 190.3 with a standard deviation of 148.7 and minimum and maximum lengths of 25 and 1,664.\"}",
"{\"title\": \"Response to the reviewer\", \"comment\": \"We apologize to the reviewer for the lack of clarity in the manuscript particularly regarding the formulation of the problem. We have made significant revisions to the manuscript to improve clarity and better justify the specific modeling decisions. Other than presentation, we feel that many of the reviewer\\u2019s concerns are already addressed in the manuscript but perhaps were not made clear in our writing. We hope that our revised manuscript improves on this. We address the individual concerns of the reviewer below.\", \"clarity\": \"1. The goal is to learn sequence embeddings using structural information. A secondary goal is to have a model that is good at predicting structural similarity from sequence. Structure prediction is not a goal \\u2013 in fact, we are specifically trying to avoid doing structure prediction. We have updated our manuscript in an effort to improve our description of the problem.\\n\\n2. The alignment part is described in section 3.2.1 of the manuscript and also appears in Figure 1. It is used to define a scalar similarity between two amino acid sequences using their sequences of vector embeddings.\\n\\n3. The motivation for learning a vector representation of each amino acid position is to try to capture local structure information (based on sequence). Primarily, this allows the embeddings to be used as features in sequence to sequence prediction problems (like transmembrane prediction) that would not be possible with single vector representations of the sequence. It also gives interpretability in the sense that we can ask how sections of sequence or individual positions differ between protein sequences.\\n\\n4. The particular architecture of a 3-layer biLSTM allows embeddings to potentially be functions of distant sequence elements. Appendix Table 4 shows a comparison of a few embedding architectures. Specifically, we compare linear, fully connected, and 1-layer biLSTM models. 
We have clarified this detail in section 4.2. Single direction LSTMs would not be able to capture whole sequence information when forming the embedding at each position but only information on one side of that position (i.e. z_i would be a function of amino acids up to position i but not after position i if a single direction LSTM encoder was used). Clearly, local structure depends on the sequence to either side.\", \"quality\": \"1. We have updated the manuscript to better justify the specific modeling components.\\n\\n2. It\\u2019s unclear what exactly the reviewer is suggesting here. The goal is to learn embeddings that capture structural information as a function of sequence. At \\u201cprediction time\\u201d only sequences are observed. If the question is: why not try to predict specific structural properties, again, it\\u2019s not clear what properties the reviewer has in mind. We are trying to predict the properties that give rise to the SCOP classification. The SCOP category membership represents expert knowledge in protein structure and already groups proteins by structural properties.\\n\\n3. Although we don\\u2019t understand exactly why the pretrained language model helps, the intuition is that the language model hidden states capture information about the space of natural protein sequences that is useful for learning about structural similarity. It is transferring information about what natural proteins \\u201clook like\\u201d from a large set of proteins to this problem where we have a relatively smaller number of sequences. We show empirically in table 2 that the language model improves structure similarity prediction results.\", \"originality\": \"The language model is novel in application to protein sequences. The SSA component is novel for defining similarity between sequences and learning embedding models from observed global similarity. Using contact prediction for learning sequence embeddings is also novel.\\n\\nResults\\n1. 
The pairs of sequences considered have no more than 95% sequence identity. In fact, the vast majority of pairs of sequences have much less than this. The average percent identity between sequence pairs is 13% in both the ASTRAL 2.06 test set and ASTRAL 2.07 new test set. Furthermore, we compare with all baselines on exactly the same sequence pairs, so the comparison is valid. We now include this information in the manuscript.\\n\\n2. We did not consider several train/test splits to be necessary, because the test split is already large (much larger than commonly used in biology applications) with 20% of the sequences (5,602) being held out. The test set composed of proteins added in the 2.07 release of ASTRAL is admittedly smaller (688 sequences), but still comprises many more sequences than commonly used in the field.\\n\\n3. We have made an effort to better describe the baselines in the revised manuscript.\"}",
"{\"title\": \"The embedding model is learned from structure\", \"comment\": \"We thank the reviewer for their comments and apologize for the confusing nature of some parts of the manuscript. We have revised the manuscript with the goal of improving clarity and hope that the new version will give a better understanding of our work. We address the reviewer\\u2019s specific comments below.\\n\\n1. We are afraid there has been a misunderstanding about backprop through the LSTM embedding model. The model is trained end-to-end: losses from the two tasks (structural similarity prediction and contact map prediction) are both back-propagated through their respective modules as well as through the LSTM embedding model. To emphasize, the LSTM embedding model is updated using structural information, and the embeddings are in fact learned from structure. We note that in section 4.2, we show that a baseline model using only a linear transformation of the LM hidden states has far worse performance. \\n\\n2. We have updated the manuscript to include missing related work, specifically \\u201cDetecting Remote Evolutionary Relationships among Proteins by Large-Scale Semantic Embedding\\u201d Melvin 2011. However, as pointed out by the reviewer, this work does fall generally into the category of methods using direct sequence alignment tools. The authors learn a low dimensional projection of a feature vector defined by alignment of the query sequence against a database of protein domains using either psiblast or HHalign. Thus, their method gives single vector embeddings of protein domains based on existing sequence alignment tools, in contrast to our work in which we learn a model that gives a vector representation for each position of the protein sequence using raw sequence as features.\"}",
"{\"title\": \"Are protein sequence embeddings learned from structure?\", \"review\": \"This work learns embeddings for proteins. They use techniques from deep learning on natural language that are typically applied to sentences and words, and apply them correspondingly to proteins and amino acids. Thus they learn a vector representation using a bi-directional LSTM for amino acids by training the amino acid equivalent of a language model.\\n\\nThe authors then multitask 2 models using the embeddings that perform contact prediction (using an MLP and CNN) and structural class similarity prediction, which appear to perform very well.\\n\\nTheir SSA - soft symmetric alignment mechanism is neat and gives a single scalar value for a pair of proteins (by comparing their strings of embedded amino acids by L1 distance), and it is a descriptive enough feature for a simple ordinal regression to output a set of structural similarity scores via a linear classifier (one for each strength of similarity re. the SCOP classification hierarchy). It seems to work well, but I am unable to judge how good this is with respect to more recent work in this field. I would suspect being able to backprop to the embedding LSTMs through the SSA at this point would give much better results. \\n\\nAuthors only give 2 recent references for protein embedding work [12,13] but should also take a look at this work: Melvin et al., PLOS Computational Biology, which uses structural class labels from SCOP to supervise the embedding. Although they do mention profile HMM in 'related work' which was used to create features in that work.\\n\\nThese authors, as far as I can tell, do not \\\"backprop\\\" to the amino acid embeddings (and the LSTMs) from the contact or similarity loss. 
So the bi-LSTM-produced feature vectors, although trained unsupervised from many proteins, are not trained with structural supervision as claimed in the title (they state this in the last paragraph of 3.1) and so the embeddings are not related to structural similarity directly. They do, however, seem to produce good features for the tasks they then tackle.\\nThey say in the conclusion that the SSA model is fully differentiable, but I don't see where they \\\"backprop\\\" through it.\\n\\nI would say (if this assessment is correct) that the title is very misleading, although the work and final results look good.\", \"update\": \"The authors have assured me in comments that the model is trained end to end - changing rating to good.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Promising idea but falls short in write-up / evaluation\", \"review\": \"Thanks for the detailed responses. After reading the author response and the updated paper, I am satisfied on several of my concerns, many of which were due to the writing in the earlier submission. The updated results on various comparisons are also good. I have updated my score accordingly. Some qualitative analysis of the results would have been nice -- examples of protein pairs where they do well and other methods have difficulty as they don't use the structural similarity info / global sequence info used by this paper. But maybe those can be in a journal submission.\\nMy only remaining concern is on the lack of reporting average performance on the test data (which used to be the norm until recently for papers submitted to ML conferences).\", \"summary\": \"This paper proposes an approach for embedding proteins using their amino-acid sequences, with the goal that embeddings of proteins with similar structure are encouraged to be close in the embedded space. A stacked 3-layer Bi-directional LSTM is used for embedding the proteins. The structure information is obtained from the SCOP database which is used in an ordinal regression framework, where the output is the structural similarity and inputs are the embeddings. Along with the ordinal regression, another loss term to incorporate contacts of amino-acid residues is used. Results are shown on structural similarity prediction and secondary structure prediction.\", \"clarity\": \"1. The introduction of the paper is not very well written and it takes some time to figure out the exact problem being addressed. Is it learning sequence embeddings, or predicting structure from sequence or searching for similar structures in a database. Defining a clear goal -- input/output of their pipeline is important before describing the applications of the method, such as predicting structural similarity. \\n2. 
Due to the write-up, the method comes across as having too many modeling components without a very clear motivation for why these help the problem at hand. Where is the alignment part?\\n3. Why is each sequence embedded as a matrix? What is the motivation for a vector representation at each amino-acid position?\\n4. The authors need to explain the particular choice of 3 layers of bi-directional LSTMs. Why three? And why Bi-LSTM and not LSTMs?\", \"quality\": \"1. While the problem being addressed is interesting, the work lacks a clear reasoning behind the choice of modeling components which makes it seem ad-hoc.\\n2. Structural similarity is defined using the hierarchy of protein structure classes and the numbers seemed a bit arbitrary to me. Why not have a vector to encode the different aspects of structure? Have they looked at prior work?\\n3. How does the pre-trained language model on Pfam sequences help? Why is the output from it concatenated; have other composition functions been considered?\", \"originality\": \"The various components of the model are not novel, but the particular framework of putting them together is novel.\", \"results\": \"1. While the authors claim that some prior methods only work with high sequence similarity, their own evaluation only considers pairs of sequences with 95% identity. HHalign for instance, considers sequences with ~20% identity.\\n2. Why weren't several train/test splits of the data tried, so that performance can be reported with std. error bars?\\n3. Methods against which they compare have not been described properly.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good application paper but evaluation must be strengthened\", \"review\": \"General comment\\n==============\\nThe authors describe two loss functions for learning embeddings of protein amino acids based on i) predicting the global structural similarity of two proteins, and ii) predicting amino acid contacts within proteins. As far as I know, these loss functions are novel and the authors show clear improvements when using the learned embeddings in downstream tasks. The paper is well motivated and mostly clearly written. However, the evaluation must be strengthened and some aspects of it clarified. Provided that the authors address my comments below, I think it is a good ICLR application paper.\\n\\nMajor comments\\n=============\\n1. The authors should describe how they optimized hyperparameters such as the learning rate, lambda (loss section 3.3), or the smoothing factor (section 3.4). These should be optimized on an evaluation set, but the authors only mentioned that they split the dataset into training and holdout (test) sets (section 4.1).\\n\\n2. The way the authors present results in table 1 and table 2 is unclear. Both table 1 and table 2 contain results of the structural similarity task but with different baselines. \\u2018SSA w/ contact predictions\\u2019 is also undefined and can be interpreted as \\u2018with\\u2019 or \\u2018without\\u2019 contact predictions. I therefore strongly recommend showing structural similarity results in table 1 and secondary structure results in table 2 and including in both tables i) \\u2018SSA full\\u2019, \\u2018SSA without contact predictions\\u2019, and \\u2018SSA without language model\\u2019.\\n\\n3. The authors should compare SSA to the current state-of-the-art in structure prediction in addition to baseline models.\\n\\n4. The authors should evaluate how well their method predicts amino acid contact maps.\\n\\n5. The authors should describe how they were dealing with variable-length protein sequences. 
Are sequences truncated and embedded to a fixed length? What is the mean and variance in protein sequence lengths in the considered datasets? The authors should point out that their method is limited to fixed length sequences.\\n\\n6. The authors should briefly describe the training and inference time on a single GPU and CPU. How much memory is required for training with a certain sequence length, e.g. 400 amino acids per sequence? Does the model fit on a single GPU?\\n\\n7. The authors should discuss limitations of their method, e.g. that it cannot handle variable length sequences and that the memory scales quadratically with the sequence length.\\n\\n8. CRF (SSA) (table 3) includes a biLSTM layer between SSA and the CRF. However, the biLSTM can learn a non-linear projection of embeddings learned by SSA such that it is unclear if improvements are due to the embeddings learned by SSA or the biLSTM+CRF architecture. The authors should therefore train a biLSTM+CRF model on one-hot encoded amino-acids and include it as a baseline in table 3.\\n\\n\\nMinor comments\\n=============\\n9. The way the similarity score s\\u2019 is computed (section 3.2.1) should be motivated more clearly. Why do the authors compute the score s\\u2019 manually instead of predicting it, e.g. using a model that takes the embeddings z of both proteins as input and predicts a single scalar s\\u2019? \\n\\n10. How does ordinal regression (section 3.2.2) perform compared with a softmax layer? Why do the authors compute s\\u2019 and then train logistic regression classifiers on s\\u2019 to predict the similarity level, instead of predicting the similarity level directly based on the embeddings z?\\n\\n11. Why do the authors use a distance threshold of 8A (section 3.3)? Is this common practice in the field?\\n\\n12. Why do the authors use the dot product and the absolute difference as features instead of the embeddings z directly? 
Which activation function is used to predict contact probabilities (sigmoid, softmax, \\u2026)?\\n\\n13. The authors should reference and describe the results presented in table 1 more clearly.\\n\\n14. Optional: the authors should analyze if learned embeddings are correlated with amino acid and structural properties such as their size, charge, or solvent accessibility. Do embeddings clusters by certain properties? This can be analyzed, e.g., using a tSNE plot. \\n\\n15. How does TMalign perform when using the maximum or geometric average instead of the arithmetic average of the two scores (section 4.1)\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByxLl309Ym | Conditional Inference in Pre-trained Variational Autoencoders via Cross-coding | [
"Ga Wu",
"Justin Domke",
"Scott Sanner"
] | Variational Autoencoders (VAEs) are a popular generative model, but one in which conditional inference can be challenging. If the decomposition into query and evidence variables is fixed, conditional VAEs provide an attractive solution. To support arbitrary queries, one is generally reduced to Markov Chain Monte Carlo sampling methods that can suffer from long mixing times. In this paper, we propose an idea we term cross-coding to approximate the distribution over the latent variables after conditioning on an evidence assignment to some subset of the variables. This allows generating query samples without retraining the full VAE. We experimentally evaluate three variations of cross-coding showing that (i) can be quickly optimized for different decompositions of evidence and query and (ii) they quantitatively and qualitatively outperform Hamiltonian Monte Carlo. | [
"conditional inference",
"variational autoencoders",
"query",
"vaes",
"popular generative model",
"decomposition",
"evidence variables",
"conditional vaes",
"attractive solution",
"arbitrary queries"
] | https://openreview.net/pdf?id=ByxLl309Ym | https://openreview.net/forum?id=ByxLl309Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJgxwub2yV",
"SklMscm3CX",
"SkxXa4gKRX",
"S1xb5Tak6Q",
"S1xh9yst37",
"rkl3pcrM2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544456279549,
1543416473930,
1543206074914,
1541557640896,
1541152660210,
1540672195888
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1077/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1077/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1077/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1077/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1077/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1077/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes to approximate arbitrary conditional distributions of a pre-trained VAE using variational inference. The paper is technically sound and clearly written. A few variants of the inference network are also compared and evaluated in experiments.\", \"the_main_problems_of_the_paper_are_as_follows\": \"1. The motivation of training an inference network for a fixed decoder is not well explained.\\n2. The application of VI is standard, which limits the novelty and significance of the proposed method.\\n3. The introduction of the new term cross-coding is not necessary and does not bring more insight than a standard VI method.\\n\\nThe authors argued in the feedback that the central contribution is using augmented VI to do conditional inference, similar to Rezende et al., but didn't address the reviewers' main concerns. I encourage the authors to incorporate the reviewers' comments in a future revision, and explain how the proposed method brings a significant contribution, either by addressing a real problem or by improving VI methodology.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not well motivated and lack of novel contribution\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your feedback.\\n\\nMy score of 4 was mainly due to the lack of original contribution. The paper is technically sound, clearly written and interesting to read, but the inference methods discussed are already known and well-understood in the variational-inference community. There isn't anything in the authors' feedback that convinces me I missed something, so I'm afraid my review will remain the same.\\n\\nI sincerely hope that the authors find positive and constructive feedback in our reviews, and that they appreciate our good intentions and time we put in to help improve the paper. I wish the authors best of luck with their work.\"}",
"{\"title\": \"Author Feedback\", \"comment\": \"Thanks to the reviewers for their comments.\\n\\nThe goal of the paper is to infer p(y|x). The reviewers miss the central point of the paper, which is using the framework of Augmented VI to understand why it is justifiable to do inference targeting p(z|x) and the looseness thereby entailed. Note that Rezende et al. tackle EXACTLY this problem and design a custom algorithm, which we compared against. If it's all so simple, why would they even do that?\", \"reference\": \"Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and\\napproximate inference in deep generative models. In Proceedings of the 31st International\\nConference on Machine Learning (ICML), pp. 1278\\u20131286, 2014.\"}",
"{\"title\": \"A paper that needs work in terms of motivation, exposition, and evaluation\", \"review\": \"(apologies for this belated review)\\n\\nSummary\\n\\nThe authors consider the task of imputing missing data using variational auto-encoders. To do so, they assume a fixed pre-trained generative model, perform variational inference to infer a posterior on latent variables given a partial image, and then use this approximate posterior to predict missing pixels. They compare a variety of parameterizations of the variational distribution to HMC inference, and evaluate on MNIST, Celeb-A and the Anime data. \\n\\nComments\\n\\nThere are many things about this paper that I don\u2019t understand. My main concern is that I fail to follow why the authors are interested in this task. In what settings would we be interested in performing non-autoencoding variational inference in order to impute missing data? Moreover, in cases where we are interested in performing such imputations, what would we like to use the results for? This paper seems like a nice demo, but I\u2019m not entirely convinced I see a compelling application. \\n\\nMy second concern is about the baselines that are considered. If I were interested in carrying out this inference task, my inclination would not be to run an HMC chain to convergence, but instead to do something like annealed importance sampling (AIS), where at each step I run an iteration of HMC on a large batch of samples on a sequence of target densities that interpolate between the prior and full joint p(x, Z). If computational cost is a concern, I imagine this would not be more expensive than training a density estimator. Moreover, whereas HMC is generally not known to be a good method for estimating marginal likelihoods, AIS methods generally perform much better.\\n\\nFinally, I find the language used in this paper confusing. Cross-coding seems a misnomer for the technique that the authors propose. 
Isn\\u2019t this simply a form of variational inference in which q\\u03c8(Z) approximates p\\u03b8(\\u0396 | x)? The term \\u201c-coding\\u201d suggests that we somehow define an encoder that accepts the query as input. Moreover, isn\\u2019t the XCoder network just a neural density estimator? \\n\\nFinally, Lemma 1 seems like a really roundabout way of deriving a lower bound. The authors could instead just write:\\n\\n\\tlog p(x)\\n\\t>=\\n\\tE_q(Z,Y)[log p(x, Y, Z) - log q(Z, \\u03a5)]\\n\\t=\\n\\tE_q(Z,Y)[log p(x | \\u0396) + log p(Y | Z) + log p(Z) - log q(Z) - log p(\\u03a5 | Z)]\\n\\t=\\n\\tE_q(Z,Y)[log p(x | \\u0396) + log p(Z) - log q(Z)]\\n\\t=\\n\\tE_q(Z)[log p(x, \\u0396) - log q(Z)]\\n\\nThis avoids confusing terminology such as cross-coding, and shows that what the authors are doing is in fact just variational inference. Am I missing something here?\\n\\nI am also confused about how the comparison to HMC is set up. If you\\u2019re training q\\u03c8(Z), then you presumably need generate a certain number samples at training time. Shouldn\\u2019t you add this number of samples number of samples you generate in HMC, in order to get a more apples to apples comparison in terms of the amount of computation performed? As it stands, it is hard to evaluate whether these methods are given a similar number of samples. \\n\\nFinally, I am not quite sure what to make of the experimental evaluation. We see some scatter plots on MNIST with a 2D latent space, and some faces of celebrities in which there is arguably some sample diversity, although most of this diversity arises in blurry looking hairstyles. However, since the authors condition on the eyes, rather than, say, the nose or mouth, it is hard to know how good a job the network is doing at generalizing to multiple plausible faces. \\n\\nOverall, I find it difficult to judge the merit of this paper. Is this task in fact hard? Is it useful? Are the results good? 
Maybe the authors can give us some additional guidance on these questions.\\n\\nQuestions\\n\\n- I\u2019m a bit worried that not all the samples that we see in Figure 6 may have equally high probability under the posterior. Could the authors compute and report importance weights?\\n\\n\\tW = p(x, Z) / q(Z)\\n\\t\\n- Could the authors say something about the effective sample size that we obtain when using the learned distribution q(Z) as a proposal? \\n\\t\\n\\tESS = (\u03a3_k w^k)^2 / (\u03a3_k (w^k)^2)\\n\\n- Should it be the case that the ESS is low, and the weights are high variance, could the authors generate a sufficient number of samples to ensure that the ESS = 25 (i.e. the number of images in the figure) and then show the 25 highest-weight samples (or resample 25 images with probability proportional to their weight)?\\n\\n\\t\\nMinor \\n\\n\\n- Equation (3): There\u2019s an extra p_\u03b8 in the first integral\\n\\n- In the proof in Appendix 6.1 \\n\\n\\tKL[ q\u03c8(Z) \u2016 p\u03b8(Z | x) ] + KL[ q\u03c8(Y | Z) \u2016 p\u03b8(Y | Z, x)]\\n\\nit would be clearer to explicitly denote the expectation over q\u03c8(Z)\\n\\n\\tKL[ q\u03c8(Z) \u2016 p\u03b8(Z | x) ] + E_q\u03c8(Z)[ KL[ q\u03c8(Y | Z) \u2016 p\u03b8(Y | Z, x)] ]\\n\\t\\n(I had to google lecture notes to find out that this expectation is sometimes implicit, which \\nas far as I know is not very standard).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"I don\\u2019t quite see what is new about this paper\", \"review\": \"This paper proposes the use of unamortized Black Box Variational Inference for data imputation (given a fixed VAE with a factorized decoder), where the choice of variational distribution is a standard flow model.\\n\\nThe exploitation of the decoder factorization and the choice to set q(y | z) = p(y | z) was explored in the Bottleneck Conditional Density Estimation paper.\\n\\nTo my understanding, this paper fails to contextualize their work with the existing literature and is simply an exercise in the rote application of existing inference procedures to a well-established inference problem (data imputation). \\n\\nUnless the authors can convince me of the novelty of their approach or what I have overlooked in their proposal, I do not recommend this paper for acceptance.\", \"references\": \"Ranganath, et al. Black Box Variational Inference. AISTATS 2014.\\nShu, et al. Bottleneck Conditional Density Estimation. ICML 2017.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting read, but little original contribution\", \"review\": \"Paper summary:\\n\\nGiven a pre-trained VAE (e.g. over images), this paper is about inferring the distribution over missing variables (e.g. given half the pixels, what is a plausible completion?). The paper describes an approach based on variational inference with normalizing flows: given observed variables, the posterior over the VAE's latents is inferred (variationally) and plausible completions for missing variables are sampled from the VAE decoder.\", \"technical_quality\": \"The presented method is technically correct. The evaluation carefully compares different types of normalizing flow and HMC, and seems to follow good practices.\\n\\nI have a suggestion for improving the GVI method. The way it's described in the paper, GVI requires computing the determinant of a DxD matrix, which costs O(D^3), and there is no guarantee that the matrix is invertible. However, this approach over-parameterizes the covariance matrix of the modelled Gaussian. Without losing any flexibility, you can use a lower triangular matrix with strictly positive diagonal elements (e.g. the diagonal elements can be parameterized as the exp of unconstrained variables). That way, the determinant costs O(D) (it's just the product of diagonal elements) and you ensure that the matrix is invertible (because the determinant is strictly positive), without hurting expressivity. You can think of this as parameterizing the Cholesky decomposition of the covariance matrix.\\n\\nAlso, there are more flexible normalizing flows, such as Inverse Autoregressive Flow, that can be used instead of the planar flow used in the paper.\", \"clarity\": \"The paper is written clearly and in full detail, and the mathematical exposition is clear and precise.\", \"some_typos_and_minor_suggestions_for_improvement\": [\"It'd be good to move Alg. 1 and Fig. 
1 near where they are first referenced.\", \"Page 2: over to \\\\theta --> over \\\\theta\", \"Eq. 3: p_\\\\theta appears twice in the middle.\", \"one can use MCMC to attempt sampling --> one can use MCMC to sample\", \"Eq. 5: should be q_\\\\psi as subscript of E.\", \"Fig. 7, caption: should be GVI vs. NF.\", \"In references, should be properly capitalized: Hamiltonian, Langevin, Monte Carlo, Bayes, BFGS\", \"Lemma 1: joint divergence is equivalent to --> joint divergence is equal to\", \"Lemma 1: in the chain rule for KL, the second KL term should be averaged w.r.t. its free variables.\"], \"originality\": \"In my opinion, there is little original contribution in this paper. The inference method presented (variational inference with normalizing flows) is well-known and already in use. The paper applies this method to VAEs, which is a straightforward application of a well-known inference method to a relatively simple graphical model (z -> {x, y}, with x, y independent given z).\\n\\nI don't see the need for introducing a new term (cross-coder). According to the paper, a cross-coder is precisely a normalizing flow (i.e. an invertible smooth transformation of a simple density). I think new terms for already existing ideas add cognitive load to the community, and are better avoided.\", \"significance\": \"In my opinion, constructing generative models that can handle arbitrary patterns of missing data is an important research direction. However, this is not exactly what the paper is about: the paper is about inference in a given generative model. Given that there is (in my opinion) no new methodology in the paper, I wouldn't consider this paper a significant contribution.\\n\\nI would also suggest that in a future version of the paper there is more motivation (e.g. in the introduction) of why the problem the paper is concerned with (i.e. missing data in generative models) is significant. 
Is it just for image completion / data imputation, or are there other practical problems? Is it important as part of another method / solution to another problem?\", \"review_summary\": \"\", \"pros\": [\"Technically correct, gives full detail.\", \"Well and clearly written, precise with maths.\", \"Evaluation section interesting to read.\"], \"cons\": [\"No original contribution.\", \"Could do a better job motivating the importance of the problem.\"], \"minor_points\": [\"I don't completely agree with the way VAEs are described in sec. 2.1. As written, it follows that VAEs must have a Gaussian prior and a conditionally independent decoder. Although these are common choices in practice, they are not necessary: for example, one could take the prior to be a Masked Autoregressive Flow and the decoder a PixelCNN.\", \"Same for observation 1. This is not an observation, but an assumption; that is, the paper assumes that the decoder is conditionally independent. This is of course an assumption that we can satisfy by design, but it's a design choice that restricts the decoder in a specific way.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
rkx8l3Cctm | Safe Policy Learning from Observations | [
"Elad Sarafian",
"Aviv Tamar",
"Sarit Kraus"
] | In this paper, we consider the problem of learning a policy by observing numerous non-expert agents. Our goal is to extract a policy that, with high-confidence, acts better than the agents' average performance. Such a setting is important for real-world problems where expert data is scarce but non-expert data can easily be obtained, e.g. by crowdsourcing. Our approach is to pose this problem as safe policy improvement in reinforcement learning. First, we evaluate an average behavior policy and approximate its value function. Then, we develop a stochastic policy improvement algorithm that safely improves the average behavior. The primary advantages of our approach, termed Rerouted Behavior Improvement (RBI), over other safe learning methods are its stability in the presence of value estimation errors and the elimination of a policy search process. We demonstrate these advantages in the Taxi grid-world domain and in four games from the Atari learning environment. | [
"learning from observations",
"safe reinforcement learning",
"deep reinforcement learning"
] | https://openreview.net/pdf?id=rkx8l3Cctm | https://openreview.net/forum?id=rkx8l3Cctm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gQIQUZeE",
"HkgmEkXjRm",
"r1gqUGTVaQ",
"BklNNWpNaQ",
"ByxNik6ETQ",
"HJxxDyTEpX",
"rJlp0ZcnhX",
"BJlViRtY3m",
"Syx8_Y4t2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544803147020,
1543348010611,
1541882449818,
1541882156374,
1541881756064,
1541881688389,
1541345749408,
1541148315837,
1541126510402
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1076/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1076/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1076/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1076/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1076/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1076/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1076/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1076/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1076/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper studies safer policy improvement based on non-expert demonstrations. The paper contains some interesting ideas, and is supported by reasonable empirical evidence. Overall, the work has a good potential. The author response was also helpful. That said, after considering the paper and rebuttal, the reviewers were not convinced the paper is ready for publication, as the significance of this work is limited by a rather strong assumption (see reviews for details). Furthermore, the presentation of the paper also requires some work to improve (see reviews for detailed comments).\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea, but limited applicability\"}",
"{\"title\": \"Summary of the revision\", \"comment\": \"Dear reviewers,\\n\\nThank you for your thoughtful feedback. We have updated the manuscript according to your suggestions:\\n\\n1. In section 4 (Average Behavior and Its Value Function) we introduced two alternatives for estimating the value function with their pros and cons:\\n(1) TD learning - without using the average behavior estimation.\\n(2) Approximated MC learning.\\n\\n2. In the Taxi example (section 5 - Safe Policy Improvement) we compared the improvement steps with TD and MC learning. \\n\\n3. We reran the Atari experiments with both TD and MC estimators - results are presented and discussed in section 6.\\n\\nTo summarize, our experiments show that: \\n(1) In the tabular case (Taxi example), there is no clear winner between TD and MC. \\n(2) As we expected, with NN (Atari example) the MC learning method provided much better results.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for his review.\", \"we_would_appreciate_if_the_reviewer_could_reevaluate_the_significance_of_our_work_in_the_light_of_the_following_answer\": \"\", \"please_let_us_focus_in_the_first_part_of_our_answer_in_section_5\": \"The entire derivation in section 5 does not necessarily assume that Q^D equals Q^\\\\beta. Our derivation only assumes that we are given some approximation of Q and are required to take an improvement step. In this sense, Lemma 5.2 is also applicable to a standard setup where only a single policy is observed. When a single policy is observed, the entire error, i.e. \\\\eps(s,a) in Eq. (7), is due to the randomness in the sampled trajectories. In this case, if Q was learned with MC trajectories, the SE of \\\\eps is given by Eq. 6. \\n(Please refer to our answer to reviewer 1 regarding a similar justification for TD learning.) \\n\\nIn the multiple policies setup, we may decompose the estimation error into two terms \\\\eps(s,a) = \\\\eps^D(s,a) + \\\\eps'(s,a), where \\\\eps^D is the fixed error due to the difference between Q^D and Q^\\\\beta and \\\\eps' is the r.v. error of estimating Q^D from N MC trajectories.\\nNotice that the number of visitations is still N_s * \\\\beta(a|s), thus the SE of \\\\eps' is still given by Eq. 6 and hence the need for the reroute constraint remains.\\nWhile \\\\eps^D(s,a) may contribute to the improvement penalty, it is bounded by the proximity of P(pi|s) and P(pi|\\\\tilde{s},\\\\tilde{a}\\\\xrightarrow[k]{} s). All our experiments suggest that, generally, the improvement due to taking favorable actions exceeds possible degradation due to minor evaluation errors. This pattern repeats both in the tabular and in the deep cases. 
However, the second term \\eps'(s,a) may still grow to the extreme if uncontrolled, and this is the main source of performance degradation when taking greedy/TV constrained steps or a single PPO optimization.\\n\\nWe would also like to highlight two results which we believe are important to the RL literature. \\n\\nFirst, our derivation shows that both TRPO and PPO overlooked this consideration in their trust region definition. Therefore, while they are considered guaranteed improvement procedures, this caveat may lead to potential drops in their learning pattern. The reroute constraint is designed to limit such drops.\\n\\nIn addition, our method is resilient to small errors of \\eps^D(s,a), such that if the action ranking does not change, then improvement is guaranteed. This is in contrast to policy gradient methods, which rely on the quantitative Q-value and not the categorical action ranking. Lastly, since we do not optimize the policy network, our improvement step is exact and free of neural network optimization pitfalls such as convergence to local minima or overfitting. This is true for a 1-step improvement, and in an iterative improvement process it naturally generalizes to something very similar to the \\\"supervised policy update\\\" paper [1], but contrary to their constraints, our constraint is well motivated. Please also refer to our answer to reviewer 1 for potential generalization to iterative RL.\", \"regarding_section_4\": \"All reviewers raised questions about the seemingly unpopular choice of learning from MC returns, particularly since directly estimating \\beta with Off-Policy methods is justified and does not require any assumptions or approximations about the difference between Q^D and Q^\\beta. Our experiments with 1-step TD learning of Q^\\beta provided lower scores than with MC evaluation. This led us to explain the rationale of MC learning via propositions 4.1, 4.2. 
In order to properly compare between TD and MC methods, we will make the following amendments:\\n\\n(1) Present the two approaches of evaluating Q^\\beta: 1-step TD and approximated MC.\\n(2) Add our results of RBI learning with the evaluated 1-step TD Q^\\beta. \\n(3) Discuss possible reasons why MC is better in this setting.\", \"we_postulate_that_there_are_2_main_reasons_why_td_is_inferior_to_mc_in_finite_small_datasets\": \"(1) TD requires evaluation of \\beta, even with 1-step, since Q^\\beta(s,a) = r(s,a) + \\gamma * \\sum_{a'} \\beta(a'|s')Q^\\beta(s',a'). Since our evaluation of \\beta inherently contains errors, they propagate to the Q-value evaluation.\\n(2) TD, in contrast to MC, uses bootstrapping. Therefore, errors from poorly evaluated states propagate to other states. In an iterative RL setting, these can be corrected, but with a fixed-size dataset it is better to avoid bootstrapping by using MC methods.\", \"references\": \"[1] Vuong QH, Zhang Y, Ross KW. Supervised Policy Update. arXiv preprint arXiv:1805.11706. 2018 May 29.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the insightful comments.\\n\\nRegarding other Atari LfD works like [1,2] and \\\"OpenAI - Learning Montezuma\u2019s Revenge from a Single Demonstration\\\":\\n\\nThese works utilize the demonstrations to guide an RL agent which actively interacts with the environment (the first two via regularization and the last via curriculum learning). In contrast, our reported score is obtained without any learning from self-interaction with the environments. Instead, we focus on finding a safe initial policy which beats the average behavior.\\n\\nA fairer comparison would have been with the initial score reported by DQfD (which is the score after a training phase of 750,000 mini-batches using only the demonstration data). In this case, while they do not provide the exact score, their starting point in the graph shows that their initial policy (while much better than random - as expected) is inferior to our learned policy (mspacman < 1000, revenge ~1300, qbert ~5000, spaceinvaders is absent). However, this comparison is still a little misleading since the datasets of demonstrations are different: while their dataset consists of a few demonstrations from a single expert, we have a dataset with many more observations from many different players (the statistics of those players' scores are found in [5]). Therefore, in order to test our approach against this benchmark, we implemented DQfD and trained it on our dataset. The scores are provided in our paper and show that it obtains a lower score than our behavioral cloning phase in all games. On the other hand, RBI significantly improves upon the behavioral cloning in 2 out of 4 games (Qbert +83% and Revenge +36%) and slightly improves Mspacman (+4.5%) (in spaceinvaders we observe a -2% drop). \\n\\nIn contrast to this line of work, both papers [3,4] consider a model-based approach where they assume enough historical data to build a model and a simulator. 
Specifically [3] assumes that it has an estimated model with a bounded error on the transition probabilities. Moreover, its approximated solution requires \\\"fixing the model\\\" by counting over the entire state-space. These assumptions are often too restrictive when we learn a policy from raw sensory input with Deep Neural Network and a fixed size dataset. Therefore, we resort to a model-free approach. \\n\\nTo conclude, as shown by our experimental results, our approach is the best performer when you are required to provide a safe initial policy given multiple weak demonstrators in learning from raw sensory input. This setup is of practical interest in the autonomous car setting for example where you can collect human drivers statistics but you must provide initial safe policy. Another example is Robot teleoperation where a robot is teleoperated by many operators (with different policies) for a limited period and later the robot is required to act autonomously.\\n\\nPlease also refer to our answer to reviewer 3, regarding learning the Q-value from MC vs TD method.\", \"references\": \"[1] Hester T, Vecerik M, Pietquin O, Lanctot M, Schaul T, Piot B, Horgan D, Quan J, Sendonaris A, Dulac-Arnold G, Osband I. Deep Q-learning from Demonstrations. arXiv preprint arXiv:1704.03732. 2017 Apr 12.\\n[2] Pohlen T, Piot B, Hester T, Azar MG, Horgan D, Budden D, Barth-Maron G, van Hasselt H, Quan J, Ve\\u010der\\u00edk M, Hessel M. Observe and Look Further: Achieving Consistent Performance on Atari. arXiv preprint arXiv:1805.11593. 2018 May 29.\\n[3] Ghavamzadeh M, Petrik M, Chow Y. Safe policy improvement by minimizing robust baseline regret. InAdvances in Neural Information Processing Systems 2016 (pp. 2298-2306).\\n[4] Laroche R, Trichelair P, Asri LE. Safe Policy Improvement with Baseline Bootstrapping. arXiv preprint arXiv:1712.06924. 2017 Dec 19.\\n[5] Kurin V, Nowozin S, Hofmann K, Beyer L, Leibe B. The Atari Grand Challenge Dataset. arXiv preprint arXiv:1705.10998. 
2017 May 31.\"}",
"{\"title\": \"Response [1/2] to Reviewer 1\", \"comment\": \"We thank the reviewer for the insightful comments.\", \"q\": \"Could we also use a similar policy update for policy improvement in reinforcement learning?\", \"a\": \"Yes. A direct consequence of this work is a new iterative policy algorithm which has two parts: \\n(1) the learner estimates the past policy (with a KL divergence loss) and evaluates its Q-value function. \\n(2) the actor takes the policy and its value and calculates in each step a policy which maximizes the reroute constraint (and then saves to a memory buffer the generated policy, the state and the reward). We identify two important advantages of this approach: \\n(1) With respect to greedy algorithms like DQN: it allows increasing the safety level of each improvement step - as we showed in the paper, greedy is a very precarious approach, particularly in less explored areas. \\n(2) With respect to policy gradient methods: the optimization of the policy, i.e. the calculation \\\\beta \\\\to \\\\pi, does not depend on the parametric form, hence it is exact and avoids NN optimization traps (overfitting, local minima, etc.). Therefore, the rerouted policy (behavioral followed by Max-Reroute optimization) should provide better trajectories and support a faster learning rate.\\nNote that to substantiate these claims for iterative RL, additional experimental work is required; therefore we provide them here just as a hunch and motivation. However, in this work we believe that we have firmly proved them (both theoretically and experimentally) for a single improvement step: our Reroute constraint is better than a greedy step / TV constrained step and a single PPO optimization step.\\n\\n\\nAbout proposition 4.2:\\n\\nProposition 4.2 suggests a hypothetical policy which has a value function of Q^D. 
The structure of this policy, termed \\beta^D_{\\tilde{s},\\tilde{a}}, is similar to the structure of the average behavior \\beta with a single difference: the weights P(p^i|s) in \\beta are changed to P(p^i|\\tilde{s},\\tilde{a} \\xrightarrow[k]{} s). This is equivalent to P(p^i|s) but with a dataset \\mathcal{D}_{\\tilde{s},\\tilde{a}}^k which is a subset of D and contains all the state-action pairs that are k-step away from a state-action pair \\tilde{s},\\tilde{a}. Therefore, the difference |Q^\\beta - Q^D| is bounded by how much P(p^i|\\tilde{s},\\tilde{a} \\xrightarrow[k]{} s) deviates from P(p^i|s). We experimentally show that: (1) this deviation is generally small - i.e. the TV distance between \\beta and \\beta^D_{\\tilde{s},\\tilde{a}} is low; (2) if we only consider the Q-value ranking (since our improvement step is based only on the action ranking), we obtain a very high Pearson's rank correlation score. In addition, please refer to our answer to reviewer 3 concerning: (1) why we claim that Q^D is sufficient for our policy improvement step; and (2) the comparison to TD methods.\", \"about_the_selection_types_in_the_taxi_example\": \"To compare between Q^D and Q^\\beta we tried many datasets with different synthetic policies which were based on different mixtures between semi-random actions (taking a random action 75% of the time and the optimal policy otherwise) and optimal actions. The mixture was based on different state allocations (i.e. selections), where in one set, named S*, the policy is optimal and in the complement the policy is semi-random. The exact definitions of the selections are in appendix D. Essentially, they simulate different types of datasets: \\n(1) random selection simulates a dataset of weak demonstrators with different policies. \\n(2) the two other selections simulate datasets where different players demonstrate optimal policies in different parts of the MDP (those parts are unknown to the agent).\"}",
"{\"title\": \"Response [2/2] to Reviewer 1\", \"comment\": \"The figure captions need to be much more exhaustive:\\n\\nAgreed. Somehow we missed this part. Each iteration is a backward learning step applied both to the learned value and the learned policy. During the entire process, we evaluated 3 different policies: (1) behavior; (2) behavior + TV constrained step; and (3) behavior + reroute constrained step. In addition, we plot the learning curve of the DQfD baseline. The left figure also plots the performance of a PPO optimization step applied to the behavior policy learned after ~1.5M iterations. We will add this information to the figures and to the comparison-with-PPO paragraph.\", \"q\": \"How would we solve Equation 8 with continuous actions?\", \"a\": \"Eq. 8 and the reroute constraint are motivated by the Standard Error of the Q-value evaluation and Lemma 5.2. Generalization to a continuous action space requires several considerations. First, for Learning from Observations (LfO), learning the average policy requires some model for the density function of \\beta; we assume that, unlike common parameterizations of iterative learners, a Gaussian model or even a Mixture of Gaussians would not qualitatively represent the \\beta distribution. It may be possible that a quantile network is a proper choice. The second challenge is to quantify or bound the expected error or its variance. Here too, there is no general solution and it depends on the parameterization. To conclude, we do believe that designing a safe trust region for continuous control is a desirable goal and it may provide better results on real-world data than trust regions that do not take into account estimation error (like KL). However, we believe that it deserves additional future research. \\n\\n\\n(1) Could you add an algorithm box for estimating the Q-function? (2) Do we estimate every Q-function in isolation using MC estimates and then just use the weighted average?\\n\\n(1) Sure. 
\\n(2) One of the key points in our approach is avoiding estimating the Q-function of each and every different player. This is both computationally prohibitive and, in addition, requires estimating the conditional probability P(p^i|s), i.e. the probability of sampling a player p^i given a sampled state s. We show that learning with MC trajectories and an L1-loss circumvents this obstacle.\", \"about_the_comparison_to_dqn\": \"Notice that we indeed compared our results to DQfD [1], which is the DQN method with 2 additional regularization terms designed for learning from demonstration trajectories: an L2-regularization to prevent overfitting a fixed small dataset and, more importantly, a penalty for choosing actions different from the action demonstrated by the demonstrator. For all the games in our dataset, we show that our method is always better than DQfD. In our experiments, DQN (i.e. DQfD without the regularization terms) provided lower scores than DQfD. Please also see the comment to reviewer 2 regarding the correct comparison between DQfD and our RBI method.\\n\\nRegarding learning the Q-value with MC vs. TD methods, please refer to our answer to reviewer 3.\", \"references\": \"[1] Hester T, Vecerik M, Pietquin O, Lanctot M, Schaul T, Piot B, Horgan D, Quan J, Sendonaris A, Dulac-Arnold G, Osband I. Deep Q-learning from Demonstrations. arXiv preprint arXiv:1704.03732. 2017 Apr 12.\\n[2] Kearns MJ, Singh SP. Bias-Variance Error Bounds for Temporal Difference Updates. In COLT 2000 Jun 28 (pp. 142-147).\"}",
"{\"title\": \"A good paper with interesting theory and algorithmic contribution. The weaknesses are the clarity as well as the limited experiments.\", \"review\": [\"The paper looks at learning a policy from multiple demonstrators which should also be safely improved by a reinforcement learning signal. They define the policy as a mixture of policies from the single demonstrators. The paper gives a new way to estimate the value function of each policy where the overall policy is defined as a mixture of the single policies. The paper subsequently looks at the standard error of the value function estimation and then defines the policy improvement step in the presence of value estimation error. The resulting reroute constraint for the policy improvement step is evaluated on the taxi toy task as well as on 4 different Atari domains.\", \"This paper presents an interesting idea which is also based on an exhaustive theoretical derivation. However, the paper is lacking clarity and motivation, which makes it almost impossible to understand at the first pass. Moreover, the presented results are promising but not exhaustive, and the resulting algorithm is also restricted to discrete action domains. See more comments below:\", \"The paper consists of 2 parts, the average behavior policy and its value function and the safe policy improvement step. The relation between these two parts is not clear. Is the policy improvement step only working if the policy is defined as in section 4 and the value function computed as in section 4?\", \"Proposition 4.2 needs to be much better motivated and explained. It is totally unclear at this part of the paper why proposition 4.2 is used.\", \"Please explain why proposition 4.2 indicates that Q^D \\\\approx Q^\\\\beta\", \"The selection type of S in the taxi example is also unclear.\", \"How would we solve Equation 8 with continuous actions / parametrized policies \\\\pi?
Without this extension, the algorithm is quite restricted.\", \"The figure captions need to be much more exhaustive. I am not sure I understand the x axis of Figure 4 (right). What iterations are shown here? We only do one improvement step of the behavior policy, without any resampling, is that right?\", \"Could we also use a similar policy update for policy improvement in reinforcement learning?\", \"Could you add an algorithm box for estimating the Q-function? Do we estimate every Q-function in isolation using MC estimates and then just use the weighted average?\", \"It would be interesting to also compare the value function learning method proposed in the paper in isolation to other value function learning methods such as DQN. While the presented method is simple (learn from MC estimates), this is also known to be very data inefficient.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Contribution to safe RL with a weak empirical validation\", \"review\": \"The paper plugs the ideas of TRPO/PPO into value-based RL. Though there is no big surprise in terms of the tools used, it is interesting to know that safe policy improvement is possible in this setting.\\n\\nNevertheless, for a conference such as ICLR, which is interested in the performance of ML tools, I have two concerns:\\n\\n- The scores obtained on all test tasks on Atari games are quite far from the state of the art. As an example, OpenAI announced being able to score 74k at Montezuma's revenge with a single demonstration using PPO and a careful selection of the initialization states (see blog post https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/). I understand that the setting is not directly comparable, but the goal of RL is to learn good policies. This remark would vanish if the authors could come up with a real use case where for some reason their approach is the best performer.\\n\\n- The proposed approach is benchmarked against few algorithms while many exist in the safe RL literature. The setting is often slightly different but adaptation is often possible. In particular I'd like more positioning wrt what is proposed by the work of Petrik et al. (https://papers.nips.cc/paper/6294-safe-policy-improvement-by-minimizing-robust-baseline-regret.pdf; the paper is cited but the first author is incorrect). What are the deep differences that make this paper's setting more interesting (in terms of what can be done from an applied perspective) or more challenging in terms of mathematical tools?
Here I feel the core difference is a comparison against an average of policies, which becomes the new baseline to beat.\\n\\nAlso note that at EWRL'18 an alternative approach for value-based safe RL was presented: https://arxiv.org/pdf/1712.06924.pdf\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The assumption in this paper significantly deteriorates the significance of the results\", \"review\": \"In this paper, the authors study the problem of learning from observation, a reinforcement learning setting where an agent is given a data set of experiences from a potentially arbitrary number of demonstrators. The authors propose a method which deploys these experiences to initialize a policy, then estimates the value of this policy in order to improve it.\\n\\nThe paper is well written and it is easy to follow. \\n\\nMost of the theoretical results are interesting and the derivations are somewhat straightforward, but they do not fully match the main claim in the paper. Mainly, the contribution in this paper heavily depends on an assumption that Q^D and Q^\\\\beta are close to each other. This assumption simplifies many things, resulting in a simple algorithm. But this assumption is too strong, since the main challenge in the line of learning from observation comes from the fact that this assumption does not hold. This assumption, together with the similarity in distributions mentioned in proposition 4.2, makes the contribution of this paper significantly weaker.\\n\\nPlease let me know if you do not actually use this assumption in your results and justification.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkNSehA9FQ | Open Vocabulary Learning on Source Code with a Graph-Structured Cache | [
"Milan Cvitkovic",
"Badal Singh",
"Anima Anandkumar"
] | Machine learning models that take computer program source code as input typically use Natural Language Processing (NLP) techniques. However, a major challenge is that code is written using an open, rapidly changing vocabulary due to, e.g., the coinage of new variable and method names. Reasoning over such a vocabulary is not something for which most NLP methods are designed. We introduce a Graph-Structured Cache to address this problem; this cache contains a node for each new word the model encounters with edges connecting each word to its occurrences in the code. We find that combining this graph-structured cache strategy with recent Graph-Neural-Network-based models for supervised learning on code improves the models' performance on a code completion task and a variable naming task --- with over 100\% relative improvement on the latter --- at the cost of a moderate increase in computation time. | [
"deep learning",
"graph neural network",
"open vocabulary",
"natural language processing",
"source code",
"abstract syntax tree",
"code completion",
"variable naming"
] | https://openreview.net/pdf?id=SkNSehA9FQ | https://openreview.net/forum?id=SkNSehA9FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByltV_RJeV",
"B1l4NLJk1E",
"BJlH3PWsaQ",
"HkepXwWspm",
"ryxAy3xsTX",
"SJgyeZli6Q",
"BkxrXrewT7",
"B1xbctmITX",
"S1eEG1gLam",
"HJehnx0bam",
"HJexTpnWaQ",
"S1xu_TT1T7",
"rylubM3k6Q",
"ByewoRiy67",
"rklbFlAO2X",
"HkgDMtLd37",
"rJxfsg4Dh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544706097500,
1543595564386,
1542293421179,
1542293285488,
1542290406371,
1542287591096,
1542026525512,
1541974409221,
1541959435685,
1541689523575,
1541684664192,
1541557615977,
1541550591883,
1541549727150,
1541099641305,
1541069071307,
1540993178220
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1075/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1075/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1075/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer3"
],
[
"~Miltiadis_Allamanis1"
],
[
"ICLR.cc/2019/Conference/Paper1075/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1075/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1075/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1075/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1075/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1075/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduces fairly complex methods for dealing with OOV words in graphs representing source code, and aims to show that these improve over existing methods. The chief and valid concern raised by the reviewers was that the experiments had been changed so as not to allow proper comparison to prior work, even where comparison could be made. It is essential that a new method such as this be properly evaluated against existing benchmarks, under the same experimental conditions as presented in related literature. It seems that while the method is interesting, the empirical section of this paper needs reworking in order to be suitable for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Results do not justify method complexity\"}",
"{\"title\": \"A good example\", \"comment\": \"Thank you to the reviewers of this paper for engaging in discussion not just with the authors, but with one another, and providing substantial and detailed reviews. You are an excellent example for the community, and demonstrate the high standard according to which papers should be evaluated in ML conferences. Your efforts are deeply appreciated.\\n\\nAC\"}",
"{\"title\": \"claims\", \"comment\": \"If you agree with (b), then no baseline is [1]. The difference is not merely in the readout.\\n\\nI take the results of Miltos as a positive sign, but I cannot realistically give a good recommendation to the authors to implement what he has done. Specifically for (a), this is central for all of the prior works that do such models.\\n\\nWhat I can recommend to the authors to improve the paper is to change the architecture so that it is not so unusual in comparison to prior works that deal with program variables (e.g. to include scope analysis) and to implement a fairer comparison.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks again for all your attention to our paper, AnonReviewer3. I'm sorry that our rebuttal came off as aggressive - that certainly wasn't our intention. We're very grateful for all your feedback!\", \"just_to_hopefully_resolve_the_remaining_open_points\": \"1) \\\"...without 10x more data.\\\"\", \"we_should_have_been_clearer_here\": \"we were referring only to other work on Java. It's been our experience (and demonstrated in e.g. [2]) that performance across programming languages is pretty incomparable. [1] doesn't study Java, and we also show that their model performs less well than our method (on our Java dataset and tasks) in our paper. But instead of \\\"better performance\\\" we should have written \\\"better performance on our same tasks on Java\\\", which would have avoided our claim sounding incorrect.\\n\\n2) \\\"...improve performance over a reasonable baseline...\\\"\\nI suppose we'll have to agree to disagree here.\\n\\n3) \\\"The Closed Vocab + AugAST entry in Table 2 is the same model as in [1]\\\"\\nIt is as far as we can tell, but maybe we've missed something?\\n\\n4) \\\"...why is the attention not to supernodes...\\\"\\nThis is a great idea for an architecture, and an interesting question! It's just not one we studied. But it's not obvious to me that it is so much simpler and better than our method that our contribution is pointless in light of it.\\n\\n5) \\\"...no state-of-the-art results or even comparable results...\\\"\\nAgain, I suppose we'll have to agree to disagree. Our naming task is identical to that of [1] and other prior work, and we feel our Fill-In-The-Blank task is comparable to [1] and other prior work as well (I'll defer to Miltos's comment here).
But if you don't feel our comparisons to the model in [1] and our ablation studies show state-of-the-art results and justify \\\"why this idea and not a similar one\\\", then I'm not sure what more we can offer to convince you.\"}",
"{\"title\": \"?\", \"comment\": \"I apologize for discussing the incorrect claims. I don't see how further discussion about them being in a gray or white area leads to any positive outcome.\", \"correct_me_if_i_am_wrong_on_these\": \"a) Conceptually, the paper removes the supernodes of prior works [1,2,3] (one node per variable) and introduces one node per subtoken. \\nb) None of the baselines includes one node per variable.\"}",
"{\"title\": \"Incorrect claims:\", \"comment\": \"\\\"We are not aware of any work that achieves better performance than ours without using >10x more data\\\". It was already shown that related work [1] has a similar amount of data and gets better precision.\\n\\n\\\"If the former, we feel the experiments refute that, showing that the cache improved performance quite a bit.\\\". The cache is not shown to improve over a reasonable baseline.\\n\\n\\\"The Closed Vocab + AugAST entry in Table 2 is the same model as in [1]\\\"\\n\\nremark that prior works don't count method naming, which our model does\\n\\n---\\n\\nOverall, I disagree with the premise that to fix the paper, one needs to only fix the presentation. The incorrect claims come from the aggressiveness to rebut the review. The problem with the work is that it does not demonstrate the need for what it introduces and why it is a good idea. Miltos's comment also points at at least two differences with [1] - i) attention and ii) the vocabulary - and it is not clear how their improvements relate to this work except at a high level.\\n\\nIn terms of optimality of the architecture, the biggest problem comes from the fact that both tasks need to predict facts about a variable. If attention to existing variables is used, why is the attention not to supernodes summarizing all occurrences of a variable? If a variable is to be named, the same question applies - why isn't it one supernode for all occurrences of the predicted variable? Would these natural changes eliminate much of the need for the contribution of the paper?\\n\\nAt the moment I see the work as presenting an idea, but without depth regarding what tasks this is good at (no state-of-the-art results or even comparable results to other work) and why this idea and not a similar one.\"}",
"{\"comment\": \"Hi,\\nI've followed the discussion here and as one of the authors of [1] I would like to mention two points with respect to the \\\"FillInTheBlank\\\" task:\\n\\n* I don't feel that the formulation used by the authors is unnecessarily complex. In particular, I find the fact that they avoid the painful speculative analysis for each candidate variable, which we needed to do, very appealing.\\n\\n* Although it's true that VarMisuse doesn't suffer from a vocabulary problem, per se, the idea of connecting all nodes with the same subtokens to a single \\\"supernode\\\" is interesting. Based on a workshop presentation by the authors earlier this year, we did implement this trick within our existing VarMisuse code, which gave a +5% absolute performance increase, placing the VarMisuse accuracy at around 90%. In that sense, I find this a valuable idea.\\n\\n(full disclosure: I am aware of the identity of the authors but I have no conflict of interest with them)\\n\\n-Miltos\", \"title\": \"Thoughts on the FillInTheBlank task\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks again for the comments and clarifications, AnonReviewer3.\\n\\nWe agree with all your descriptions of the prior works you mention. We had read them all before publishing our paper. But I'm afraid I don't see what \\\"incorrect claims\\\" we made - could you be more specific?\\n\\nWe also certainly agree with your overarching point that our architecture, optimized as it is for performing multiple tasks, is unlikely to be state-of-the-art for any one of them. If you don't feel that developing an architecture for learning representations of source code that are useful for multiple tasks is a worthwhile goal, then I doubt there's anything we can say to convince you our paper is meritorious.\"}",
"{\"title\": \"Incorrect claims in rebuttal about prior works, please read the references.\", \"comment\": \"First, there are comparisons for two tasks: FillInTheBlank vs NameMe.\\n\\nFillInTheBlank is addressed in\\n[1] Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs\\nwith graphs. ICLR 2018\\n\\n1) The formulation is unnecessarily complex. The final task is nearly identical to [1], but the means to solve it introduce problems that do not exist for this task.\\n4) See 6. [1] solves VarMisuse without the need for any cache.\\n6) VarMisuse from [1] does not suffer from vocabulary problems, because they pick from variables in scope and do a softmax over their computed vectors. See also the row \\\"Node Labels: Tokens instead of subtokens\\\" in their ablation study, which shows that there is no loss in accuracy from the vocabulary. Vocabulary does not play a role here if the task is defined properly.\\nAlso [1] does not have a C++ dataset, only a C# dataset.\\n\\nNameMe is a task that has previously been solved with other methods and architectures.\\n[2] Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from Big\\nCode\\n[3] Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav. A General Path-Based Representation for Predicting Program Properties\\n[2,3] are solving a much more complex problem with higher precision. Have a look also at: https://arxiv.org/pdf/1809.05193.pdf , which uses neural networks.\\n\\n1) See how the task is formulated in [2,3]. What is proposed in Figure 1 is strictly more complicated and NameMe solves only a subset of the task from [2,3]. Experimentally, there is code from these prior works to compare to if results are meant to become comparable.\\n2) This is a weakness of the paper, not a strength. Evaluation does not help convey a message.\\n8) Method names are also counted in [2,3].
Again, aiming to make the results incomparable to prior works certainly does not make the submission stronger.\\n\\nIdeally, a selling point of this paper may be that one architecture is \\\"good\\\" at both of these tasks. However, it is unlikely to be state-of-the-art for either of them, and one architecture for the two tasks needs stronger application motivation.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you so much for the detailed comments! These are really helpful.\", \"regarding_1\": \"You're exactly right about the model structure, and I'm completely with you that \\\"graph\\\" is a term so flexible as to be often unhelpful. We just had to pick a name for the model feature we were introducing, and we hoped \\\"graph-structured cache\\\" was clear and correct: it's a collection of words represented as nodes in a graph.\\nBut I entirely see how the name \\\"graph-structured cache\\\" might cause a reader expect to see a complicated adjacency structure within the cache nodes. There is \\\"depth\\\" due to the message passing in the GNN, but I'll ask my coauthors and other readers if we can find a clearer name.\", \"regarding_2\": \"This is entirely our fault for not being clearer.\\nIn the Fixed Vocab baseline model, vector(name) = f(vector(word_1), ..., vector(word_n)). (No CharCNN involved. <unk> token used if word_k isn't in the vocabulary.)\\nIn the CharCNN baseline model, vector(name) = CharCNN(name). (No splitting name into words.)\\nBut in our GSC model there is no single vector(name) exactly: a variable's name is \\\"embedded\\\" as CharCNN(name) along with edges connecting the variable to word nodes in graph-structured cache. E.g. initializing a node containing a variable named \\\"getGuavaDictionary\\\" involves producing a vector CharCNN(\\\"getGuavaDictionary\\\") and also adding \\\"WORD_USE\\\"/\\\"reverse_WORD_USE\\\" edges pointing to/from the \\\"get\\\", \\\"guava\\\" and \\\"dictionary\\\" nodes in the GSC, each of which contains CharCNN(word).\\nSo the baseline models are indeed standard NLP approaches, but ours (as far as we know) is new. I'll edit the document to make this entirely clear.\\n\\nThanks very much again for helping us improve our presentation!\"}",
"{\"title\": \"response\", \"comment\": \"Thanks for the response. I would like to clarify my points centering around your main contribution:\\n\\n\\\"3. Further augment the Augmented AST by adding a Graph\\u2013Structured Cache. That is, add a\\nnode to the Augmented AST for each vocabulary word encountered in the input instance. Then\\nconnect each such \\u201ccache node\\u201d with an edge (of edge type WORD USE) to all variables whose\\nnames contain its word.\\\"\\n\\nFirstly, I think that the title is misleading because it's too much to claim that your vocabulary model uses a \\\"Graph\\u2013Structured Cache\\\". Of course, most (or all) math objects can be represented by graphs. But here, the graph is too shallow. You have a layer of words and connect it to phrases (or variable/function names, which can be considered as phrases). \\n\\nSecondly, your model does remind me of subword approaches in NLP. For instance, I believe that you split variable/function names into tokens (such as \\\"getGuavaDictionary\\\" into \\\"get\\\", \\\"Guava\\\", \\\"Dictionary\\\"). Then\\nvector(name) = f(vector(tok_1),...,vector(tok_n)). If tok_k isn't in your vocabulary, you use a CharCNN to compute vector(tok_k). If I understand it correctly, this method is widely used in NLP. \\n\\nIf you think I misunderstood your model, I'm willing to change my review and I hope you will write your model in a clearer way.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": [\"Thanks very much for your time and comments, AnonReviewer1.\", \"Regarding your overall comment, we agree, though our hope is that the graph-structured cache strategy we propose will be of general use in open set/open vocabulary learning problems. We simply focused on a particularly acute open vocabulary learning problem in this paper to explore its utility. We will add this motivation more clearly to the paper to make it more relevant to a wider audience.\", \"Regarding the minor comments, thank you for your careful reading! We will upload a version ASAP that addresses all of these, but to answer your questions:\", \"Sect 4.: We actually do introduce them for field names and method names depending on the task - we will make that clearer.\", \"Sect 5.: We checked for duplicate code with CPD (https://pmd.github.io/latest/pmd_userdocs_cpd.html) and didn't find a worrying amount. Out of the 500k nonempty, noncomment lines of code in our dataset, about 92k lines were duplicates of some other line in the dataset, with the majority of contiguous, duplicated lines of code containing fewer than 150 tokens and only being duplicated once. We didn't find any duplicated files in our code.\", \"Table 1: The Pointer Sentinel model can incorporate words from the vocabulary cache in its output by pointing to them with attention. The Closed Vocab model can only produce names using words from its closed vocabulary. So as you say, the only difference between the Pointer Sentinel model and our full model is the absence of the edges indicating word usage, but both are fairly different from the Closed Vocab model.\", \"Table 1: Yes, Pointer Sentinel/GSC use a CharCNN to embed node labels for all non-internal nodes in the AST, including variables like \\\"foo\\\".\", \"Page 6: About 53% are larger.\", \"Page 7: We do. If the model picks a non-variable, it is counted as a mistake.
But this (essentially) never happens: non-variable nodes are identified by their node type, so the model learns within a few batches not to attend to any non-variable nodes.\"]}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks very much for your time and comments, AnonReviewer2. \\n\\nTo respond to your title, first con, and question, I'm worried that a big misunderstanding has been caused somehow. We don't use a \\\"subword vocabulary\\\" per se, and our embedding strategy is very ancillary to the main contribution of our work. We use a CharCNN embedding in some of the models - is that what you are referring to? If so, it's a very minor part relative to our main contribution, which is the usage of the graph-structured cache. (Hence our title.) Were you referring to this cache as the shallow subword embedding? Or have I misunderstood your comment?\\n\\nTo respond to your second con, we are certainly continuing in the same direction as the excellent work in [Allamanis et al. 2018]. But our contributions extend quite a bit farther than that paper: we introduce an entirely new way of handling an open vocabulary, show that it improves performance on two well-studied tasks, present experiments with more Graph Neural Network architectures, and do all this on Java - a much more widely used language. Would you consider this a fair characterization, or are we overstating the case?\\n\\nThanks again!\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks very much for your time and comments, AnonReviewer3. \\n\\nFor readability, let me list my responses and questions with references to your review:\\n\\n1) \\\"...strange formulations of these tasks, which are overly complex...\\\"\\nDid you mean the tasks themselves were overly complex, or our formulations of them were overly complex? If the former, we should point out that we test on nearly identical tasks to [Allamanis et al. 2018].\\n\\n2) \\\"...does not demonstrate state-of-the-art performance...\\\"\\nWe are not aware of any work that achieves better performance than ours without using >10x more data. But even so, while models tailored to each task have certainly performed better, as far as we know we are the only model that uses the same pipeline (in fact, even the same hyperparameters) to achieve comparable to state-of-the-art performance on both these tasks on Java code. Did you have a reference in mind that does similar? If so, we want to make sure we cite it!\\n\\n3) \\\"Figure 1 shows the complexity of the approach,...\\\"\\nWe were aiming for thoroughness with this figure, hence showing all the steps. But compared to prior works, we only add one new step - we just depict the entire procedure in our figure.\\n\\n4) \\\"...introducing the vocabulary cache to then produce a vector at every node of the input tree of the program, which is unnecessary.\\\"\\nWere you saying the cache was unnecessary, or vectorizing every node was unnecessary?\\nIf the former, we feel the experiments refute that, showing that the cache improved performance quite a bit.\\nIf the latter, it is certainly *possible* that there exists an architecture for which this is unnecessary, but this is what every Graph-Neural-Network-based approach does. Doing so is not particularly computationally expensive.\\n\\n5) \\\"The FillInTheBlank task is badly defined already on the running example. 
...one of the candidate variables is out of scope at the location to fill.\\\"\", \"this_was_intentional\": \"we wanted our models to learn to consider lexical scope via the AST representation. Indeed, as you say, we could likely improve performance by restricting the model's attention in a scope-aware way - but our objective was to compare architectures, not to maximize performance on this task.\", \"thanks_for_pointing_out_the_confusion\": \"we will make this clearer in the paper.\\n\\n6) \\\"Also [1] does not suffer from vocabulary problems for that task.\\\"\\nThe Closed Vocab + AugAST entry in Table 2 is the same model as in [1]. So while it may not have suffered from vocabulary problems in [1]'s C# dataset, it indeed suffered from fairly significant vocabulary problems on our Java dataset.\\n\\n7) \\\"The NameMe tasks also shows the weakness of the proposed architectures. This work proposes to compute vectors at every node... In comparison, several prior works introduce one node per variable...\\\"\\nWere you suggesting here that the only weakness of our architecture was that we average several vectors on this task as opposed to using one vector? Or were you giving one example of a more general critique? If the latter, could you say more about what the general critique is?\\n\\n8) \\\"While not on the same dataset, [2,3] consistently get higher accuracy on a related and more complicated task...\\\"\\nPerhaps I'm misunderstanding, but I don't see how these papers' results are comparable to our results.\\n[3] considers a JavaScript dataset, not a Java dataset. These are very different languages, and the results of [2] suggest that variable naming in Javascript is significantly easier.\\n[2] uses, as you say, a different dataset. They use more than 16x as much data as we do and achieve around 5% better accuracy on this dataset (and only if you don't count method naming, which our model does). 
But beyond their use of much more data, their model is designed specifically for the task of variable naming - ours is meant to be a general representation learning strategy for source code. This is why we test it on two tasks and on entirely unseen repositories.\\nDoes this bear upon your concerns, or have I misunderstood your comment?\\n\\nThanks again!\"}",
"{\"title\": \"Overly complicated techniques for previously well-addressed tasks in literature\", \"review\": \"(updated with some summaries from discussion over the initial review)\\n\\nThe paper discusses the topic of predicting out-of-vocabulary tokens in programs' abstract syntax trees. This could have applications in code completion, and more concretely two tasks are evaluated:\\n - predicting a missing reference to a variable (called FillInTheBlank)\\n - predicting a name of a variable (NameMe)\\n\\nUnfortunately, the paper proposes overly complex and strange formulations of these tasks, a heavy implementation with unnecessary (non-motivated) neural architectures and, as a result, does not demonstrate state-of-the-art performance or precision on comparable tasks. Figure 1 shows the complexity of the approach, with multiple steps of building a graph, introducing the vocabulary cache to then produce a vector at every node of the input tree of the program (instead of creating an architecture for a given task), yet a simple analysis of which variables can be chosen is missing.\\n\\nThe FillInTheBlank task is badly defined already on the running example. The goal is to select a variable to fill in a blank, and already in the example in Figure 2, one of the candidate variables is out of scope at the location to fill. The motivation for the proposed formulation with building a graph and then computing attention over nodes in that graph is unclear, and the experiments do not help it. For example, [1] (also cited in the paper) solves the same problem more cleanly by considering only the variables in the scope*. There is no good experimental comparison to that work, but it is unlikely it will perform worse. Also [1] does not suffer from vocabulary problems for that task.\", \"summary_discussion_below\": \"the experiments here are incomparable on many levels with prior works: different architecture details, a different, even smaller dataset than that of [1].
There is a third-party claim that on a full system, the general idea improves performance, but I take it with a grain of salt as no clean experiment was yet done. The reviewer notes that the authors disagree that the baselines are not meaningful.\\n\\nThe NameMe task also shows the weakness of the proposed architectures. This work proposes to compute vectors at every node where a variable occurs and then to average them and decode the variable name to predict. In comparison, several prior works introduce one node per variable (not per occurrence), essentially removing the long-distance relationships between occurrences of the same variable, removing the need to average vectors, and enforcing the same name representation at every occurrence of the variable [name]. The setup here is incomparable to specialized naming prior works: one feature (a node per variable) is replaced with another (a node per subtoken), but for baselines the authors choose only to be similar to [1]. Also, while not on the same dataset, [2,3] consistently get higher accuracy on a related and more complicated task of predicting multiple names at the same time over multiple programming languages and with much simpler linear models. This is not surprising, because they propose simpler architectures better suited for the NameMe task.\\n\\n[1] Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs with graphs. ICLR 2018\\n[2] Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from Big Code\\n[3] Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav. A General Path-Based Representation for Predicting Program\\n\\n* corrected text\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A subword embedding model for codes. What's new?\", \"review\": [\"The paper introduces a new way to use a subword embedding model 2 tasks related to codes: fill-in-blank and variable naming.\", \"pros:\", \"the paper is very well written.\", \"the model is easily to reimplement.\", \"the experiments are solid and the results are convincing.\", \"cons:\", \"the title is very misleading. In fact, what the paper does is to use a very shallow subword embedding method for names. This approach is widely used in NLP, especially in machine translation.\", \"the work is progressing, meaning that most of it is based on another work (i.e. Allamanis et al 2018).\", \"questions:\", \"how to build the (subword) vocabulary?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Improved graph representation for learning from programs\", \"review\": \"The submission presents an extension to the Allamanis et al ICLR'18 paper on learning from programs as graphs. The core contribution is the idea of introducing extra nodes and edges into the graph that correspond to (potentially rare) subwords used in the analyzed program code. Experiments show that this extended graph leads to better performance on two tasks, compared to a wide range of baseline methods.\\n\\nOverall, this is a nice paper with a small, incremental idea and substantial experiments that show its practical value. I only have minor comments / questions on the actual core content. However, the contribution is very incremental and of interest to a specialized subsegment of the ICLR audience, so it may be appropriate to reject the paper and redirect the authors to a more specialized venue.\", \"minor_comments\": [\"There's a bunch of places where \\\\citep/\\\\citet are mixed up (e.g., second to last paragraph of page 2). It would make sense to go through the paper one more time to clean this up.\", \"Sect. 4: I understand the need to introduce context, but it feels that more space should be spent on the actual contribution here (step 3). For example, it remains unclear why this extra nodes / edges are only introduced for subwords appearing in variables - why not also for field names / method names?\", \"Sect. 5: It would be helpful if the authors would explicitly handle the code duplication problem (Lopes et al., OOPSLA'17), or discuss how they avoided these problems. Duplicated data files occurring in several folds are a significant risk to the validity of their experimental findings, and very common in code corpora.\", \"Table 1: It is unclear to me what the \\\"Pointer Sentinel\\\" model can achieve. 
Without edges connecting the additional words to where they occur, it seems that this should not be performing different than \\\"Closed Vocab\\\", apart from noise introduced by additional nodes.\", \"Table 1: Do Pointer Sentinel/GSC use a CharCNN to embed node labels of nodes that are not part of the \\\"cache\\\", or a closed vocabulary? [i.e., what's the embedding of a variable \\\"foo\\\"?] If not, what is the performance of the GSC model with CharCNN-embeddings everywhere? That would be architecturally simpler than the split variant, and so may be of interest.\", \"Page 6: When truncating to 500 nodes per graph: How many graphs in your dataset are larger than that?\", \"Page 7: Do you really use attention over all nodes, instead of only nodes corresponding to variables? How do you deal with results where the model picks a non-variable (e.g., a corresponding cache node)? Does this happen?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SJfHg2A5tQ | BNN+: Improved Binary Network Training | [
"Sajad Darabi",
"Mouloud Belbahri",
"Matthieu Courbariaux",
"Vahid Partovi Nia"
] | Deep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource-hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1 bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10, obtaining an accuracy of 86.5% with AlexNet and 91.3% with VGG. On ImageNet, our method also outperforms the traditional BNN method and XNOR-net on AlexNet, by margins of 4% and 2% top-1 accuracy respectively. | [
"Binary Network",
"Binary Training",
"Model Compression",
"Quantization"
] | https://openreview.net/pdf?id=SJfHg2A5tQ | https://openreview.net/forum?id=SJfHg2A5tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1x-UzWl4H",
"r1ge3N8Bx4",
"H1lk7FFT1E",
"SJefvMNKyN",
"HklbhU-50X",
"rJlij_ptR7",
"SJeHx3hKRQ",
"HJecywK_A7",
"B1gCOQY_Rm",
"S1esUOwb0X",
"rklWMOvZCm",
"B1xXsPPZ0X",
"H1xXePv-AQ",
"SJePcEvWRm",
"SkldLNDZRX",
"S1lWV4P-CQ",
"rJl3OQvWCX",
"HyeeWC2jh7",
"SygwCr4qh7",
"BkgIvK4Lhm",
"rkeWAp24j7",
"H1xlsA5Vjm",
"HklJGpLNim",
"Bke5-qUEi7",
"BygamLVVoX",
"H1gMIMKXoQ",
"S1lL-zYQim",
"S1lXaYQmsm",
"SyeGKH7XoQ",
"H1xD8ABq9X",
"r1etthr59m",
"SkeoW9jv9m",
"HJg4vQe857"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1565688393382,
1545065639924,
1544554775232,
1544270426247,
1543276200558,
1543260323495,
1543257068975,
1543177954489,
1543177078100,
1542711378701,
1542711304751,
1542711194946,
1542711019265,
1542710414571,
1542710351917,
1542710312731,
1542710132384,
1541291512414,
1541191119215,
1540929886340,
1539784137463,
1539776152384,
1539759367259,
1539758594215,
1539749412540,
1539703370092,
1539703294132,
1539680699424,
1539679610335,
1539100238944,
1539099776904,
1538927107107,
1538814812266
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1074/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1074/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1074/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1074/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1074/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"regarding the results reported from this paper: https://arxiv.org/pdf/1906.08637.pdf, we may think about the question from the title of this post: is our research work aimed at publishing papers or seeking the truth?\\n\\nA frustrating fact is that the most recent proposed optimization/quantization tricks for binary neural networks are not really necessary. However, worse still, most people in academia don't think so because they need these so-called improvements to publish their papers. Papers like ABC-Net, Group-Net they claimed they can achieve near fp accuracy but they don't release their code, thus the improvements they claimed are not necessarily to be achievable by most other people.\", \"the_paper_from_https\": \"//arxiv.org/pdf/1906.08637.pdf reports a result on binary resnet18 on imagenet: 54.5%/77.8%, which doesn't use any training tricks like scaling factor, customized gradient, fine-tuning from fp models, fp short-cut etc.\\nThey just use the simplest sign function, STE, Adam and standard training solution. But higher acc can be achieved. However, such paper is often just ignored by academia, because most people from this community think there is no \\\"novel\\\" idea proposed, although sometimes pointing out the truth and establishing a better standard should be really meaningful for the long-term development.\", \"title\": \"Is our research work aimed at publishing papers or seeking truth?\"}",
"{\"metareview\": \"The paper makes two fairly incremental contributions regarding training binarized neural networks: (1) the swish-based STE, and (2) a regularization that pushes weights to take on values in {-1, +1}. Reviewer1 and reviewer2 both pointed out concerns about the incremental contribution, the thoroughness of the evaluation, the poor clarity and consistency of the writing. Reviewer3 was muted during the discussion. Given the valid concerns from reviewer1/2, this paper is recommended for rejection.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"incremental contribution\"}",
"{\"title\": \"Yes, it works\", \"comment\": \"-----------------------------------------------------------------------------------------------------------------\\n*comment: \\\"Since the paper (Tang, AAAI 2017, how to train a compact binary neural network with high accuracy) introduced regularization function 1-w^2, we have tried this technique to improve our performance of BNN. We plotted the weight distribution, and we found that we can't see two peaks around 1 and -1, we only see one peak around 0, which is the same as other normal L1&L2 regularization functions. Can you produce two peaks around alpha and -alpha? If not, can you explain why regularization function is important, in a more reliable way? In the paper, we only see the assumption, you didn't give any plot like weight distributions.\\\"\\n\\n\\nThe objective of including this modified regularization function is indeed to gradually encourage the weights around \\u2013alpha and +alpha. You should try to implement our proposed regularizers. Of course, while training we can clearly see (visually) that the weights are distributed around two peaks. Note that alpha is also trainable in our framework, therefore, the weight distribution is conditional on alpha\\u2019s value.\\n\\n \\n-----------------------------------------------------------------------------------------------------------------\\n*comment: As the regularization function is one of your only two contributions, I hope we can see more valid results and analytical explanations and experiments, so we can reproduce it and improve the performance in other tasks.\\n \\n\\nPlease read the updated version of the paper. As stipulated in the abstract, \\u201cWe propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. 
Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation.\\u201d Not only do we propose a new way of training binary networks, we also performed an extensive number of experiments to show that the methodology actually works.\\n\\n-----------------------------------------------------------------------------------------------------------------\\n*comment: Bi-real net achieves Top-1 56.4% Top-5 79.5% in ResNet-18(ImageNet dataset) while yours is 53% and 72.6% respectively, maybe your paper do not achieve highest performance over state-of-the art, and your experiments results may not be enough. Can you show more network experiments like deeper Resnet-34 50 or even Densenet which LQ-Nets actually did and did it well.\\n\\nAs stated in the discussion section of the paper (Section 4.3), \\u201cWe did not compare our network with that of Liu et al. (2018) as they introduce a shortcut connection that proves to help even the full precision network.\\u201d Indeed, the results you cite are the ones using a modified real-valued shortcut connection, which makes Bi-real net and BNN+ not comparable. However, in [1], their implementation of Resnet-18 achieves at most 45.7% top-1 accuracy on the ImageNet dataset.\\n\\nAlso, as stated in the conclusion, \\u201cFor future work, we plan on extending these to efficient models such as CondenseNet (Huang et al., 2018), MobileNets (Howard et al., 2017), MnasNet (Tan et al., 2018) and on object recognition tasks.\\u201d\\n\\n \\n[1] Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm. In ECCV, 2018.\"}",
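A toy check of the two-peaks claim in the exchange above (a minimal sketch, not the authors' code: plain gradient descent on the p=2 regularizer R(w) = (alpha - |w|)^2 alone, with alpha fixed at 1, whereas in BNN+ alpha is trainable and the task loss is also present):

```python
def reg_grad(w, alpha=1.0):
    # Gradient of the p=2 regularizer R(w) = (alpha - |w|)^2:
    # d/dw (alpha - |w|)^2 = -2 * (alpha - |w|) * sign(w)
    s = 1.0 if w > 0 else -1.0
    return -2.0 * (alpha - abs(w)) * s

def descend(w, alpha=1.0, lr=0.1, steps=200):
    # Plain gradient descent on the regularizer alone.
    for _ in range(steps):
        w -= lr * reg_grad(w, alpha)
    return w
```

Weights initialized away from zero drift toward +alpha or -alpha depending on their sign, which is the two-peak weight distribution discussed in this thread.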
"{\"comment\": \"1. Since the paper (Tang, AAAI 2017, how to train a compact binary neural network with high accuracy) introduced regularization function 1-w^2, we have tried this technique to improve our performance of BNN. We plotted the weight distribution, and we found that we can't see two peaks around 1 and -1, we only see one peak around 0, which is the same as other normal L1&L2 regularization functions. Can you produce two peaks around alpha and -alpha? If not, can you explain why regularization function is important, in a more reliable way? In the paper, we only see the assumption, you didn't give any plot like weight distributions.\\n\\n2. As the regularization function is one of your only two contributions, I hope we can see more valid results and analytical explanations and experiments, so we can reproduce it and improve the performance in other tasks.\\n\\n3. Bi-real net achieves Top-1 56.4% Top-5 79.5% in ResNet-18(ImageNet dataset) while yours is 53% and 72.6% respectively, maybe your paper do not achieve highest performance over state-of-the art, and your experiments results may not be enough. Can you show more network experiments like deeper Resnet-34 50 or even Densenet which LQ-Nets actually did and did it well.\", \"title\": \"Does the regularization function really work?\"}",
"{\"title\": \"Author's Response\", \"comment\": \"Sorry if there was a misunderstanding about (B).\\n\\nHere, we meant that it is unclear whether the method proposed in [1] is useful for ImageNet with 1-bit activations. In their table 1, on ResNet-18 with 1-bit activations (called Sign in the table), the proposed method FTP-SH actually performs worse than the htanh STE (SSTE in the table).\\n\\nOur opinion is that the method introduced in [1] really shines with 2 or 3-bit activations, and full precision weights. As a result, it may be difficult to do a fair comparison with our work. \\nWe nonetheless believe that the article is a great addition to our related work section.\\n\\nFurther, although we did not re-implement the exact recursive mini-batch algorithm of [1], we already did run experiments using a smooth tanh STE on CIFAR-10 (with Alexnet) as well as our SS1, SS2 and SS5. The tanh STE results were inferior to that of the SS5 STE by a ~2% margin. We plan to add those results to our table 1.\\n\\nThese experiments suggest that the non-monotonicity and the flexibility of our new STE are beneficial compared to using a simpler smooth STE function.\"}",
"{\"title\": \"Reviewer's response\", \"comment\": \"I am now skeptical that you even read [1]. It would be easy to implement and compare to their approach, and they do report ImageNet results. Here is a link if you need: https://openreview.net/forum?id=B1Lc-Gb0Z. I believe that they also provide code to run their method. In more detail:\\n\\n(A) Absolutely, there is no guarantee that [1] would work with binary weights but also no reason to think that it wouldn't for the reasons outlined previously. It would be trivial for you to run one more evaluation using your approach with their STE (essentially, just a tanh) to determine if there is actually any merit to your STE. Alternatively, it would be trivial for you to run one more evaluation using your STE with their approach, for the same effect. \\n\\n(B) This is clearly false. [1] has ImageNet results. Half of Section 4 in [1] is dedicated to ImageNet results. \\n\\nTo reiterate, my point is that [1] also improves on the hardtanh STE. Thus, while your method may also improve on the hardtanh STE, it does not necessarily improve on the method of [1], which already improved on the hardtanh STE. Since your STE has a similar softening as that of [1], it is not clear whether your introduced STE is achieving the same thing that [1] achieved or if there is merit to your claim that the non-monotonicity is important. While, absolutely, this claim could be true, you have not demonstrated that it is; instead, you have simply claimed that it is while comparing only to older work.\"}",
"{\"title\": \"Authors Response to Reviewer #3\", \"comment\": \"We are saddened to see that our response convinced you to reduce your score from 5 to 4.\\n\\nWe agree that [1] proposes an interesting method, which is somewhat related to ours.\\n\\nHowever, we also believe that it would be very long and difficult for us to compare this method with ours:\\n\\n(A) [1] binarizes the activations, but not the weights.\\nAlthough we agree with you that binarizing activations is harder than binarizing weights, we believe that binarizing both is harder than binarizing only the activations.\\nThere is no guarantee that [1] would work with binary weights.\\n\\n(B) [1] does not have ImageNet results.\\nAlthough CIFAR-10 is certainly an interesting benchmark,\\nwe believe that ImageNet is more challenging.\\nThere is no guarantee that [1] would work on ImageNet.\\n\\nWe disagree with your assertion that we did not clearly evaluate the respective contributions of the 2 changes we introduced.\\nOur table 2 shows that our improved \\\"swish\\\" straight-through estimator (STE) performs better than the \\\"hardtanh\\\" STE, with or without our new regularization term.\\nOur table 2 also shows that our new regularization term improves performance\\nindependently of whether the \\\"swish\\\" or the \\\"hardtanh\\\" STE is used.\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"\\\"We thought comparison on CIFAR10 does not provide fair comparison as it is not a challenging dataset. Instead it makes sense to do so on a harder dataset such as Imagenet.\\\"\\n\\nWhile CIFAR10 is easier than ImageNet, performance on it has not yet necessarily saturated for binarized networks, and using a smaller dataset allows one to run experiments much more quickly. This can be extremely useful for running things like ablation experiments, which are crucial when multiple changes are introduced in a single paper. Further, it is always useful to evaluate on multiple datasets, to ensure that one hasn't simply overfit to a single dataset. \\n\\n--------\\n\\n\\\"Lower \\\\beta value do not necessarily mean better results. \\\"\\n\\nDoes this mean that you ran experiments with these values of \\\\beta? The statement you made is extremely vague.\\n\\n--------\\n\\n\\\"The scaling factors are parameters estimated along with the weights. By our comparison with the BNN method we demonstrated the efficacy of using the suggested scales.\\\"\\n\\nHow can this be a demonstration of the efficacy of using the suggested scales when the other contributions were also included in this test (i.e., the regularization and the swish-as-STE approximation). It is thus not clear what the benefit of any of these is in isolation, which is the entire point of running ablation experiments.\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"\\\"[1] is concerned with solving a combinatorial optimization problem for hard thresholding activation units. ...\\\"\\n\\nNo, [1] is concerned with the same problem as this paper -- training neural networks with binarized activations -- they simply formulate the problem as a mixed convex-combinatorial optimization problem to better understand and justify their approach and the STE. While they do not use binarized weights, other papers (see third paragraph of intro of [3] and citations within) have shown that the activation binarization causes far more accuracy loss and is thus the critical component of binarization. Further, activation binarization is orthogonal to weight binarization so it would be trivial to test their activation as well. \\n\\nThe issue here is that you cannot simply claim that your approach is better, even if it is more flexible (and some would argue that introducing additional hyperparameters makes training more challenging), unless you compare directly to existing work. Your experiments loosely show that lower values of \\\\beta outperform higher values of \\\\beta. You say this is not the case but provide no clear evidence for the reader. Further, you say that the bump in the swish helps with learning but, again, this is not clearly demonstrated in the experiments. Using \\\\beta=1 would give an STE with slope=1 at 0 (which tanh has, as used in [1]) but with the non-monotonicity that tanh does not have. This would provide a direct comparison between their approach and yours (although, even a direct comparison using \\\\beta=5 would be fine too).\\n\\nYou have introduced multiple changes to existing architectures but not clearly evaluated them to show their respective contributions and utility. You do not compare to existing work ([1], etc.) 
that makes a very similar change to the approximation of the derivative of the activation.\\n\\n[3] Deep Learning with Low Precision by Half-wave Gaussian Quantization. Cai, He, Sun, and Vasconcelos (2017).\"}",
"{\"title\": \"To Reviewer #3 (1/4)\", \"comment\": \"Thank you for your constructive feedback. Please find below a point to point response.\\n\\n-----------------------------------------------------------------------------------------------------------------\\n*comment: How exactly is the SS_\\\\beta activation used? It is entirely unclear from the paper, which contradicts itself in multiple ways. Is SS_\\\\beta used in the forward pass at all for either the weight or activation binarization? Or is only its derivative used in the backward pass? If the latter, then you are not replacing the activation anywhere but are simply using a different straight-through estimator in place of the saturated straight-through estimator (e.g., see [1]).\\n\\nIn the forward pass the sign function is used, and in the backward pass we use the derivative of SwishSign function as the approximation in the backward pass. We corrected the sentence in the text:\\n\\n\\u201cCombining both the regularization and activation ideas, we modify the training procedure by replacing the \\\\sign backward approximation binarization with that of the derivative of the SS_\\\\beta activation (2).\\u201d\\n\\n b) Due to the confusion introduced by this figure, we removed it accordingly.\\n\\n-----------------------------------------------------------------------------------------------------------------\\n*comment: In [1], the authors used a similar type of straight-through estimator (essentially, the gradient of tanh instead of hard_tanh) and found that to be quite effective. You should compare to their method. Also, it's possible that SS_\\\\beta reduces to tanh for some choice of \\\\beta -- is this true?\\n\\n[1] is concerned with solving a combinatorial optimization problem for hard thresholding activation units. In their work they keep the weights to full precision values, and only limit the activations of units to binary values. 
Our primary motivation behind SS_beta was to define a class of functions for which the derivatives are different approximations of the sign function. In [1], the approximation is fixed and therefore less flexible than ours. In the ablation study, we see that for different values of beta, the accuracy changes. SS_\\beta does not reduce to the tanh function. The tanh function is closely related to the sigmoid, and although SS_1 looks similar to tanh, the swish function differs subtly in that it has a bump at -2.4/beta and +2.4/beta, which helps with learning (better gradient flow, saturation at a later point). Further, one of the major differences between SignSwish and tanh is that SignSwish is non-monotonic.\\n\\n-------------------------------------------------------------------------------------------------------------\\n*comment: The use of scale factors seems to greatly increase the number of parameters in the network and thus greatly decrease the compression benefits gained by using binarization, i.e., you require essentially #scale_factors = a constant factor times the number of actual parameters in the network (since you have a scale factor for each convolutional filter and for each column of each fully-connected layer). As a result of this, what is the actual compression multiplier that your network achieves relative to the original network?\\n\\nThe use of scale factors does reduce the actual compression, but not significantly. 
For example, in the case of AlexNet, \\n\\nconv 64 - conv 192 - conv 384 - conv 256 - conv 256 - FC 4096 - FC 4096 \\n\\nBinarized, the ~6M parameters of the original network occupy ~6M/32 = 187500 32-bit words. \\nThe number of scales introduced is (192+384+256+256+4096+4096) = 9280\\nThe compression ratio with the addition of scales becomes ~31.5\\n\\nThe additional overhead of the scales is less than ~2%.\\n\\nAt inference time, one can fold the batch norms onto the scaling factors, thus removing the batchnorm operations and their parameters. This has a similar effect as in [2]. As a result, although we introduce scaling factors, we remove the batch-norm parameters and the division operation.\\n\\n[2] Rastegari, Mohammad, et al. \\\"Xnor-net: Imagenet classification using binary convolutional neural networks.\\\" European Conference on Computer Vision. Springer, Cham, 2016.\"}",
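The forward-sign / backward-SwishSign scheme described in this response can be sketched as follows (a minimal sketch, not the authors' code; the SignSwish form SS_beta(x) = 2*sigma(beta*x)*(1 + beta*x*(1 - sigma(beta*x))) - 1 follows the BNN+ paper's definition as we read it, and a finite difference stands in for the analytic derivative):

```python
import math

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def sign_swish(x, beta=5.0):
    # SS_beta(x) = 2*sigma(beta*x)*(1 + beta*x*(1 - sigma(beta*x))) - 1
    s = sigmoid(beta * x)
    return 2.0 * s * (1.0 + beta * x * (1.0 - s)) - 1.0

def sign_swish_grad(x, beta=5.0, eps=1e-5):
    # Finite-difference derivative of SS_beta, used in place of d sign/dx.
    return (sign_swish(x + eps, beta) - sign_swish(x - eps, beta)) / (2.0 * eps)

def binarize_forward(x):
    # Forward pass: hard sign.
    return 1.0 if x >= 0 else -1.0

def binarize_backward(grad_out, x, beta=5.0):
    # Backward pass: route the gradient through the SS_beta derivative.
    return grad_out * sign_swish_grad(x, beta)
```

Evaluating sign_swish near 2.4/beta shows the overshoot above 1, i.e. the non-monotonic "bump" mentioned in this discussion, which is what distinguishes it from a tanh-style STE.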
"{\"title\": \"To Reviewer #3 (2/4)\", \"comment\": \"*comment: For the scale factor, how does yours differ from that used in Rastegari et al. (2016)? It seems the same but you claim that it is a novel contribution of your work. Please clarify.\\n\\nIn Rastegari et al. (2016) the parameters are estimated dynamically given the weights of the network. Hence in training, on each pass they are updated accordingly. Whereas in our work we introduce the scaling factors in the regularization function. \\n\\nThis follows as the scaling factors introduced by Rastegari et al. (2016) are estimated in a 2-stage fashion. First they find the weights and second, they solve an optimization problem (L2 norm of the difference between full-precision weights and scaling factor times binary weights) in order to estimate the scaling factors. So, the estimated scaling factor is the mean of absolute values of the weights. \\n\\nIn our work, there is a difference in how the scales are formulated. We introduce the scales into a regularization function constructed specifically for a BNN. This class of regularization functions can be written to \\n\\nR(w) = |scaling_factor - |weights| |^p \\n\\nwhere p=1 and 2 in the paper. Instead of having two separate optimization problems, we back-propagate the updates to the scales in order to minimize the loss function plus the regularization term.\\n\\nDepending on the regularization term used, the scaling factors estimation falls into either mean of absolute values of the weights (p=2) or median of absolute values of the weights (p=1). As a result, this could also be seen as a generalization of Rastegari et al. (2016)\\u2019s scaling factor.\\n\\n----------------------------------------------------------------------------------------------------------------\\n*comment: Why did learning \\\\beta not work? What was the behavior? What values of \\\\beta did learning settle on? 
\\n\\n\\nLearning beta would add only one equation to back-propagation; in our experiments we fixed beta to an explicit value. We did not extensively experiment with learning beta. We changed the following sentence accordingly:\\n\\n\\u201cThe parameter $\\\\beta$ could be learnable, and would add only one equation to back-propagation. However, we fixed $\\\\beta$ throughout our experiments. The results are summarized in Table \\\\ref{tab:ablation}\\u201d\\n\\nWe did a few experiments with a trainable beta, though they were not conclusive. Accordingly, we decided to leave it for future work. For instance, one can investigate changing the beta parameter dynamically as training progresses, perhaps by starting with a smaller beta and moving towards a larger beta.\\n\\n-------------------------------------------------------------------------------------------------------------\\n*comment: I assume that for each layer output y_i = f(W_i x_i), the regularizer is applied as R(y_i) while at the same time y_i is passed to the next layer -- is this correct? The figures do not clearly show this and should be changed to more clearly show how the regularizer is computed and used, particularly in relation to the activation.\\n\\nReferring to equation (6) of section 3.3, the regularizer is applied as a function of the weights and scaling factor (R(W_l, \\\\alpha_l)), which is then added to the loss function. This total loss is used to optimize the network. \\n\\nFrom section 3.3: \\n\\u201cJ(W, b) = L(W, b) + \\\\lambda \\\\sum_{l} R(W_l,\\\\alpha_l)\\n\\nwhere L(W, b) is the cost function, W and b are the sets of all weights and biases in the network, W_l is the set of weights at layer l and \\\\alpha_l is the corresponding scaling factor. Here, R(.) 
is the regularization function 4 or 5.\\u201d\\n\\n--------------------------------------------------------------------------------------------------------------\\n*comment: In the pseudocode:\\n (a) What does \\\"mostly bitwise operations\\\" mean? Are some floating point?\\n\\nBy \\\"mostly bitwise operations\\\" in Algorithm 1 we mean that the only floating-point operation is the multiplication of the scales into W^b x^b (line 6 of the forward pass of Algorithm 1).\\n\\n (b) Is this the shift-based batchnorm of Hubara et al. (2016)?\\nNo, but the shift-based batchnorm of Hubara et al. (2016) is orthogonal to our methodology. Shift-based batchnorm is only useful if you want to speed up training; at run-time, you can fold the vanilla batchnorm operations into a simple threshold function.\"}",
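As an illustration of the regularizer and total objective quoted in the response above, here is a minimal pure-Python sketch. The function names (`bnn_plus_regularizer`, `total_objective`) and the default `lam` value are our own illustrative choices, not identifiers from the paper.

```python
def bnn_plus_regularizer(weights, alpha, p=1):
    """Sketch of R(W_l, alpha_l) = sum_i |alpha_l - |w_i||^p over one layer's weights.

    p=1 gives the Manhattan version, p=2 the Euclidean version; the term is
    zero exactly when every weight magnitude equals the layer scale alpha_l.
    """
    return sum(abs(alpha - abs(w)) ** p for w in weights)


def total_objective(task_loss, layer_weights, layer_alphas, lam=1e-5, p=1):
    """Sketch of J(W, b) = L(W, b) + lambda * sum_l R(W_l, alpha_l)."""
    reg = sum(bnn_plus_regularizer(w, a, p)
              for w, a in zip(layer_weights, layer_alphas))
    return task_loss + lam * reg
```

In a real training loop both the weights and the per-layer scales would receive gradients from this single objective; here the functions only evaluate it.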
"{\"title\": \"To Reviewer #3 (3/4)\", \"comment\": \"*comment: For Table 1:\\n (a) \\\"I assume these are accuracies? The caption should say.\\\"\\n\\nWe modified the caption as follows: \\u201cAccuracy results on the test set for the CIFAR10 dataset, using the Manhattan regularization function (\\\\ref{l1reg}) with AlexNet and VGG.\\u201d\\n\\n (b) \\\"Why are there no comparisons to the performance of other methods on this dataset?\\\"\\n\\nCIFAR10 consists of low-resolution images and a limited number of classes, so it is not a challenging dataset and comparisons on it are not very informative. We therefore forgo comparisons on this task and instead focus on the harder ImageNet dataset.\\n\\n (c) \\\"Any thoughts as to why your method performs better than the full-precision method on this dataset for VGG?\\\"\\n\\nThe VGG network is over-parameterized, and given the simplicity of the CIFAR10 dataset, binarization helps regularize the network and avoid overfitting.\\n\\n--------------------------------------------------------------------------------------------------------------\\n*comment: For Table 2:\\n (a) \\\"Does Table 2 show accuracies on ImageNet? You need to make this clear in the caption.\\\"\\n\\nWe modified the caption of Table 2 as follows: \\u201cImageNet top-1 and top-5 accuracies (in percentage) of different combinations of the proposed technical novelties on different architectures. Some architectures were harder to train and did not converge within the time frame of the others, and so are not reported.\\u201d\\n\\n (b) What type of behavior do the runs that do not converge show? This seems like a learning rate problem that is easily fixable. 
Are there no hyperparameter values that allow it to converge?\\n\\nProducing Table 2 is computationally time-consuming, as we are training the networks on the ImageNet dataset. As a result, the experiments for those two specific table values were not run. That sentence had not been worded appropriately. We would also like to point out that there are no convergence issues with the method. \\n\\n (c) \\\"What behavior do you see when you use SS_1 or SS_2, i.e., \\\\beta = 1 or \\\\beta = 2? Since lower \\\\beta values seem better.\\\"\\n\\nLower \\\\beta values do not necessarily mean better results. We modified the sentence in the text:\\n\\n\\u201cLastly, it seems smaller (moderate) values of \\u03b2 are better than larger values.\\u201d\\n\\nIn theory, a higher \\\\beta value means a better approximation of the derivative of the sign function. However, there is a trade-off between approximating the derivative of the sign function and the magnitude of the gradient, so moderate values of \\\\beta seem better. Accordingly, we presented SS_5 and SS_10.\\n\\n (d) \\\"The regularization seems to be the most useful contribution -- do you agree?\\\"\\n\\nOur contributions are incremental, like just about any contribution. E.g., ReLUs were a small modification of activation functions, yet ReLUs had a huge impact. The regularization functions introduced in this paper are two out of many functions one could use for quantizing a BNN. Including the scaling factors in the regularization function, and thus learning the scales through back-propagation, is a useful contribution since it gives the method the flexibility to adapt to the data. Finally, based on our experimental results summarized in Table 2, our approximation of the derivative of the sign function proves useful empirically, since it gives better results than the straight-through estimator.\\n\\n\\n (e) \\\"Why did you not do any ablations for the scale factor? 
Please include these as well.\\\"\\n\\nThe scaling factors are parameters estimated along with the weights. By our comparison with the BNN method, we demonstrated the efficacy of using the suggested scales.\\n\\n--------------------------------------------------------------------------------------------------------------\\n*comment: For Table 3, did you compute the numbers for the other approaches or did you use the numbers from their papers? Each approach has its own pros and cons. Please be clear.\\n\\nWe have a re-implementation of BNN, and for XNOR-Net we cite the accuracies reported in its paper. We have added the following to the caption of the table:\\n\\n\\u201cComparison of top-1 and top-5 accuracies of our method BNN$+$ with BinaryNet, XNOR-Net and ABC-Net on ImageNet, summarized from Table \\\\ref{tab:ablation}. Results are cited from the corresponding papers.\\u201d\"}",
"{\"title\": \"To Reviewer #3 (4/4)\", \"comment\": \"*comment: \\u201cAre there any plots of validation accuracy versus epoch/time for the different algorithms in order to ensure that the reported numbers were not simply cherry-picked from the run? I assume that you simply used the weights from the end of the 50th epoch -- correct?\\u201d\\n\\nYes, we simply evaluated the model at the end of the training procedure.\\n\\n----------------------------------------------------------------------------------------------------------------\\n*comment: \\u201cIs there evidence for your introductory claims that 'quantizing weights ... make neural networks harder to train due to a large number of sign fluctuations' and 'maintaining a global structure to minimize a common cost function is important' ? If so, you should cite this evidence. If not, you should make it clear that these are hypotheses.\\u201d \\n\\nThis intuition and reasoning were provided in [3]; thank you for pointing this out -- we made sure to cite the paper.\\n\\nConcerning \\u201cmaintaining a global structure to minimize a common cost function\\u201d, we re-wrote it as:\\n\\n\\u201cHow to quantize the weights locally while maintaining a global structure to minimize a common cost function is important [4].\\u201d\\n\\n\\n[3] Lin, Xiaofan, Cong Zhao, and Wei Pan. \\\"Towards accurate binary convolutional neural network.\\\" Advances in Neural Information Processing Systems. 2017.\\n[4] Li, Hao, et al. \\\"Training quantized nets: A deeper understanding.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n*comment: \\u201cWhy are there not more details about the particular architectures used? These should be included in the appendices to aid those who would like to rerun your experiments. 
In general, please include more experiment details in the body or appendices.\\u201d\\n\\nWe added more explanation of the architectures and experiments in the corresponding sections. \\n\\nWe train both AlexNet \\\\citep{krizhevsky2012imagenet} and VGG \\\\citep{Simonyan2014VeryDC} using the ADAM \\\\citep{Kingma2014AdamAM} optimizer. The architecture used for VGG is conv(256)-conv(256)-conv(512)-conv(512)-conv(1024)-conv(1024)-fc(1024)-fc(1024), where conv(\\\\cdot) is a convolutional layer and fc(\\\\cdot) is a fully connected layer. The standard 3\\\\times3 filters are used in each layer. We also add a batch normalization layer \\\\citep{Ioffe2015BatchNA} prior to activation. For AlexNet, the architecture from \\\\citep{Krizhevsky2014OneWT} is used, and batch normalization layers are added prior to activations. We use a batch size of 256 for training. We experimented with several learning rates (e.g., 0.1, 0.03, 0.001); the initial learning rate was set to 10^{-3} for AlexNet and 3 \\\\times 10^{-3} for VGG. \\n\\n- Typos:\\nThank you for pointing these out. We made sure to correct all typos and unclear sentences in the revised version of the paper.\"}",
"{\"title\": \"To Reviewer #2 (1/3)\", \"comment\": \"Thank you for your constructive feedback. Please find below a response to the comments.\\n-----------------------------------------------------------------------------------------------------------------\\n*comment: However, I did not get much out of Figures 3 or 4. I thought Figure 3 was unnecessary (it shows the difference between l1 and l2 regularization), and I thought the pseudo-code in Algorithm 1 was a lot clearer than Figure 4 for showing the scaling factors. \\n\\nThe purpose of Figure 3 is to give a visualization of the regularization functions, to help non-expert readers with the intuition. We also added dotted versions of the functions to the plot, depicting the effect of the scales. Further, we added a sentence in the body explaining the motivation behind this design (end of pg. 4):\\n\\n\\u201cThe proposed regularizing terms are in line with the wisdom of the regularization function R(w) = (1 - w^2) \\\\1_{\\\\{|w| \\\\leq 1\\\\}} introduced in Tang et al. (2017). A primary difference is the introduction of a trainable scaling factor, and a formulation whose gradients capture appropriate sign updates to the weights. Further, that regularization term does not penalize weights that are outside of [-1, +1]. One can re-define the function so as to include a scaling factor, R(w) = (\\\\alpha - w^2) \\\\1_{\\\\{|w| \\\\leq \\\\alpha\\\\}}. 
In Figure 3, we depict the different regularization terms to help with intuition.\\u201d\\n\\nWe removed Figure 4 and used the added space to add details to the discussion and experiment sections.\\n-----------------------------------------------------------------------------------------------------------------\\n*comment: Algorithm 1 helped with the clarity of the approach, although it left me with a question: In section 3.3, the authors say that they train by \\\"replacing the sign binarization with the SS_\\\\beta activation\\\" and that they can back-propagate through it. However, in the pseudo-code it seems like they indeed use the sign-function in the forward-pass, replacing it with the signswish in the backward pass. Which is it?\\n\\nThank you for pointing out this issue. The pseudo-code is the correct one. We use the derivative of the SignSwish function as the approximation in the backward pass, as opposed to the straight-through estimator. We corrected the sentence in the text (pg. 5):\\n\\n\\u201cCombining both the regularization and activation ideas, we modify the training procedure by replacing the backward approximation of the sign function with the derivative of the SS_\\\\beta activation (\\\\ref{sswish}).\\u201d\\n----------------------------------------------------------------------------------------------------------------\\n\\n*comment: \\u201cThey modify it by centering it and taking the derivative. I'm not sure I understand the intuition behind using the derivative of the Swish as the new activation. It's also unclear how much of BNN+'s success is due to the modification of the Swish function over using the original Swish activation. For this reason I would've liked to see results with just fitting the Swish.\\u201d \\n\\nThe activation function of a BNN is the sign function. The original Swish function resembles the ReLU and is not a valid approximation of the sign function, as it does not saturate on the right side; hence we did not experiment with it. 
In training binary networks, there is a discrepancy between the forward pass and the backward pass. With the modifications made to the Swish, we attempt to close this gap. One interesting property of this function is that it captures gradients over a larger domain, as opposed to the straight-through estimator (STE), which immediately reaches zero.\\n\\nIn our experiments we compared with the htanh backward approximation, which is the standard alternative used in the literature.\\n----------------------------------------------------------------------------------------------------------------\\n*comment: \\u201cThe fact that some of the architectures did not converge is a bit concerning. It's an important detail if a training method is unstable, so I would've liked to see more discussion of this instability.\\u201d \\n\\nProducing Table 2 is computationally time-consuming, as we are training the networks on the ImageNet dataset, with limited resources, on two different architectures; the total number of experiments is 20. As a result, the experiments for those two specific table values had not terminated by the submission deadline. That sentence had not been worded appropriately. We would also like to point out that we have not experienced any convergence issues with the method. \\n----------------------------------------------------------------------------------------------------------------\\n*comment: \\u201cThe bolding on Table 2 is misleading\\u201d\\n \\nWe removed the bolding from Table 2 and instead emphasize the results in Table 3.\"}",
"{\"title\": \"To Reviewer #2 (2/3)\", \"comment\": \"*comment: \\u201cIt's unclear to me why the zeros of the derivative of sign swish being at +/- 2.4beta means that when beta is larger, we get a closer approximation to the sign function. The derivative of the sign function is zero almost everywhere, so what's the connection?\\u201d\\n\\nThis is a property of the SignSwish function, whose max and min are located at x \\\\approx +/- 2.4/beta. We thought it would be interesting to point this out: by increasing beta, one can adjust the locations at which the gradients start saturating. Also, it is easy to show that as beta tends to infinity, SignSwish converges to the sign function. We added an explanation in the text: \\n\\n\\u201cNote that the derivative d/dx SS_\\\\beta(x) is zero at two points, controlled by \\\\beta. Indeed, it is simple to show that the derivative is zero for x \\\\approx \\\\pm 2.4 / \\\\beta. By adjusting the parameter \\\\beta, it is possible to adjust the location at which the gradients start saturating, in contrast to the STE estimators, where it is fixed. Thus, the larger \\\\beta is, the closer the approximation is to the derivative of the \\\\sign function.\\u201d\\n\\n----------------------------------------------------------------------------------------------------------------\\n*comment: I'm not sure I understand the intuition behind using the derivative of the Swish as the new activation. It's also unclear how much of BNN+'s success is due to the modification of the Swish function over using the original Swish activation. For this reason I would've liked to see results with just fitting the Swish. \\n\\nAs explained in our answer to the previous comment, the Swish is not bounded on the right side. The revised text reads:\\n\\n\\u201cwhere \\\\sigma(z) is the sigmoid function and the scale \\\\beta > 0 controls how fast the activation function asymptotes to -1 and +1. 
The \\\\beta parameter can be learned by the network or hand-tuned as a hyperparameter. As opposed to the Swish function, which is unbounded on the right side, the modification makes it bounded and a valid approximator of the \\\\sign function. As a result, we call this activation SignSwish, and its gradient is\\u201d\\n----------------------------------------------------------------------------------------------------------------\\n\\n*comment: In terms of their regularization, they point out that their L2 regularization term is a generalization of the one introduced in Tang et al. (2017). The authors parameterize the regularization term by a scale that is similar to one introduced by Rastegari et al. (2016). As far as I can tell, these are the main novel contributions of the authors' approach\\n\\nThe notions of regularization and of scaling factors are not new in the literature on binary neural networks (BNN). However, the regularization introduced in Tang et al. (2017) is different from ours: it does not penalize weights outside of [-1, 1], it is a quadratic function (1 - w^2), and it does not include a scaling factor. The scaling factors introduced by Rastegari et al. (2016) are estimated in a 2-stage fashion: they first find the weights, and second, they solve an optimization problem (the L2 norm of the difference between the full-precision weights and the scaling factor times the binary weights) in order to estimate the scaling factors. This results in the scaling factor being the mean of the absolute values of the weights.\\n\\nIn our work, we introduce a new approach to the quantization of BNNs. Our novelty is the introduction of the scaling factor into a regularization function constructed for a BNN (the class of regularization functions R(w) = |scaling_factor - |weights| |^p, where p=1 and 2 in the paper), as well as an adaptive approximation (using a parameter) of the derivative of the sign function. 
Instead of having two separate optimization problems, we only need back-propagation to minimize the loss function plus the regularization term, estimating the binary weights as well as the scaling factors. Depending on the regularization term used, the scaling-factor estimate is either the mean of the absolute values of the weights (p=2) or their median (p=1). \\n\\n---------------------------------------------------------------------------------------------------------------- \\n*comment: The authors don't provide much intuition for why the new activation function is superior to the swish (even including the swish in Figure 2 could improve this). Moreover, they mention that training is unstable without explaining more. \\n\\nWe believe we addressed this in prior comments. There is no new activation function, only a new approximation of the derivative of the sign function. Hence, the Swish function itself is not appropriate, but our modification is. The relationship with the Swish arises because our SignSwish function is a modification of the derivative of the Swish function.\"}",
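To make the mean-versus-median point in the exchange above concrete, here is a small pure-Python sketch contrasting the closed-form XNOR-Net scale with the minimizers of the two regularizers. The helper names are ours, and the implementation is illustrative rather than the authors' code.

```python
import statistics


def xnor_scale(weights):
    """Closed-form scale of Rastegari et al. (2016): mean of the absolute weight values."""
    return sum(abs(w) for w in weights) / len(weights)


def regularizer_optimal_scale(weights, p=2):
    """The alpha minimizing sum_i |alpha - |w_i||^p for fixed weights:
    the mean of |w| for p=2 and the median of |w| for p=1."""
    mags = [abs(w) for w in weights]
    return sum(mags) / len(mags) if p == 2 else statistics.median(mags)
```

For p=2 the two estimates coincide, which is the sense in which the regularized formulation generalizes the XNOR-Net scaling factor; in BNN+ the scale is instead updated by back-propagation, so these closed forms only describe where the optimum lies.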
"{\"title\": \"To Reviewer #2 (3/3)\", \"comment\": \"*comment: \\\"The authors don't compare their method to the Bi-Real Net from Liu et al. (2018) since it introduces a shortcut connection to the architecture, although the Bi-Real net is SOTA for Resnet-18 on Imagenet. Did you try implementing the shortcut connection in your architecture?\\\"\\n\\nNo, the shortcut connection suggested in Bi-Real Net was not added; our main objective here was to improve the training mechanism for binary networks. As future work, we can investigate more efficient architectures for binarized neural networks, such as CondenseNet: the summation operator of residual networks discards too much information, whereas in CondenseNet the activations are concatenated, so information is maintained across layers.\"}",
"{\"title\": \"To Reviewer #1\", \"comment\": \"Thank you for reviewing the paper and providing us with comments. Below is a point-by-point response.\\n-----------------------------------------------------------------------------------------------------------------\\n *comment: \\\"The abstract of this paper should be further refined. I could not find the technical contributions of the proposed method in it.\\\"\\n\\nWe have refined the abstract to include our contributions; please see the revision. \\n\\n\\u201cDeep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1 bit. We propose an improved binary training method (BNN+) by introducing a regularization function that encourages training weights around binary values. In addition, to enhance model performance, we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the $\\\\sign$ activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10, obtaining an accuracy of 86.5% with AlexNet and 91.3% with the VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-Net on AlexNet, by margins of 4% and 2% top-1 accuracy respectively.\\u201d\\n-----------------------------------------------------------------------------------------------------------------\\n\\n*comment: \\\"The proposed method for training BNNs in Section 3 is designed by combining or modifying some existing techniques, such as regularized training and approximated gradient. 
Thus, the novelty of this paper is somewhat weak.\\\"\\n\\nThe main contributions of this paper are as follows:\\n- Suggesting regularization functions that encourage training binary weights\\n- Embedding trainable scaling factors in the regularization function\\n- An adaptive backward approximation to the derivative of the sign function\\n\\nAdmittedly, the notions of regularization and of approximating the derivative are not new in the literature on binary neural networks (BNN). But the regularization introduced previously is different from ours: it does not penalize weights greater than 1 or smaller than -1, it is a quadratic function (1 - w^2 in [1]), and it does not include a scaling factor. For the gradient approximation, to the best of our knowledge, there is no adaptive function capable of approximating the derivative of the sign function; indeed, the STE and the approximation proposed in [2] are both arbitrarily chosen and fixed. Thus, our novelty is the introduction of the scaling factor into a regularization function constructed for a BNN (the class of regularization functions R(w) = |scaling_factor - |weights| |^p, where p=1 and 2 in the paper), as well as an adaptive approximation (using a parameter) of the derivative of the sign function. Hence, using back-propagation, the binary weights and scaling factors are learned using only one objective function.\\n\\n-----------------------------------------------------------------------------------------------------------------\\n*comment: \\\"Fcn.3 is a complex function for deep neural networks, which integrates three terms of x. I am worried about the convergence of the proposed method.\\\"\\n\\nThe function can be simplified to the form tanh(beta x / 2) + (beta x / 2) sech^2(beta x / 2). We formulated it as we did so that the correspondence with the derivative of the swish function is clearer. 
This has similar complexity to the swish function, and, as demonstrated by the swish paper [3] as well as by our empirical results, we have not observed problems with convergence using the proposed method.\\n\\n[1] Tang, Wei, Gang Hua, and Liang Wang. \\\"How to train a compact binary neural network with high accuracy?\\\" AAAI. 2017.\\n[2] Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, Kwang-Ting Cheng. \\u201cBi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm.\\u201d ECCV. 2018.\\n[3] Prajit Ramachandran, Barret Zoph, Quoc V. Le. \\u201cSearching for Activation Functions.\\u201d https://arxiv.org/abs/1710.05941. 2018.\"}",
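The simplified form discussed in the response above can be checked numerically. The following pure-Python sketch implements SS_beta and its closed-form derivative (the backward approximation discussed in this thread); the function names and the default beta value are our illustrative choices, not identifiers from the paper.

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def sign_swish(x, beta=5.0):
    """SS_beta(x) = 2*sigma(beta*x)*(1 + beta*x*(1 - sigma(beta*x))) - 1,
    which is algebraically equal to tanh(beta*x/2) + (beta*x/2)*sech^2(beta*x/2).
    It asymptotes to -1 and +1 and approaches sign(x) as beta grows."""
    s = sigmoid(beta * x)
    return 2.0 * s * (1.0 + beta * x * (1.0 - s)) - 1.0


def sign_swish_grad(x, beta=5.0):
    """Closed-form derivative of SS_beta, obtained by differentiating the form
    above; it vanishes near x = +/- 2.4 / beta, where the gradients saturate."""
    s = sigmoid(beta * x)
    return 2.0 * beta * s * (1.0 - s) * (2.0 + beta * x * (1.0 - 2.0 * s))
```

In a training loop, `sign()` would still be used in the forward pass, with `sign_swish_grad` standing in for the (zero almost everywhere) derivative of sign in the backward pass.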
"{\"title\": \"Good Paper, which achieves competitive results over the state-of-the-art methods.\", \"review\": \"1. The abstract of this paper should be further refined. I could not find the technical contributions of the proposed method in it.\\n\\n2. The proposed method for training BNNs in Section 3 is designed by combining or modifying some existing techniques, such as regularized training and approximated gradient. Thus, the novelty of this paper is somewhat weak.\\n\\n3. Fcn.3 is a complex function for deep neural networks, which integrates three terms of x. I am worried about the convergence of the proposed method.\\n\\n4. Fortunately, the performance of the proposed method is very promising, especially the results on ImageNet, which achieve the highest accuracy over the state-of-the-art methods. Considering the difficulty of training BNNs, I vote for acceptance. \\n\\n---------------------------------\\n\\nAfter reading the responses from the authors, I have more clearly noticed some important contributions of the proposed method:\\n\\n1) A novel regularization function with a scaling factor was introduced for improving the capability of binary neural networks; \\n2) The proposed activation function can enhance the training procedure of BNNs effectively;\\n3) Binary networks trained using the proposed method achieved the highest performance over the state-of-the-art methods.\\n\\nThus, I think this is a nice work for improving the performance of binary neural networks, and some of the techniques in this paper can be elegantly applied to other approaches such as binary dictionary learning and binary projections. Therefore, I have increased my score.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Impressive results for binarized neural networks by combining existing ideas\", \"review\": \"The authors of this paper aim to reduce the resources required by neural networks so they can be evaluated on lower-power devices. Their approach is to quantize weights, i.e. rounding weights and hidden units so they can be evaluated using bit operations. There are many challenges in this approach, namely that one cannot back-propagate through discrete weights or discrete sign functions. The authors introduce an approximation of the sign function, which they call the SignSwish, and they back-propagate through this, quantizing the weights during the forward pass. Further, they introduce a regularization term to encourage weights to be around learned scales. They evaluate on CIFAR-10 and Imagenet, surpassing most other quantization methods.\\n\\nThe paper is pretty clear throughout. The authors do a good job of motivating the problem and placing their approach in the context of previous work. I found Figures 1 and 2 helpful for understanding previous work and the SignSwish activation function, respectively. However, I did not get much out of Figures 3 or 4. I thought Figure 3 was unnecessary (it shows the difference between l1 and l2 regularization), and I thought the pseudo-code in Algorithm 1 was a lot clearer than Figure 4 for showing the scaling factors. Algorithm 1 helped with the clarity of the approach, although it left me with a question: In section 3.3, the authors say that they train by \\\"replacing the sign binarization with the SS_\\\\beta activation\\\" and that they can back-propagate through it. However, in the pseudo-code it seems like they indeed use the sign-function in the forward-pass, replacing it with the signswish in the backward pass. Which is it?\\n\\nThe original aspects of their approach are in introducing a new continuous approximation to the sign function and introducing learnable scales for l1 and l2 regularization. 
The new activation function, the SignSwish, is based on the Swish activation from Ramachandran et al. (2018). They modify it by centering it and taking the derivative. I'm not sure I understand the intuition behind using the derivative of the Swish as the new activation. It's also unclear how much of BNN+'s success is due to the modification of the Swish function over using the original Swish activation. For this reason I would've liked to see results with just fitting the Swish. In terms of their regularization, they point out that their L2 regularization term is a generalization of the one introduced in Tang et al. (2017). The authors parameterize the regularization term by a scale that is similar to one introduced by Rastegari et al. (2016). As far as I can tell, these are the main novel contributions of the authors' approach. \\n\\nThis paper's main selling point isn't originality -- rather, it's that their combination of tweaks leads to state-of-the-art results. Their methods come very close to AlexNet and VGG in terms of top-1 and top-5 CIFAR10 accuracy (with the BNN+ VGG even eclipsing the full-precision VGG top-1 accuracy). When applied to ImageNet, BNN+ outperforms most of the other methods by a good margin, although there is still a lot of room between the BNN+ and full-precision accuracies. The fact that some of the architectures did not converge is a bit concerning. It's an important detail if a training method is unstable, so I would've liked to see more discussion of this instability. The authors don't compare their method to the Bi-Real Net from Liu et al. (2018) since it introduces a shortcut connection to the architecture, although the Bi-Real net is SOTA for Resnet-18 on Imagenet. Did you try implementing the shortcut connection in your architecture?\\n\\nSome more minor points:\\n- The bolding on Table 2 is misleading. It makes it seem like BNN+ has the best top-5 accuracy for Resnet-18, although XNOR-net is in fact superior.\\n- It's unclear to me why the zeros of the derivative of sign swish being at +/- 2.4beta means that when beta is larger, we get a closer approximation to the sign function. The derivative of the sign function is zero almost everywhere, so what's the connection?\\n- Is the initialization of alpha a nice trick, or is it necessary for stable optimization? Experiments on the importance of alpha initialization would've been nice.\\n\\nPros:\\n- Results. The top-1 and top-5 accuracies for CIFAR10 and Imagenet are SOTA for binarized neural networks.\\n- Importance of problem. Reducing the size of neural networks is an important direction of research in terms of machine learning applications. There is still a lot to be explored.\\n- Clarity: The paper is generally clear throughout.\\n\\nCons:\\n- Originality. The contributions are an activation function that's a modification of the swish activation, along with parameterized l1 and l2 regularization. \\n- Explanation. The authors don't provide much intuition for why the new activation function is superior to the swish (even including the swish in Figure 2 could improve this). Moreover, they mention that training is unstable without explaining more.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Borderline paper -- OK empirical results but weak in most other regards\", \"review\": \"Summary:\\nThis paper presents three small improvements for training binarized neural networks: (1) a modified straight-through estimator, (2) a novel regularizer to push weights to +/- 1, and (3) the use of scaling factors for the binarized weights. Using the methods presented, the validation accuracies on ImageNet and CIFAR-10 are improved by just under 2 percentage points.\\n\\nPros:\\n- Decent improvement in the performance of the binarized network in the end\\n- The presented regularizers make sense and seem effective. The modified straight-through estimator seems reasonable as well, although the authors do not compare to recent work with a similar adjustment.\\n\\nCons:\\n- The paper is poorly written and confusing. It reads as if it was written in one pass with no editing or re-writing to clarify contributions or key points, or ensure consistency.\\n- While the final numbers are acceptable, the experiments themselves could be stronger and could be presented more effectively.\\n- The novelty of the scale factors is questionable.\\n\\nQuestions and comments:\\n1. How exactly is the SS_\\\\beta activation used? It is entirely unclear from the paper, which contradicts itself in multiple ways. Is SS_\\\\beta used in the forward pass at all for either the weight or activation binarization? Or is only its derivative used in the backward pass? If the latter, then you are not replacing the activation anywhere but are simply using a different straight-through estimator in place of the saturated straight-through estimator (e.g., see [1]).\\n (a) At the beginning of Section 3.3, you say that you modify the training procedure by replacing the sign binarization with the SS_\\\\beta activation. This sounds like it is referring to the activation function at each layer; however, the pseudocode says that you are using sign() as the per-layer activation. 
\\n (b) Further, Figure 4 shows that you are using the SS_\\\\beta function to do weight binarization. However, again, the pseudocode shows that you are using sign() to do the weight binarization. \\n\\n2. In [1], the authors used a similar type of straight-through estimator (essentially, the gradient of tanh instead of hard_tanh) and found that to be quite effective. You should compare to their method. Also, it's possible that SS_\\\\beta reduces to tanh for some choice of \\\\beta -- is this true?\\n\\n3. The use of scale factors seems to greatly increase the number of parameters in the network and thus greatly decrease the compression benefits gained by using binarization, i.e., you require essentially #scale_factors = a constant factor times the number of actual parameters in the network (since you have a scale factor for each convolutional filter and for each column of each fully-connected layer). As a result of this, what is the actual compression multiplier that your network achieves relative to the original network?\\n\\n4. For the scale factor, how does yours differ from that used in Rastegari et al. (2016)? It seems the same but you claim that it is a novel contribution of your work. Please clarify.\\n\\n5. Why did learning \\\\beta not work? What was the behavior? What values of \\\\beta did learning settle on? \\n\\n6. I assume that for each layer output y_i = f(W_i x_i), the regularizer is applied as R(y_i) while at the same time y_i is passed to the next layer -- is this correct? The figures do not clearly show this and should be changed to more clearly show how the regularizer is computed and used, particularly in relation to the activation.\\n\\n7. In the pseudocode:\\n (a) What does \\\"mostly bitwise operations\\\" mean? Are some floating point?\\n (b) Is this the shift-based batchnorm of Hubara et al. (2016)?\\n\\n8. For Table 1:\\n (a) I assume these are accuracies? 
The caption should say.\\n (b) Why are there no comparisons to the performance of other methods on this dataset?\\n (c) Any thoughts as to why your method performs better than the full-precision method on this dataset for VGG?\\n\\n8. For Table 2:\\n (a) Does Table 2 show accuracies on ImageNet? You need to make this clear in the caption.\\n (b) What type of behavior do the runs that do not converge show? This seems like a learning rate problem that is easily fixable. Are there no hyperparameter values that allow it to converge?\\n (c) What behavior do you see when you use SS_1 or SS_2, i.e., \\\\beta = 1 or \\\\beta = 2? Since lower \\\\beta values seem better.\\n (d) The regularization seems to be the most useful contribution -- do you agree?\\n (e) Why did you not do any ablations for the scale factor? Please include these as well.\\n\\n9. For Table 3, did you compute the numbers for the other approaches or did you use the numbers from their papers? Each approach has its own pros and cons. Please be clear.\\n\\n10. Are there any plots of validation accuracy versus epoch/time for the different algorithms in order to ensure that the reported numbers were not simply cherry-picked from the run? I assume that you simply used the weights from the end of the 50th epoch -- correct? \\n\\n11. Is there evidence for your introductory claims that 'quantizing weights ... make neural networks harder to train due to a large number of sign fluctuations' and 'maintaining a global structure to minimize a common cost function is important' ? If so, you should cite this evidence. If not, you should make it clear that these are hypotheses. \\n\\n12. Why are there not more details about the particular architectures used? These should be included in the appendices to aid those who would like to rerun your experiments. In general, please include more experiment details in the body or appendices.\", \"detailed_comments\": [\"R(l) is not defined in Figure 1 and thus is confusing. 
Also, its replacement of 'Error' from the original figure source makes the figure much more confusing and less clear.\", \"Typos:\", \"'accustomed' (p.1)\", \"'the speed by quantizing the activation layers' doesn't make sense (p.1)\", \"'obtaining' (p.4)\", \"'asymmetric' doesn't make sense because these are actually symmetric functions across the y-axis (p.4)\", \"'primary difference is that this regularization ...' --> 'primary difference is that their regularization ...' (p.4)\", \"'the scales with 75th percentile of the absolute value ... ' is very confusing and unclear (p.7)\", \"'the loss metric used was the cross-entropy loss, the order of R_1.' I do not know what you're trying to say here (p.8)\", \"Citations: Fix the capitalization issues, typos, and formatting inconsistencies.\", \"[1] Friesen and Domingos. Deep Learning as a Mixed Convex-Combinatorial Optimization Problem. ICLR 2018.\", \"-------------------\", \"After reading the author response, I do not think the paper does a sufficient job of evaluating the contributions or comparing to existing work. The authors should run ablation experiments, compare to existing work such as [1], and evaluate on additional datasets. These were easy tasks that could have been done during the review period but were not.\", \"If I wanted to build on top of this paper to train higher accuracy binary networks, I would have to perform all of these tasks myself to determine which contributions to employ and which are unnecessary. As such, the paper is currently not ready for publication.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Forward pass\", \"comment\": \"Yes, that is correct. You could also refer to the algorithm section in the paper as well.\"}",
"{\"comment\": \"Thanks for your reply, so in the forward pass, as in other BNNs, the sign function is used, and in the backward pass, the derivative of the SS function is used instead of STE, isn't it? Thanks\", \"title\": \"Forward pass\"}",
"{\"title\": \"corrected\", \"comment\": \"Yes, that is the one used. I had made a mistake in writing it down. Sorry for the confusion.\\n\\nI modified it to FC(num classes) for both.\"}",
"{\"comment\": \"Just a bystander chiming in to make some comments -- please be respectful to others even if you disagree strongly.\\n\\nOr, in the case when you are not sure if you would offend someone in the discussion due to language/culture barriers, it would be better to set that expectation, e.g.:\\n\\n\\\"Pardon me if I may sound aggressive / offensive, but ...\\\"\\n\\nThe whole point of discussion is to increase mutual understanding, isn't it?\", \"title\": \"effective communication\"}",
"{\"comment\": \"Are you kidding? The AlexNet from one weird trick is\\nconv 64 - conv 192 - conv 384 -conv 384 - conv 256 - FC 4096 - FC 4096 - FC 1000 .\", \"title\": \"alexnet Architectures\"}",
"{\"title\": \"alexnet imagenet\", \"comment\": \"Same as above was used.\"}",
"{\"title\": \"correction\", \"comment\": \"This is the AlexNet from one weird trick (https://arxiv.org/abs/1404.5997 ), I corrected the original statement.\"}",
"{\"comment\": \"conv 64 - conv 192 - conv 384 - conv 256 - FC 4096 - FC 4096 - FC 4096\\n\\nexcuse me,is this the original AlexNet layers?\", \"title\": \"Architectures\"}",
"{\"comment\": \"I know your alexnet used for cifar10. What\\u2018s the the architectures of AlexNet used for ImageNet? can you put up the framework\\uff1f\", \"title\": \"The architectures of AlexNet used for ImageNet \\uff1f\"}",
"{\"title\": \"Clarification\", \"comment\": \"1. For Cifar10 results the weights were initialized with Xavier Glorot initialization [1]; as for ImageNet, the same approach as bi-realnet was used, where a pre-trained full-precision network using htan activation was used.\\n\\n2. By this we mean that we replace the STE estimator in the backward pass with the derivative of the SS function. The weights are no longer clipped, to allow them to move beyond -1, 1, as we factor out a scale for each filter. Lastly, we still do the forward pass using binary values {-1, +1}.\\n\\n[1] Glorot, Xavier, and Yoshua Bengio. \\\"Understanding the difficulty of training deep feedforward neural networks.\\\" Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.\"}",
"{\"title\": \"Architectures\", \"comment\": \"1. VGG-16 was used without the two first convolutional layers (conv2d-64 and conv2d64).\\n\\nconv 256 - conv 256 - conv 512 - conv 512 - conv 1024 - conv 1024 - FC 1024 - FC 1024 - FC (num classes)\", \"for_alexnet\": \"conv 64 - conv 192 - conv 384 - conv 256 - conv 256 - FC 4096 - FC 4096 - FC (num classes)\\n\\nAdditionally, a BatchNorm layer was added before every activation. In the case of Cifar10 only the first layer was not binarized, whereas in ImageNet both the first layer and last layers were not binarized.\\n\\n\\n2. The shortcut was not binarized in the results presented in the paper. Also the last layer was not binarized for ResNet-18.\"}",
"{\"comment\": \"1.How many layers did you use in AlexNet and VGG? Did you binarize all layers?\\n\\n2.As you said bi-real net introduce a shortcut connection .Did you binarize the shortcut ? You didn't binarize the first layer,how about the last layer?\", \"title\": \"How many layers did you use in AlexNet and VGG?\"}",
"{\"comment\": \"1. As said in the paper, bi-real net uses pretrained network weights as an initialization, then what kind of weight initialization do you use?\\n\\n2. What's the meaning of \\\"we modify the training procedure by replacing the sign binarization with the SS\\u03b2 activation (2). During training, the real weights are no longer clipped as in BNN training\\\" in section 3.3? Do the float values generated by the SS\\u03b2 activation replace the {+1, -1} in forward time? If so, how can you make use of bit-wise operation, which is the key to speed up bnn?\", \"title\": \"Some questions\"}"
]
} |
|
SJzSgnRcKX | What do you learn from context? Probing for sentence structure in contextualized word representations | [
"Ian Tenney",
"Patrick Xia",
"Berlin Chen",
"Alex Wang",
"Adam Poliak",
"R Thomas McCoy",
"Najoung Kim",
"Benjamin Van Durme",
"Samuel R. Bowman",
"Dipanjan Das",
"Ellie Pavlick"
] | Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline. | [
"natural language processing",
"word embeddings",
"transfer learning",
"interpretability"
] | https://openreview.net/pdf?id=SJzSgnRcKX | https://openreview.net/forum?id=SJzSgnRcKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syes7Emgl4",
"SkxaqkptCX",
"Skey6mqlA7",
"SJlErExtT7",
"HklxVExYpQ",
"SJekzNxtaX",
"rylJ-ovR2X",
"SJgb3cQF2Q",
"rJx0hgbKnX"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544725538842,
1543258004829,
1542656951092,
1542157371793,
1542157352181,
1542157318588,
1541466871437,
1541122728715,
1541111989687
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1073/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1073/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1073/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1073/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1073/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1073/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": [\"Pros\", \"Thorough analysis on a large number of diverse tasks\", \"Extending the probing technique typically applied to individual encoder states to testing for presence of certain (linguistic) information based on pairs of encoders states (corresponding to pairs of words)\", \"The comparison can be useful when deciding which representations to use for a given task\", \"Cons\", \"Nothing serious, it is solid and important empirical study\", \"The reviewers are in consensus.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A thorough study of contextualized word representations\"}",
"{\"title\": \"Thanks for the reference\", \"comment\": \"This is definitely related; we'll be sure to add a citation!\"}",
"{\"comment\": \"Just wanted to mention a related work: Yonatan Belinkov's thesis ( http://people.csail.mit.edu/belinkov/assets/pdf/thesis2018.pdf ) has some prior experiments with the edge probing task design outlined in this paper. See Chapter 4, \\\"Sentence Structure and Neural Machine Translation: Word Relations\\\".\", \"title\": \"Some previous work on edge probing\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for the review!\\n\\nWe agree that it would be interesting to explore more specific tail phenomena. Attachment phenomena in particular can be studied on many of the same datasets if we fix labels and instead predict one of the two spans; this would be an interesting direction for future study.\\n\\nIt would also be very interesting to explore other languages! While we are limited by available data and encoder models, there\\u2019s nothing in the edge probing technique that makes English-specific assumptions. Probing for phenomena that require long contexts could be a good test of advanced encoders, and can be easily quantified in our framework (for example, see Figure 3).\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for the review!\\n\\nWe\\u2019re very interested in probing for other linguistic attributes - while we present a broad analysis in this paper, there\\u2019s certainly room to use edge probing to study more focused phenomena like PP attachment or ambiguities between specific semantic roles. We use a standardized data format that makes it easy to add new tasks, and we hope that our code release will be a useful platform for this kind of analysis.\\n\\nWe\\u2019ll be sure to update the text to more clearly describe the tables.\\n\\nWhoops! In Figure 2 and 3, the bars/bands are 95% confidence intervals calculated using the Normal approximation. We wanted to emphasize that the SPR and Winograd datasets are quite small and that the differences between models are often not significant. We\\u2019ll add this to the caption in the final version.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for the review! We do hope that this will be of broad interest given recent progress in sentence representations, and hope that our code release will allow continued evaluation of new and better representation models (like BERT).\\n\\nWe\\u2019ll certainly include examples of specific win / loss cases in the final version.\"}",
"{\"title\": \"Nice empirical paper\", \"review\": \"This is a nice paper that attempts to tease apart some questions about the effectiveness of contextual word embeddings (ELMo, CoVe, and the Transformer LM). The main question is about the value of context in these representations, and in particular how their ability to encode context allows them to also (implicitly) represent linguistic properties of words. What I really like about the paper is the \\u201cEdge probing\\u201d method it introduces. The idea is to probe the representations using diagnostic classifiers\\u2014something that\\u2019s already widespread practice\\u2014but to focus on the relationship between spans rather than individual words. This is really nice because it enables them to look at more than just tagging problems: the paper looks at syntactic constituency, dependencies, entity labels, and semantic role labeling. I think the combination of an interesting research question and a new method (which will probably be picked up by others working in this area) make this a strong candidate for ICLR. The paper is well-written and experimentally thorough.\", \"nitpick\": \"It would be nice to see some examples of cases where the edge probe is correct, and where it isn\\u2019t.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice discussion of what type of information is actually encoded by contextualized word embeddings\", \"review\": \"This paper provides new insights on what is captured by contextualized word embeddings by compiling a set of \\u201cedge probing\\u201d tasks. This is not the first paper to attempt this type of analysis, but the results seem pretty thorough and cover a wider range of tasks than some similar previous works. The findings in this paper are very timely and relevant given the increasing usage of these types of embeddings. I imagine that the edge probing tasks could be extended towards looking for other linguistic attributes getting encoded in these embeddings.\\n\\nQuestions & other remarks:\\n-The discussion of the tables and graphs in the running text feels a bit condensed and at times unclear about which rows are being referred to.\\n-In figures 2 & 3: what are the tinted areas around the lines signifying here? Standard deviation? Standard error? Confidence intervals?\\n-It seems the orthonormal encoder actually outperforms the full elmo model with the learned weights on the Winograd Schema. Can the authors comment on this a bit more?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Current work reps capture a surprising amount of structure\", \"review\": \"I have no major complaints with this work. It is well presented and easily understandable. I agree with the claim that the largest gains are largely syntactic, but this leads me to wonder about more tail phenomena. PP attachment is a classic example of a syntactic decision requiring semantics, but one could also imagine doing a CCG supertagging analysis to see how well the model captures specific long-tail phenomena. Though a very different task Vaswani et al 16, for example, showed how bi-LSTMs were necessary for certain constructions (presumably current models would perform much better and may capture this information already).\\n\\nAn important caveat of these results is that the evaluation (by necessity) is occurring in English. Discourse in a pro-drop language would presumably require longer contexts than many of these approaches currently handle.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Bygre3R9Fm | DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation | [
"Rim Assouel",
"Mohamed Ahmed",
"Marwin Segler",
"Amir Saffari",
"Yoshua Bengio"
] | Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery.
Recently, deep generative models have shown a promising way of performing de-novo molecular design.
Although graph generative models are currently available, they either have a graph-size dependency in their number of parameters, limiting their use to only very small graphs, or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t. the model parameters and therefore preventing them from being used in scenarios such as conditional graph generation. In this work we propose a model for conditional graph generation that is computationally efficient and enables direct optimisation of the graph. We demonstrate favourable performance of our model on prototype-based molecular graph conditional generation tasks. | [
"molecular graphs",
"conditional autoencoder",
"graph autoencoder"
] | https://openreview.net/pdf?id=Bygre3R9Fm | https://openreview.net/forum?id=Bygre3R9Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkgNWt_kg4",
"BJl69AQcR7",
"rJxhICXqRQ",
"BklYpixApm",
"B1glS5H76X",
"HylN_T0kaX",
"SklCA18chQ",
"rylQT5-c27",
"SJl1HiKwsX",
"SJx7vcuDjX",
"HJeEJFpIs7",
"rkeqNivN5m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544681724390,
1543286420581,
1543286355816,
1542486977378,
1541786168359,
1541561708428,
1541197781735,
1541180091179,
1539967799508,
1539963483495,
1539918043725,
1538714418391
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1072/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1072/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1072/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1072/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1072/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1072/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1072/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1072/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1072/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1072/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Since the reviewers unanimously recommended rejecting this paper, I am also recommending against publication. The paper considers an interesting problem and expresses some interesting modeling ideas. However, I concur with the reviewers that a more extensive and convincing set of experiments would be important to add. Especially important would be more experiments with simple extensions of previous approaches and much simpler models designed to solve one of the tasks directly, even if it is in an ad hoc way. If we assume that we only care about results, we should first make sure these particular benchmarks are difficult (this should not be too hard to establish more convincingly if it is true) and that obvious things to try do not work well.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"the reviewers are unanimous\"}",
"{\"title\": \"Response to reviewer 3 [part 2]\", \"comment\": \"3 and 4. The discriminator is not the central part of the model. In fact it is used for the formulation of the conditional setting, which could be justified/criticized as a way to perform explicit conditional generation in another manuscript proposal. The main purpose of our manuscript is the design of a new graph decoding scheme that allows us to do direct graph optimization without suffering from the high variance of REINFORCE-like updates. As a proof of the representational power of the decoder we test it in the same constrained property optimization framework as previous competitive models (namely, JT-VAE and GCPN).\\n\\nAlthough conditional generation in itself is a very interesting topic, we had to make a choice, and as we suggest a new model to perform the task we put the emphasis on the description of the model itself rather than the conditional generation formulation.\\n\\nHowever there is a reason why we do not want to fine-tune the discriminator along with the generator. Doing so would actually be equivalent to the InfoGAN framework, where we allow the model to discover the semantics of the structured part of the latent factors. In our case we do not need to discover the semantics present in our data as we already know what they mean (we know that we want to minimize logP of the molecule and not some discovered semantic whose prior would match the one we are sampling from). To be more precise, if we consider fine-tuning the discriminator along with the generator, the bottleneck would be the early stages of training (when the generator is pretty bad), where there is a high risk of observing a \\u201csemantic drift\\u201d. Additionally, as the discriminator will be far away from p_data(y|x), the two signals (MLE and VMI maximization) will drive the generator to update its parameters in completely different directions, which would make the training harder. 
On the contrary, when the discriminator has already approximated p_data(y|x) (i.e. the final and wanted posterior p_gen(y|x)), the two signals point the generator in the same direction, which makes the training easier (and more intuitive).\\n\\nWe will add a further detailed section in the appendices to motivate such a formulation.\\n\\n5. In order to test the conditional setting we chose an attribute that we can compute for all the molecules. The way it is computed (in a black-box way through the rdkit library) depends on the fragments present in the molecule and is based on some experimental data extensively collected and stored. It is computed via rdkit\\u2019s Crippen function [Landrum, 2016]. We added that in the revised version.\\n\\nWe thank the reviewer again and hope we have addressed his concerns in the revised version.\\n\\nThe authors\\n\\n[1] Jin et al., Junction Tree Variational Autoencoder for Molecular Graph Generation, https://arxiv.org/pdf/1802.04364.pdf\\n[2] De Cao & Kipf, MolGAN: An implicit generative model for small molecular graphs, https://arxiv.org/abs/1805.11973\\n[3] Simonovsky & Komodakis, GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders, https://arxiv.org/abs/1802.03480\"}",
"{\"title\": \"Response to reviewer 3 [part 1]\", \"comment\": \"We thank the reviewer for his useful comments and will address his concerns sequentially.\\n\\n1. For the comparison with Jin et al. [1], we recomputed the results in the same settings as ours in the revised version: deterministic AE, 2D molecular graphs, 56 latent size.\", \"concerning_the_missing_comparison_with_the_competitive_methods_suggested_by_the_reviewer_we_would_like_to_clarify_that\": \"- We do discuss those methods in the related work part as existing generative models on molecular graphs, but they were not designed to reconstruct a particular graph, and thus are not adapted to an exact reconstruction task.\\n - In fact those methods do not come with an inference network whose output can be used to condition the whole generation process to match the condition.\\n\\nSuppose that we equip those generative models with an encoder (typically ours); their decoding schemes involve predicting very long sequences of actions, and coming up with a good training procedure in that context is not trivial at all:\\n - How do we choose the sequence order? GCPN (and others) actually argue that using a fixed order such as the domain-suggested one (SMILES) yields overfitting of their model. To that end they make use of randomly selected state transitions in the training set. Naturally this training procedure is not applicable to the task of exact graph reconstruction (where we want to reconstruct the exact full sequence of actions).\\n - Li et al. actually state: \\u201c The generation process used by the graph model is typically a long sequence of decisions. If other forms of graph linearization is available, e.g. SMILES, then such sequences are typically 2-3x shorter. 
This is a significant disadvantage for the graph model, it not only makes it harder to get the likelihood right, but also makes training more difficult.[...]We have found that training such graph models is more difficult than training typical LSTM models. The sequences these models are trained on are typically long, and the model structure is constantly changing, which leads to unstable training. Lowering the learning rate can solve a lot of instability problems, but more satisfying solutions may be obtained by tweaking the model.\\u201d \\n\\nOverall we think that redesigning the suggested models so that they can perform well on an exact reconstruction task is not trivial and would potentially constitute completely different models in the end. Spending a considerable amount of time trying to repurpose existing generative models does not seem reasonable. The very example of JT-VAE, which was designed (by its architecture) to reconstruct graphs exactly, supports our opinion: we tried to go from the probabilistic VAE framework to the deterministic AE one, and the best reconstruction result we could get with the available code was lower than the one reported in the VAE setting.\\n\\n2. We agree that the use of \\u2018scalable\\u2019 and \\u2018cheap\\u2019 in the manuscript is misleading. Actually it was supposed to be understood as defined in the original related work section (\\u201cScalable: this means that the number of parameters of the decoder should not depend on a fixed predefined maximum graph size\\u201d, as is the case for [2] and [3]). We fixed the misuse in the manuscript.\\n\\nConcerning the large graph statement, the model\\u2019s focus is on molecular graphs (which we find is an important problem on its own), thus \\u201clarge\\u201d and \\u201csmall\\u201d do not have the same meaning here as for general graphs/networks. 
Small = less than 10 heavy atoms (like in [2] and [3], they specify small in their title) Large = around 60 heavy atoms which is large enough in the optimization tasks we are interested in the drug discovery pipeline.\"}",
"{\"title\": \"Answer to Reviewer 1\", \"comment\": \"We thank the reviewer for his comments and will answer them one by one.\\n\\n1.\\nWe also state in the paper that probabilistic graph generative models are differentiable. In the abstract: \\u201cIn this work we propose a model for conditional graph generation that directly optimises properties of the graph, and generates a probabilistic graph, making the decoding process differentiable\\u201d\\n\\n\\n2.\", \"they_are_limited_to_very_small_graph_because_of_their_parametriazations\": \"the number of parameters depends on the predefined maximum graph size they have set. If their last hidden layer is of size d, the number of edge types is r, and the maximum graph size is n, then the weight matrix mapping the last layer to the edge tensor will be of size n*n*r*d, which is very limiting.\\n\\nOur factorization model does not have this limitation (the number of parameters only depends on the size of the embeddings we choose), and keeping the full probabilistic graph in memory is not an issue when working with molecules -> we are talking here of graphs with a maximum number of heavy atoms around 100. So we overcome this limitation by having a model whose number of parameters does not depend on the maximum size of the graph. We will change the manuscript to be more precise regarding that point.\\n\\n\\n3. \\nThose measures are standard for purely generative models (where the task is to generate molecules without any other objective, and the molecules are sampled from the prior). Let us cite JT-VAE\\u2019s description of the reconstruction task to that extent: \\u201c \\tWe test the VAE models on the task of reconstructing input molecules from their latent representations, and decoding valid molecules when sampling from prior distribution. 
\\u201c\\n\\nOur model is a conditional autoencoder, which is a new setting (we do not put any prior on the latent code).\", \"molgan_does_not_tackle_the_constrained_optimization_scenario_at_all_and_its_formulation_is_not_easily_transferable_to_that_setting\": \"MolGAN is an implicit generative model. One way to constrain the generation process could be to add a reward signal computing the similarity between the generator output and the query molecule (the prototype), but:\\nThis would mean retraining/fine-tuning the model for each query molecule\", \"the_constrained_scenario_would_be_explicit\": \"which is not the case for other models (JT-VAE, GCPN and ours) in which the similarity constraint is not directly specified. JT-VAE fine-tunes the encoded representation, GCPN uses the prototype as a starting point, and we use a conditional formulation with no retraining needed, so such a comparison would be difficult and unfair anyway.\\n\\nWe will add in the appendix a comparative table of previous models to that extent and to justify the comparison effectively made in our manuscript. To the best of our knowledge, only JT-VAE and GCPN are comparable models in the implicitly constrained optimization scenario.\\n\\n4. \\nWe never claimed that the decoding process was permutation invariant. We only made sure that we can make the encoder robust to permutations by training it on different permutations of the embeddings it encodes. However the decoder is trained to match the domain canonical order (heavy atoms are ordered as they appear in their SMILES canonical representation).\\n\\nWe thank the reviewer again and hope our comments shed a clearer light on our manuscript.\\n\\nThe authors\"}",
"{\"title\": \"The paper is very poor--- not ready for publication\", \"review\": \"The paper proposes a conditional graph generation that directly optimizes the properties of the graph. The paper is very weak.\\n1. I think almost all probabilistic graph generative models are differentiable. If the objective is differentiable function of real \\n variables, it is usually differentiable.\\n\\n2. The authors claim that existing works Simonovsky and Komodakis (2018) and Cao & Kipf (2018) are restricted to use small graphs with predefined maximum size. This work does not overcome the limitation of small graphs issue too.\\n\\n3. The authors do not show any measure on validity, novelty or uniqueness which are now standard in literature.\\n Also I do not find any comparison with molGAN paper which tackles a similar objective.\\n\\n4. Could the authors show if the decoding process is permutation invariant? I am not really sure of that. I was trying to prove that thing formally, but I failed.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"We think the novelty of the model has been misunderstood. We clarify that point\", \"comment\": \"We thank the reviewer for the detailed and useful comments. We will proceed to the rebuttal as follows :\\n\\n - Specific answers/clarifications on issues raised by the reviewer sequentially\\n - Summary of clarifications made in the answer \\n - Summary of changes in the manuscript implied by the review \\n\\n ----- SEQUENTIAL: Clarifications/Answers -----\\n\\nOur Answer on 1) Novelty of the model : \\nAs referenced in step 4 in section 3.1, we utilise the edge-factorization described in [1] (VGAE). The goal here is to generate graphs of varying size given some input condition. In practice this means being to generate both the nodes and edges of the graph, conditional on some latent code.\\n\\nIn contrast the VGAE has been designed in the context of relational inference (eg. link prediction in citation network), where the number of nodes is fixed, and task is to learn a suitable representation for the nodes, such that we\\u2019re able to reconstruct/predict missing links.\", \"because_of_the_assumptions_of_this_setting_the_vgae_only_solves_half_of_the_problem_we_are_trying_to_address\": \"given a set of node embeddings [1] reconstructs the adjacency tensor. In contrast we want to be able to generate both the node embeddings whose number is unknown a priori and their adjacency tensor given some latent code.\\n\\nIn practice this is achieved by adding a new component (see step 3. Sec 3.1) to model how to go from a latent code z to an actual set of node embeddings.That specific node embeddings generator (that we parametrize with an LSTM) is the major contribution of our model. 
\\n\\nTo clarify the differences between [1] and our DEFactor we added a short paragraph in the related work section on edge-factorization.\\n\\n\\nOur Answer on 2) and 3) Clarification on \\\"large\\\" \\\"cheap\\\" and \\\"scalable\\\": \\nOn the use of \\u201cscalable\\u201d and \\u201ccheap\\u201d In the manuscript we state that scalable ``[..] means that the number of parameters of the decoder should not depend on a fixed pre-defined maximum graph size.`` and the number of parameters in DEFactor is independent of the (max) size of the graph unlike [2] and [3]. We fixed the misleading use in the manuscript\", \"on_the_large_graph_modeling_concern\": \"the model\\u2019s focus is on molecular graphs (which we think is an important problem on its own) thus \\u201clarge\\u201d and \\u201csmall\\u201d do not have the same signification here when compared to general graphs/ networks (that is why we put large in italic style in the first version of the manuscript in the bullet points of the related work section but we updated it into \\u201clarge molecular graphs\\u201d) . Small = less than 10 heavy atoms (like in [2] and [3], they specify small in their title) Large = around 60 heavy atoms which is large enough in the optimization tasks we are interested in the drug discovery pipeline.\\n\\n\\n\\n 4) Minor comments , Reviewer : \\u201cRegarding Eq (2), why the lstm is used, instead of some simple order invariant aggregation?the paper needs more refinement. E.g., in the middle of page 2 there is a missing citation.\\u201d\\n\\nWe actually tried the simpler order invariant aggregation function (avg and max) but the convergence was bad so we directly went for a more complex/richer feature extractor such as LSTM. 
We agree that concerns can be raised about node ordering when we use such sequential aggregation functions; however (as specified in section 3.1 (steps 1 and 2)) we trained the LSTM with a randomly permuted order of the embeddings it has to encode and did not notice any change in the performance of the model. \\n\\nBroken link fixed, thanks :) \\n\\n\\n----- SUMMARY : What we think we clarified ------\\n\\n- The novelty of the proposed decoder ( = autoregressive generation of node embeddings for graphs of varying size), which we think has been misunderstood by the reviewer.\\n- The misleading use of the words \\u201cscalable\\u201d and \\u201ccheap\\u201d : we only meant that the number of parameters should not depend on the graph size (when compared to [2] and [3]).\\n- The meaning of \\u201clarge\\u201d graphs in the context of molecular graphs (which we find is a relevant and important problem on its own).\\n\\n---- ACTION POINTS : What we modified in the manuscript ----\\n\\n- Corrected the misleading use of \\u201cscalable\\u201d and \\u201ccheap\\u201d in the manuscript\\n- Replaced \\u201clarge graphs\\u201d with \\u201clarge molecular graphs\\u201d to specify the scale of graphs we are referring to \\n- Added a paragraph in the related work section on the edge-factorization to further emphasize the true novelty of our decoder.\\n- Fixed the broken references\\n\\nWe hope that our answers clarified our contribution and thank the reviewer again, \\n\\nThe Authors\\n\\n--- REFERENCES ---\\n\\n[1] Kipf & Welling, Variational Graph Auto-Encoders , https://arxiv.org/pdf/1611.07308.pdf\\n[2] De Cao & Kipf, MolGAN : An implicit generative model for small molecular graphs, https://arxiv.org/abs/1805.11973\\n[3] Simonovsky & Komodakis, GraphVAE : Towards Generation of Small Graphs Using Variational Autoencoders, https://arxiv.org/abs/1802.03480\"}",
"{\"title\": \"review on \\\"DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation\\\"\", \"review\": \"This paper proposed a variant of the graph variational autoencoder [1] to do generative modeling of graphs. The author introduced an additional conditional variable (e.g., property value) into the decoder. By backpropagating through the discriminator, the model is able to find the graph with desired property value.\\n\\nOverall the paper reads well and is easy to follow. The conditional generation of graphs seems also helpful regarding the empirical performance. However, there are several concerns regarding the paper:\\n\\n1) The edge factorization-based modeling is not new. In fact [1] already uses the node embeddings to factorize the adjacency matrix. This paper models extra information including node tags and edge types, but these are not fundamental differences compared to [1].\\n\\n2) The paper claims the method is \\u2018cheaper\\u2019 and \\u2018scalable\\u2019. Since essentially the computation cost is similar to [1] which requires at least O(n^2) to generate a graph with n nodes, I\\u2019m not super confident about the author\\u2019s claim. Though this can be parallelized, but the memory cost is still in this order of magnitude, which might be too much for a sparse graph. Also there\\u2019s no large graph generative modeling experiments available.\\n\\n3) Continue with 2), the adjacency matrix of a large graph (e.g., graph with more than 1k nodes) doesn\\u2019t have to be low rank. So modeling with factorization (with typically ~256 embedding size) may not be suitable in this case.\", \"some_minor_comments\": \"4) Regarding Eq (2), why the lstm is used, instead of some simple order invariant aggregation?\\n\\n5) the paper needs more refinement. E.g., in the middle of page 2 there is a missing citation. 
\\n\\n[1] Kipf & Welling, Variational Graph Auto-Encoders, https://arxiv.org/pdf/1611.07308.pdf\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Experiments and Writing Need Improvement\", \"review\": \"In this paper, authors propose a deep generative model and a variant for graph generation and conditional graph generation respectively. It exploits an encoder which is built based on GCN and GraphSAGE, a autoregressive LSTM decoder which generates the graph embedding, and a factorized edge based probabilistic model for generating edge and node type. For conditional generation, authors also propose a discriminating training scheme based on maximizing the mutual information. Experiments on ZINC dataset show that the proposed method is promising.\", \"strength\": \"1, The problem this paper tries to tackle is very challenging and of great significance. Especially, the conditional graph generation direction under the deep learning context is novel. \\n\\n2, The overall model is interesting although it is a bit complicated as it combines quite a few modules.\", \"weakness\": \"1, In the reconstruction experiment, comparisons with several recent competitive methods are missing. For example, the methods which have been already discussed in the related work, Li et al. (2018a), You et al. (2018a) and You et al. (2018b). Moreover, it is not explained whether the comparison setting is the same as Jin et al. (2018) and what the size of the latent code of their method is. It seems less convincing by just taking results from their paper and do the comparison.\\n\\n2, Authors motive their work by saying in the abstract that \\u201cother graph generative models are either computationally expensive, limiting their use to only small graphs or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t the model parameters\\u201d. However, if I understood correctly, in Eq. (7), authors compute the soft adjacency tensor which is a dense tensor and of size #node by #node by #edge types. 
Therefore, I did not see why this method can scale to large graphs.\\n\\n3, The overall model exploits a lot of design choices without doing any ablation study to justify them. For example, how does the pre-trained discriminator affect the performance of the conditional graph generation? Why not fine-tune it along with the generator? The overall model has quite a few loss functions and associated weights of which the values are not explained at all.\\n\\n4, The conditional generation part is not written clearly. Especially, the description of the variational mutual information phase is so brief that I do not understand the motivation for designing such an objective function. What is the architecture of the discriminator?\\n\\n5, How do the authors get real attributes from the conditionally generated molecules? It is not explained in the paper.\", \"typos\": \"1, There are a few references missing (question mark) in the first and second paragraphs of section 2.\\n\\n2, Methods in the experiment section are given without explicit reference, like GCPN.\\n\\n3, Since edge type is introduced, I suggest the authors explicitly mention that the generated graphs are multigraphs at the beginning of the model section. \\n\\nOverall, I do not think this paper is ready for publication and it could be improved significantly.\\n\\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\", \"update\": \"Thanks for the detailed explanation. The new figure 1 is indeed helpful for demonstrating the overall idea. \\n\\nHowever, I still found some claims made by the authors problematic. \\nFor example, it reads in the abstract that \\\"...or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t the model parameters...\\\". \\nClearly, Li et al. 
2018b has a differentiable formulation which falls under your description.\\n\\nBesides, I suggest authors adjust the experiment such that it focuses more on comparing conditional generation. \\nAlso, please set up some reasonable baselines based on previous work rather than saying it is not directly comparable.\\nDirectly taking numbers from other papers for a comparison is not a good idea given the fact that these experiments usually involve quite a few details which could potentially vary significantly.\\n\\nTherefore, I would like to keep my original rating.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Will update the stereochemistry issue\", \"comment\": \"Hi,\\nThanks for reply.\\n\\n1 and 3) Again we believe that adding the stereochemistry is an unnecessary burden but as it is simple to add to the model we will re run the reconstruction task taking into account the 3D structure (like you said it is just additional edge labels to add).\\n2) Our main concern with JT-VAE is that we tried to train in its deterministic AE and computed the 'exact match on the 2D structures' but found a lower score than on the 3d stochastic version, which is why we did not report it.\\n\\nThanks again for your comments and clarifications, we will update the manuscript regarding this point.\"}",
"{\"comment\": \"Hi,\\n\\nThanks for your reply. Regarding stereochemistry:\\n1) SMILES based methods (e.g. SD-VAE) do reconstruct 3D structure (with one step). Therefore, Table 1 is a unfair comparison. As long as you compute the reconstruction accuracy based on \\\"2D structure exact match\\\", Table 1 cannot be right.\\n2) JT-VAE does multiple steps of generation. However, they still computed the reconstruction accuracy based on \\\"exact match on 3D structures\\\". \\n3) Is stereochemistry that hard to reconstruct for your model? It's just additional edge labels right?\\n\\nI totally agree that reconstruction accuracy is not a good metric for comparing VAEs. But regardlessly prior work did that comparison, and I am not happy with sloppy experiments. We should really be rigid in experiments and comparisons. After all, I am just trying to help you improve your manuscript. That's it.\\n\\nThanks again for your reply\", \"title\": \"All methods (especially SMILES based methods) do reconstruct the stereochemistry\"}",
"{\"title\": \"On reconstruction accuracy table.\", \"comment\": \"Thank you for your interest in our article, and sorry for our delayed answer.\\n\\n--- On your stereochemistry concern ---\\nThe way we understood the graph-related prior work (ie. JT-VAE) is that it does NOT reconstruct the 3D structure. However they do evaluate on 3d structures (doing a post-ranking scheme). They actually reconstruct a 2D molecular graph, then list all the possible stereoisomers, rank them and take the most \\u2018likely\\u2019 by computing the cosine similarity score between the encoded molecule and the embedding of all the stereoisomers (see Appendix B of the JT-VAE article). For that very reason ( ie. the all the stereoisomers are listable given a 2d structure) we believe that working on 2d structure is not only easier but also enough.\\n\\n\\n--- On your stochastic vs deterministic reconstruction concern ---\\nFor the stochastic VAE vs. deterministic AE we totally agree with you and will add a note to specify this unbalance. However one might argue that it is rather that computing an exact reconstruction score may not be suited to evaluate a VAE model whereas it is a good indicator in our deterministic AE. However we did try to train the JT-VAE model in its AE version on 2d structures and we got a lower reconstruction score than the VAE version reported in the article, which we found weird and did not report it.\\n\\n\\n--- Aim of table 1 ---\\nAll in all, the major aim of our table 1 is to give a sense of the representative power of our proposed decoder and the comparisons are just here as an indication as, again, the evaluation context is not the same. We will add a comment to clarify those discrepancies between our model and the prior ones.\"}",
"{\"comment\": \"Dear authors,\\n\\nI believe the reconstruction accuracy comparison in Table 1 is totally unfair. First, all the baseline models (CVAE, GVAE, SD-VAE and JT-VAE) are variational autoencoders, and they computed the reconstruction accuracy by encoding the input molecule with stochastic noises. That is, the latent encoding of x is sampled from the approximate posterior Q(z|x) (which is a Gaussian). It is a stochastic encoding rather than deterministic. \\nHowever, the proposed model in this paper is an autoencoder, and the authors computed the reconstruction accuracy using the deterministic encoding of x. This is the main reason why the proposed model has better performance. In fact, all the baseline models followed Kusner et al.'s evaluation method -- sampling multiple z from Q(z|x) and average the reconstruction accuracy over all stochastic samples.\\n\\nSecond, does the proposed model decodes the stereochemistry (e.g. chirality)? If not, then the comparison is again not under the same scenario. I am asking this because I didn't see any stereochemistry presented in Figure 5 in the appendix. All the baseline models reconstruct both 2D and 3D information of an input molecule. It is important because there is no way to correctly reconstruct a molecule if its stereochemistry is not reconstructed correctly.\\n\\nI think the authors should remove this problematic comparison, or recompute the accuracy of the proposed model so that all models are compared under the same setting. Table 1 is really misleading, especially to reviewers who are outside of this domain.\", \"title\": \"Table 1 reconstruction accuracy comparison is totally unfair\"}"
]
} |
|
B1eSg3C9Ym | MEAN-FIELD ANALYSIS OF BATCH NORMALIZATION | [
"Mingwei Wei",
"James Stokes",
"David J Schwab"
] | Batch Normalization (BatchNorm) is an extremely useful component of modern neural network architectures, enabling optimization using higher learning rates and achieving faster convergence. In this paper, we use mean-field theory to analytically quantify the impact of BatchNorm on the geometry of the loss landscape for multi-layer networks consisting of fully-connected and convolutional layers. We show that it has a flattening effect on the loss landscape, as quantified by the maximum eigenvalue of the Fisher Information Matrix. These findings are then used to justify the use of larger learning rates for networks that use BatchNorm, and we provide quantitative characterization of the maximal allowable learning rate to ensure convergence. Experiments support our theoretically predicted maximum learning rate, and furthermore suggest that networks with smaller values of the BatchNorm parameter achieve lower loss after the same number of epochs of training. | [
"neural networks",
"optimization",
"batch normalization",
"mean field theory",
"Fisher information"
] | https://openreview.net/pdf?id=B1eSg3C9Ym | https://openreview.net/forum?id=B1eSg3C9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkgtQY8oyV",
"Hkg7zXbUJV",
"SygyUt1ByE",
"H1lvsPXITQ",
"SkeeYvXLa7",
"BygxB4QL6m",
"B1e2x7XI6m",
"SyxNantt27",
"r1xtU48Y27",
"Hkx4CIzQ2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544411425361,
1544061707301,
1543989574781,
1541973918820,
1541973879623,
1541973048108,
1541972724306,
1541147835936,
1541133392876,
1540724427965
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1071/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1071/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1071/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1071/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1071/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1071/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1071/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1071/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1071/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1071/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a mean field analysis of the effect of batch norm on optimization. Assuming the weights and biases are independent Gaussians (an assumption that's led to other interesting analysis), they propagate various statistics through the network, which lets them derive the maximum eigenvalue of the Fisher information matrix. This determines the maximum learning rate at which learning is stable. The finding is that batch norm allows larger learning rates.\\n\\nIn terms of novelty, the paper builds on the analysis of Karakida et al. (2018). The derivations are mostly mechanical, though there's probably still sufficient novelty.\\n\\nUnfortunately, it's not clear what we learn at the end of the day. The maximum learning rate isn't very meaningful to analyze, since the learning rate is only meaningful relative to the scale of the weights and gradients, and the distance that needs to be moved to reach the optimum. The authors claim that a \\\"higher learning rate leads to faster convergence\\\", but this seems false, and at the very least would need more justification. It's well-known that batch norm rescales the norm of the gradients inversely to the norm of the weights; hence, if the weight norm is larger than 1, BN will reduce the gradient norm and hence increase the maximum learning rate. But this isn't a very interesting effect from an optimization perspective. I can't tell from the analysis whether there's a more meaningful sense in which BN speeds up convergence. The condition number might be more relevant from a convergence perspective.\\n\\nOverall, this paper is a promising start, but needs more work before it's ready for publication at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"a promising start, but the analysis is mechanical and the maximum learning rate isn't inherently meaningful\"}",
"{\"title\": \"thanks for the response\", \"comment\": \"We thank Reviewer1 for the response. We have performed additional experiments and further address your questions below:\\n\\n1. I might still worry about constant factor multiplying 1/m and would happy to see this effect is indeed suppressed sufficiently.\\n\\nIn other to see the error suppressed by dataset size m, we performed additional experiments on finding maximal learning rate of fully-connected NN with MNIST and ConvNet with CIFAR10, where training dataset size m varies from 5 to 50000 and dataset is randomly sampled from the original dataset. The results are shown as below:\\n\\nfully-connected on MNIST, \\\\gamma = 0.5\\n--------------------------------------------------------------------------------------------------------\\n m | 5 | 10 | 50 | 100 | 500 | 1000 | 5000 |10000| 50000 \\n---------------------------------------------------------------------------------------------------------\\nlog10(eta)| -1.39 | -1.37 | -1.32 | -1.32 | -1.31 | -1.32 | -1.32 | -1.32 | -1.32\\n--------------------------------------------------------------------------------------------------------\\n\\nfully-connected on MNIST, \\\\gamma = 1\\n--------------------------------------------------------------------------------------------------------\\n m | 5 | 10 | 50 | 100 | 500 | 1000 | 5000 |10000| 50000 \\n---------------------------------------------------------------------------------------------------------\\nlog10(eta)| -2.20 | -1.99 | -1.92 | -1.91 | -1.91 | -1.91 | -1.91 | -1.91 | -1.91\\n--------------------------------------------------------------------------------------------------------\\n\\nConNet on CIFAR10, \\\\gamma = 0.5\\n--------------------------------------------------------------------------------------------------------\\n m | 5 | 10 | 50 | 100 | 500 | 1000 | 5000 |10000| 50000 \\n---------------------------------------------------------------------------------------------------------\\nlog10(eta)| 
-1.30 | -1.26 | -1.25 | -1.24 | -1.24 | -1.24 | -1.24 | -1.24 | -1.24\\n--------------------------------------------------------------------------------------------------------\\n\\nConvNet on CIFAR10, \\\\gamma = 1\\n--------------------------------------------------------------------------------------------------------\\n m | 5 | 10 | 50 | 100 | 500 | 1000 | 5000 |10000| 50000 \\n---------------------------------------------------------------------------------------------------------\\nlog10(eta)| -1.91 | -1.85 | -1.82 | -1.83 | -1.82 | -1.83 | -1.82 | -1.82 | -1.82\\n--------------------------------------------------------------------------------------------------------\\n\\nnotice that we used step size of 0.01 when scanning the learning rate values to find the maximal learning rate. We observe that maximal learning rate is increasing with dataset size m when m < 50 and becomes stable and saturated when m > 50 for all cases. These experiments are strong evidence that the error introduced by limited data is indeed suppressed sufficiently in most of the dataset we are interested in. \\n\\nWe hope the addition experiments can address your concern and we will include them in the final version.\\n\\n2. extra typo: Figure 3 caption should be (\\\\log_{10} \\\\eta, \\\\sigma_w) . Also original VGG-16 does not have batch-norm, and it should be made clear that the experiments were done on the modified version of VGG-16.\\n\\nWe apologize for the confusion and we will update it in the final version.\\n\\nThank you again for your review and comments, we hope our response address your concerns.\"}",
"{\"title\": \"thanks for the clarifications\", \"comment\": \"I thank the authors for providing answers to raised questions and clarifications. Also I appreciate the efforts to make the revisions.\\n\\n-- \\\"Derivation of recursion relation also requires large dataset size, m, where the error for finite m is O(1/m). Therefore even for a dataset of size 100, the error is around 1%, and the error introduced by finite m is negligible for most of the frequently-used datasets.\\\"\\n\\nI might still worry about constant factor multiplying 1/m and would happy to see this effect is indeed suppressed sufficiently.\", \"extra_typo\": \"Figure 3 caption should be (\\\\log_{10} \\\\eta, \\\\sigma_w)\\n\\nAlso original VGG-16 does not have batch-norm, and it should be made clear that the experiments were done on the modified version of VGG-16.\"}",
"{\"title\": \"Thanks for your review! Additional experiments and results have been added. Part 1\", \"comment\": \"Thank you very much for your review and helpful comments. We address your specific questions and comments below:\\n\\n1. The main result is an informal bound of the maximum eigenvalue, which is given without proof. Though, the numerical result corresponds to the derived bound.\\n\\nWe omitted some important steps in the proof of the bound for the maximum eigenvalue in the original version. We have updated the detailed proof in the SM of our latest version, and apologize for any confusion this caused.\\n\\n2. The paper is basically well written, but the technical part has several notational problems. For example, there is no definition of \\\"$\\\\otimes$\\\", \\\"$\\\\odot$\\\", and \\\"Hess\\\" operators.\\n\\nThanks for the comments. We have updated the paper and added definitions and explanations for all notations.\\n\\n3. The use of the mean-field theory is an interesting direction to analyze batch normalization. However, in this paper, it seems failed to say some rigorous conclusion. Indeed, all of the theoretical outcomes are written as \\\"Claims\\\" and no formal proof is given. Also, there is no clear explanation of why the authors give the results in a non-rigorous way, where is the difficult part to analyze in a rigorous way, etc.\\n\\n Thanks for raising this issue, and allow us an attempt to clarify. Our approach to estimating the maximal eigenvalue of the FIM for a random neural network involves two assumptions. First, we assume a large layer width in the network so that the behavior of a hidden node can be approximated by Gaussian distribution due to central limit theorem. Second, we assume that the averages for the forward and backward pass in the network are uncorrelated. 
Both assumptions are common and empirically successful in existing literature on mean-field theory of neural networks[1][2], however the second one in particular lacks a rigorous justification. Therefore we present our results as claims instead of theorems to emphasize that additional work is needed to rigorously justify the existing assumptions in the mean field literature generally.\\n \\n To make as explicit as possible our assumptions mentioned above, we have added a clear derivation in our latest version that hopefully will give the reader greater confidence in the rigor of our results.\\n \\n In addition, we acknowledge that the assumptions stated above have not been rigorously justified, albeit being well-accepted in other papers. Thus we performed extensive experiments to test the validity of our theoretical results, finding that indeed the experiments correspond strikingly well to the theory.\"}",
"{\"title\": \"Thanks for your review! Additional experiments and results have been added. Part 2\", \"comment\": \"4. Aside from the rigor issue, the paper heavily depends on the study of Karakida et al. (2018). The derivation of the bound (44) is directly built on Karakida's results such as Eqs. (7,8,20--22), which reduces the paper's originality.\\n The paper also lacks practical value. Can we improve an algorithm or some-thing by using the bound (44) or other results?\\n\\n Although our paper is motivated by their approach, Karakida et al. (2018) have different goals than us, and we significantly extend the framework to address our questions. While Karakida et al. (2018) focuses on studying the statistics of the FIM for vanilla (no BatchNorm) fully-connected neural networks, our aim is to study the role of BatchNorm. Therefore we extend the theory significantly, to both fully-connected and convolutional neural networks, with and without BatchNorm, and derive a new lower bound for ConvNets. We find that adding BatchNorm can greatly reduce the maximal eigenvalue of the FIM, and perform experiments to verify this.\\n \\n A practical upshot of the paper is that faster convergence is linked to smaller \\\\gamma-initialization, which is a new practical finding to our knowledge. To justify this, we have performed additional experiments in the updated version of our paper with VGG16 and Preact-Resnet18 with various \\\\gamma initializations trained on CIFAR-10. We find that a smaller \\\\gamma initialization indeed increases the speed of convergence. This result is included in the SM of the latest version of our paper. 
Thus, we believe that our work has both theoretical and practical value that should be of use to other researchers.\\n \\n More generally, by excluding infeasible regions of parameter space, our analysis can be used for hyperparameter search in more realistic architectures than the fully-connected ones considered in Karakida.\\n\\nThank you again for your review and comments. Hopefully our reply has addressed your questions and concerns.\\n\\n[1] Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations (ICLR), 2017.\\n[2] Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning (ICML), 2018.\"}",
"{\"title\": \"Thanks for your review! Additional experiments and results have been added.\", \"comment\": \"Thank you very much for your review and helpful comments. We address your questions and concerns individually below:\\n\\n1. While mean field analysis a priori works in the limit where networks width goes to infinity for fixed dataset size, the analysis of Fisher and Batch normalization need asymptotic limit of dataset size.\\n\\nThank you for pointing this out. Our derivation of Claim 3.1 from (153) to (154) in SM is based on older definitions of order parameters, where E_{x, y} was replaced by E_{x \\\\neq y}, and therefore the asymptotic limit of large dataset size was required. \\n\\nHowever, based on our new definitions of order parameters, (153) to (155) are exact, and we should have removed (154) and revised Claim 3.1. Therefore in our new version, the asymptotic limit of large dataset size is not required in Claim 3.1. We apologize for this mistake and concomitant confusion.\\n\\nDerivation of recursion relation also requires large dataset size, m, where the error for finite m is O(1/m). Therefore even for a dataset of size 100, the error is around 1%, and the error introduced by finite m is negligible for most of the frequently-used datasets. We have added an explanation of this issue in the latest version of the submission. \\n\\nThe other place where there is a potential issue of large dataset size is in using the empirical FIM to approximate the true FIM in Section 2.1. However, since we are concerned here with the convergence of the learning dynamics on the training set, the empirical FIM is actually sufficient for our analysis. For future work on extending this theory to study generalization, limited dataset size must be taken into account.\\n\\n2. Although some interesting results are provided. The content could be expanded further for conference submission. 
The prediction on maximum learning rate is interesting and the concrete result from mean field analysis...[did this get cut off?]\\nWhile correlation between batch norm \\\\gamma parameter and test loss is also interesting, the provided theory does not seem to provide good intuition about the phenomenon.\\n\\nIndeed, this is correct. Our approach targets exploring the change of the FIM spectrum, and hence the maximal learning rate, with/without BatchNorm, and therefore isn't able to directly make statements about generalization. However, our theory predicts that faster convergence is linked to smaller \\\\gamma-initialization, which is a new practical finding to our knowledge. Following this intuition, we performed additional experiments in the updated version of our paper with VGG16 and Preact-Resnet18, with various \\\\gamma initializations, trained on CIFAR-10. We find that a smaller \\\\gamma initialization indeed increases the speed of convergence. This result can be found in the SM of the latest version of our paper.\\n\\n3. The theory provides the means to compute lower bound of maximum eigenvalue of FIM using mean-field theory. In Figure 1, is \\\\lambda_{max} computed using the theory or empirically computed on the actual network? It would be nice to make this clear.\\n\\nWe are sorry for this confusion. It is computed using the theory and we have clarified this in our latest version. This is also useful in practice because direct numerical calculation of \\\\lambda_max is difficult for realistic deep neural networks due to high computational cost.\\n\\n4. In Figure 2, the observed \\\\eta_*/2 of dark bands in heatmap is interesting.
While most of networks without Batch Norm, performance is maximized using learning rates very close to maximal value, often networks using batch norm the learning rate with maximal performance is not the maximal one and it would be interesting to provide theoretical.\\n\\nThis is indeed an interesting observation, but since our theory can't directly speak to performance (it analyzes the maximal allowed rate instead of the optimal rate), a different approach would be required to explain this phenomenon.\\n\\n5. I feel like section 3.2 should cite Xiao et al (2018). Although this paper is cited in the intro, the mean field analysis of convolutional layers was first worked out in this paper and should be credited.\\n\\nYes certainly, and we apologize for the oversight. We have updated the citation in our latest version.\\n\\nThank you again for your review and comments. Hopefully our reply has addressed your question and concerns.\"}",
"{\"title\": \"Thanks for your review! Additional experiments and results have been added.\", \"comment\": \"Thank you very much for your review and valuable comments. We address your questions and comments below:\\n\\n1. As a baseline, how would the max learning rate behave without BatchNorm? Would the theories again match the experimental result there?\\n\\nWe also wondered how the max learning rate would behave without BatchNorm, and thus we did an experiment for a network without BatchNorm where we varied \\\\sigma_w, the weight initialization variance, and found that the theory again matches the experimental result. However, we didn\\u2019t include this result in the previous draft. We have now added this result to the SM in the new revised version as a baseline.\\n\\n2. Is the presence of momentum important? If I set the momentum to be zero, it does not change the theory about the Fisher information and only affects the dependence of $\\\\eta$ on the Fisher information. In this case would the theory still match the experiments?\\n\\nThe presence of momentum doesn't change the picture dramatically. We set momentum to 0.9 to match the value frequently used in practice. Indeed, changing the momentum only affects the dependency of \\\\eta on the FIM. We have performed an additional experiment on training without momentum and find that in this case the theory still matches the experiment. \\n\\n3. This is a well-written paper with a clean, novel result: when we fix the BatchNorm parameter \\\\gamma, a smaller \\\\gamma stabilizes the training better (allowing a greater range of learning rates). Though in practice the BatchNorm parameters are also trained, this result may suggest using a smaller initialization. \\n\\nThanks for the positive feedback! We performed additional experiments in the updated version of our paper with VGG11 and Preact-Resnet18, with various \\\\gamma-initializations, trained on CIFAR-10. 
We find that a smaller \\\\gamma-initialization indeed increases the speed of convergence. This result can be found in the SM of the latest version of our paper.\\n\\nThank you again for your review and comments. We believe that the inclusion of a baseline without BatchNorm as well as clarification on the role of momentum has improved the results and clarity of the paper.\"}",
"{\"title\": \"Interesting paper\", \"review\": \"This paper studies the effect of batch normalization via a physics style mean-field theory. The theory yields a prediction of maximal learning rate for fully-connected and convolutional networks, and experimentally the max learning rate agrees very well with the theoretical prediction.\\n\\nThis is a well-written paper with a clean, novel result: when we fix the BatchNorm parameter \\\\gamma, a smaller \\\\gamma stabilizes the training better (allowing a greater range of learning rates). Though in practice the BatchNorm parameters are also trained, this result may suggest using a smaller initialization.\", \"a_couple_of_things_i_was_wondering\": \"-- As a baseline, how would the max learning rate behave without BatchNorm? Would the theories again match the experimental result there?\\n\\n-- Is the presence of momentum important? If I set the momentum to be zero, it does not change the theory about the Fisher information and only affects the dependence of \\\\eta on the Fisher information. In this case would the theory still match the experiments?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting application of MFT on FIM to understand Batch Normalization\", \"review\": \"Interesting application of MFT on FIM to understand Batch Normalization\\n\\nThis paper applies mean field analysis to networks with batch normalization layers. Analyzing maximum eigenvalue of the Fisher Information Matrix, the authors provide theoretical evidence of allowing higher learning rates and faster convergence of networks with batch normalization. \\n\\nThe analysis reduces to providing lower bound for maximum eigenvalue of FIM using mean-field approximation. Authors provide lower bound of the maximum eigenvalue in the case of fully-connected and convolutional networks with batch normalization layers. Lastly authors observe empirical correlation between smaller \\\\gamma and lower test loss.\", \"pro\": [\"Clear result providing theoretical ground for commonly observed effects.\", \"Experiments are simple but illustrative. It is quite surprising how well the maximum learning rate prediction matches with actual training performance curve.\"], \"con\": [\"While mean field analysis a-priori works in the limit where networks width goes to infinity for fixed dataset size, the analysis of Fisher and Batch normalization need asymptotic limit of dataset size.\", \"Although some interesting results are provided. The content could be expanded further for conference submission. The prediction on maximum learning rate is interesting and the concrete result from mean field analysis\", \"While correlation between batch norm \\\\gamma parameter and test loss is also interesting, the provided theory does not seem to provide good intuition about the phenomenon.\"], \"comments\": [\"The theory provides the means to compute lower bound of maximum eigenvalue of FIM using mean-field theory. In Figure 1, is \\\\bar \\\\lambda_{max} computed using the theory or empirically computed on the actual network? 
It would be nice to make this clear.\", \"In Figure 2, the observed \\\\eta_*/2 of dark bands in heatmap is interesting. While most of networks without Batch Norm, performance is maximized using learning rates very close to maximal value, often networks using batch norm the learning rate with maximal performance is not the maximal one and it would be interesting to provide theoretical\", \"I feel like section 3.2 should cite Xiao et al (2018). Although this paper is cited in the intro, the mean field analysis of convolutional layers was first worked out in this paper and should be credited.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Theoretical but not rigorous\", \"review\": \"In this paper, the effect of batch normalization on the maximum eigenvalue of the Fisher information is analyzed. The technique is mostly developed by Karakida et al. (2018). The main result is an informal bound on the maximum eigenvalue, which is given without proof. Still, the numerical result corresponds to the derived bound.\\n\\nThe paper is basically well written, but the technical part has several notational problems. For example, there is no definition of the \\\"\\\\otimes\\\", \\\"\\\\odot\\\", and \\\"Hess\\\" operators.\\n\\nThe use of the mean-field theory is an interesting direction to analyze batch normalization. However, in this paper, it seems to fall short of a rigorous conclusion. Indeed, all of the theoretical outcomes are written as \\\"Claims\\\" and no formal proof is given. Also, there is no clear explanation of why the authors give the results in a non-rigorous way, where the difficulty in a rigorous analysis lies, etc. \\n\\nAside from the rigor issue, the paper heavily depends on the study of Karakida et al. (2018). The derivation of the bound (44) is directly built on Karakida's results such as Eqs. (7,8,20--22), which reduces the paper's originality.\\n\\nThe paper also lacks practical value. Can we improve an algorithm or something by using the bound (44) or other results?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1gBgnR9Y7 | End-to-end learning of pharmacological assays from high-resolution microscopy images | [
"Markus Hofmarcher",
"Elisabeth Rumetshofer",
"Sepp Hochreiter",
"Günter Klambauer"
] | Predicting the outcome of pharmacological assays based on high-resolution microscopy
images of treated cells is a crucial task in drug discovery which tremendously
increases discovery rates. However, end-to-end learning on these images
with convolutional neural networks (CNNs) has not been ventured for this task
because it has been considered infeasible and overly complex. On the largest
available public dataset, we compare several state-of-the-art CNNs trained in an
end-to-end fashion with models based on a cell-centric approach involving segmentation.
We found that CNNs operating on full images containing hundreds
of cells perform significantly better at assay prediction than networks operating
on a single-cell level. Surprisingly, we could predict 29% of the 209 pharmacological
assays at high predictive performance (AUC > 0.9). We compared a
novel CNN architecture called “GapNet” against four competing CNN architectures
and found that it performs on par with the best methods and at the same time
has the lowest training time. Our results demonstrate that end-to-end learning on
high-resolution imaging data is not only possible but even outperforms cell-centric
and segmentation-dependent approaches. Hence, the costly cell segmentation and
feature extraction steps are not necessary, in fact they even hamper predictive performance.
Our work further suggests that many pharmacological assays could
be replaced by high-resolution microscopy imaging together with convolutional
neural networks. | [
"Convolutional Neural Networks",
"High-resolution images",
"Multiple-Instance Learning",
"Drug Discovery",
"Molecular Biology"
] | https://openreview.net/pdf?id=S1gBgnR9Y7 | https://openreview.net/forum?id=S1gBgnR9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BklhDylAJN",
"B1xmKZJcRm",
"H1gG1xkqRX",
"SklS6DpF0Q",
"rye6wDTFRm",
"rJgQ_qzq3Q",
"BkxIVj-92m",
"SJeOBN-qh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544580964026,
1543266682743,
1543266266379,
1543260092932,
1543260004956,
1541184107310,
1541180206268,
1541178432256
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1070/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1070/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1070/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1070/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1070/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1070/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1070/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1070/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This work studies the performance of several end-to-end CNN architectures for the prediction of biomedical assays in microscopy images. One of the architectures, GAPnet, is a minor modification of existing global average pooling (GAP) networks, involving skip connections and concatenations. The technical novelties are low, as outlined by several reviewers and confirmed by the authors, as most of the value of the work lies in the empirical evaluation of existing methods, or minor variants thereof.\\n\\nGiven the low technical novelty and reviewer consensus, the recommendation is to reject; however, the area chair recognizes that the discovered utility may be of value for the biomedical community. Authors are encouraged to use reviewer feedback to improve the work, and submit to a biomedical imaging venue for dissemination to the appropriate communities.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting study applying CNNs to prediction of assays, but work is perhaps more suited for a biomedical imaging journal.\"}",
"{\"title\": \"Response does not change view\", \"comment\": \"I appreciate the authors' responses to my comments; however, they do not really address my concerns about the contribution of the empirical comparison. I believe a revised version of the paper which addresses some of the questions which are still open (in particular, 3 and 4) would significantly improve the contribution.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for his insightful comments and questions.\\n\\n=== Major comments \\nWe agree with the reviewer that this paper empirically evaluates different microscopy analysis approaches. \\nIndeed, our custom-designed architecture \\u201cGapNet\\u201d has been included in the analysis, but this does not undercut neutrality. \\nAll architectures were adapted to this task, had the chance to adjust their most important hyperparameters \\non a validation set, and have been compared on an independent test set. \\nWe apologize that the discussion section does not meet the expectations of the reviewer.\", \"regarding_a_more_in_depth_discussion_of_results\": \"(1) Do the end-to-end systems perform well on the same assays? \\nYes, there is an overlap of 78%-89% between models on assays with AUC >0.9.\\n\\n(2) Would ensemble improve performance?\\n\\nAs the reviewer points out, an ensemble of multiple models might increase predictive performance given the \\nhigh overlap across highly-predictive assays for end-to-end models. \\nUnfortunately, we did not have the capacity to train ensemble approaches at this time.\\n\\n(3) How representative is Figure 5?\\n\\nThe reviewer raises a very interesting question here. We are looking into this by using \\ncontribution analysis (e.g. Integrated Gradients) to identify visual cues indicating bioactivity. \\nHowever, due to the large number of samples and their complex nature this is still an ongoing process.\\n\\n(4) Discussion of FNN outperforming CNN\\n\\nSimilar to the previous question, we are still in the process of gaining insights here.
\\nWith respect to this specific assay it seems that the CNN-based method is unable to predict any \\nsample correctly as active (although it is capable of correctly predicting inactive samples).\\nFrom a first visual inspection we are unsure what causes this.\\n\\n(5) Sensitivity with respect to the number of labeled examples per assay\\n\\nWe checked the relation between the number of labeled examples per assay and its AUC for all models, and overall there is only a slight positive trend. Both end-to-end models as well as the FNN-based approach are able to achieve high AUCs for assays with a high number of labeled samples (200 and more) as well as for those with a lower number of labeled samples (starting with ~50 samples we see AUCs of >0.9). We added a figure that displays this association (Appendix Figure A1). \\n\\n(6) Informative compounds and binarized labels\\n\\nWe agree with the reviewer that modeling this as a regression task could provide valuable insights \\nor be of more interest to practitioners. \\nHowever, we aim at being comparable to the work of Simm et al. (2018) who also considered \\nthis as a classification task. Nonetheless, we plan to include modelling of the regression \\ntask in a future version of this work.\\n\\n\\n=== Minor comments \\n1) Are individual images from the same sample image always in only the training, validation, or testing set?\\n\\nYes, we took care that multiple images from the same sample are always in the same fold. \\nWe now state this more clearly in the manuscript.\\n\\n2) I did not find the dataset construction description very clear.\\n\\nWe apologize for not stating the process clearly and have improved the description of the \\ndata set, which hopefully makes this clearer. \\n\\n3) Pipeline and data collection\\n\\nWe have improved the pipeline description in the paper.\\nRegarding dataset collection, yes, all images have been captured with the same microscope and magnification and are of the same cell line (U2OS).
\\nDue to the absence of a data set comprising images from multiple devices and labs, it is yet unclear how robust the predictive performance is with respect to variance arising from those sources.\\n\\n4) Convergence of different architectures\\n\\nActually, MIL-Net takes the most epochs to converge on the validation set, only reaching good performance after about 90 epochs. In contrast GAP-Net, DenseNet and ResNet converge somewhere around 50-60 epochs. M-CNN converges after roughly 40 epochs and the FNN after 65 epochs.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for this assessment of our work.\\n\\nIndeed, we believe that our contribution lies in the assembly of a novel and \\nrelevant prediction task, and that we showed that end-to-end learning outperforms other \\napproaches at this task. Although the machine learning community might have \\ncommonly hypothesized that the superiority of end-to-end learning would \\nalso hold in this area, our empirical evaluation is the first to demonstrate this. \\nWe also suggest a much more compact architecture, GapNet, that given a fixed time-span \\nallows searching many more hyperparameter settings compared to other architectures.\", \"other_notes\": \"1) Ad Table 1\\nThe error bars in Table 1 are standard deviations of the AUC and F1 scores across prediction tasks. \\nWe apologize for not making this clear before; we now state it explicitly in the table caption.\\n\\n2) Ad Figure 3\\nThank you for pointing out the missing reference; we corrected this mistake in the manuscript.\\nAlso, the *\\u2019s represent outliers in this box plot.\\n\\n3) Ad Figure 5 \\nFigure 5 shows examples associated with active compounds according to a specific assay. \\nThe two bigger images on top are individual samples, while the two smaller images below \\neach of these show the enlarged crops marked with red rectangles.\\n\\n4) Be clear what you mean when you refer to \\u201cupper layers\\u201d of a network \\nThank you for pointing this out; we changed the wording to \\u201cdeepest layers\\u201d, which makes this clearer.\\n\\n5) Relevance to other cases\\n\\nWe thank the reviewer for this valuable comment that we had not addressed. \\nIndeed, there are many assays, such as the micronucleus test, that are actual imaging readouts. \\nHowever, the assays that we modelled here are cell-based assays whose association with \\nthis particular cell line and stains had not been investigated.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for his/her positive comment on our manuscript.\\n\\nRegarding questions 1) and 2), we apologize for the confusing explanation of the process to generate our label matrix; we have tried to state the process more clearly in the paper.\\n\\nThe final label matrix has 10,574 compounds and 209 assays. To arrive at this, we perform the following steps:\\n 1) Starting with the full ChEMBL database, extract all compounds for which we have microscopy images (11,585)\\n 2) Extract all assays for which at least one compound with a pChEMBL value between 4 and 10 is present. \\n 3) Then, we apply three thresholds to this matrix (5.5, 6.5 and 7.5) with values above the threshold indicating an \\n active compound and below an inactive compound.\\n 4) These three binary matrices are concatenated along the assay dimension.\\n 5) Next, we also use the \\u201cactivity comment\\u201d field of ChEMBL which directly gives us a binary matrix.\\n 6) We concatenate the three thresholded matrices and the activity comment matrix.\\n 7) Finally, we keep only assays (or assay/threshold combinations) with at least 10 active and 10 inactive compounds. \\n 8) This results in some compounds now having no measurement in the remaining assays and by removing these we \\n arrive at 10,574 compounds and 209 assays.\\n\\nRegarding question 3), the 4 channels stated on page 3 were actually a typo - we now correctly state 5 channels. We apologize for this error and the unclear explanation of channels, views, and images. \\nThe microscope takes six adjacent images, called \\u201cviews\\u201d, for each sample. These six views can be stitched together to get one image per sample (they are arranged in a 2x3 grid). Each of these images, or views, has five channels that correspond to five different fluorescent dyes.
We improved this description in the main manuscript.\", \"ad_question_4\": \"As the reviewer correctly pointed out, GapNet is on par with other architectures w.r.t. predictive performance, but faster w.r.t. computation speed.\"}",
"{\"title\": \"The paper introduces Gapnet, which uses a CNN architecture to learn pharmacological assays from high-resolution microscopy images. The paper deals with a valid problem of handling images in a segmentation-agnostic way.\", \"review\": \"The paper is well written and deals with a valid and crucial end-to-end imaging problem.\\n\\nComments\\n1) Section 2: It is not clear how 10574 compounds increase to 11585 (2nd paragraph page 3). Also, how does one arrive at 11171 compounds (para 3)? \\n2) How do you arrive at 209 assays from 10818?\", \"do_consider_enumerating_this_section\": \"data dimensions you started with and then how the dimensions were reduced per step. I gather you have mentioned this but it is confusing to grasp, at this point.\\n\\n3) On page 2, you mention the images have 5 channels but towards the end of the section on page 3, it says 1) views have \\u20186\\u2019 such images per sample image and 2) 4 channels for stains. How many stains are there per channel and how are 5 channels related to the \\u20186\\u2019 and 4 channels? \\n\\n4) In Section 4 and Appendix 6, it does not seem that Gapnet outperforms other architectures; rather, it is on par with them. Is the only gain with Gapnet the runtime across epochs?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An empirical study with little analysis\", \"review\": \"Edit: changed \\\"Clarity\\\"\\n\\n[Relevance] Is this paper relevant to the ICLR audience? yes\\n\\n[Significance] Are the results significant? no\\n\\n[Novelty] Are the problems or approaches novel? no\\n\\n[Soundness] Is the paper technically sound? okay\\n\\n[Evaluation] Are claims well-supported by theoretical analysis or experimental results? marginal\\n\\n[Clarity] Is the paper well-organized and clearly written? no\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\", \"seen_submission_posted_elsewhere\": \"No\", \"detailed_comments\": \"In this work, the authors compare several state-of-the-art approaches for high-resolution microscopy analysis to predict coarse labels for the outcomes of pharmacological assays. They also propose a new convolutional architecture for the same problem. An empirical comparison on a large dataset suggests that end-to-end systems outperform those which first perform a cell segmentation step; the predictive performance (AUC) of almost all the end-to-end systems is statistically indistinguishable.\\n\\n=== Major comments\\n\\nThe paper is primarily written as though its main contribution is as an empirical evaluation of different microscopy analysis approaches. Recently, there have been a large number of proposed approaches, and I believe a neutral evaluation of these approaches on datasets other than those used by the respective authors would be a meaningful contribution. However, the current paper has two major shortcomings that prevent it from filling such a role.\\n\\nFirst, the authors propose a novel approach and include it in the evaluation. This undercuts claims of neutrality. (Minor comments about the proposed approach are given below.) \\n\\nSecond, the discussion of the results of the empirical evaluation is restricted almost solely to repeating in text what the tables already show.
Further, the discussion focuses only on the \\u201ctop line\\u201d numbers, with the exception of a deep look at the Gametocytocidal compounds screen. It would be helpful to instead (or additionally) identify meaningful trends, supported by the data acquired during the experiments. For example: (1) Do the end-to-end systems perform well on the same assays? (2) Would a simple ensemble approach improve things? if they perform well on different assays, then that suggests it might. (3) What are the characteristics of the assays on which the CNN-based approaches perform well or poorly (i.e., how representative is Figure 5)? (4) What happens when the FNN-based approach outperforms the CNN-based ones? in particular, what happens in A13? (5) How sensitive are the approaches to the number of labeled examples of each assay type? (6) Are there particular compounds which seem particularly informative for different assays?\\n\\nA second major concern is whether the binarized version of this problem (i.e., assay result prediction) is of interest to practitioners. In many contexts, quantitative information is also important (\\u201chow much of a response do we see?\\u201d). While one could imagine the rough qualitative predictions (\\u201cdo we see a response?\\u201d) shown here as an initial filtering step, it is hard to believe that the approach proposed here would replace other more informative analysis approaches. \\n\\n=== Minor comments\\n\\nAre individual images from the same sample image always in only the training, validation, or testing set? that is, are there cases where some of the individual images from a particular sample image are in the training set, while others from that sample image are in the testing set?\\n\\nI did not find the dataset construction description very clear. Does each row in the final, 10 574 x 209 matrix correspond to a single image? Does each image correspond to a single row? 
For example, it seems as though multiple rows may correspond to the same image (up to four? the three pChEMBL thresholds as well as the activity comment). What is the order in which the filtering and augmenting happens? It would be very helpful to provide a coherent, pipeline description of this (say, in an appendix).\\n\\nDo all the images in the dataset come from the same microscope (and cell line) at the same resolution, zoom, etc.? If so, it is unclear how well this approach may work for images which are more heterogeneous. There are not very many datasets of the size described (I believe, at least) available. This may significantly limit the practical impact of this work.\\n\\nHow many epochs are required for convergence of the different architectures? For example, MIL-net has significantly fewer parameters than the others; does it converge on the validation set faster?\\n\\n=== Typos, etc.\\n\\nThe references are not consistently formatted.\\n\\n\\u201cnot loosing\\u201d -> \\u201cnot losing\\u201d\\n\\u201cdoesn\\u2019t\\u201d -> \\u201cdoes not\\u201d\", \"rating\": \"3: Clear rejection\"}",
"{\"title\": \"A new and interesting application but the strength of original contributions is unclear\", \"review\": \"The authors explore the possibility of using an end-to-end approach for predicting pharmacological assay outcome using fluorescence microscopy images from the public Cell Painting dataset. In my view, the primary contributions are the following: an interesting and relatively new application (predicting assay outcomes), enriching the CellPainting dataset with drug activity data, and a comparison of several relevant methods and architectures. The technical novelty is weak, and although the authors demonstrate that end-to-end holistic approaches outperform previous segmentation-and-feature-extraction approaches, this result is not surprising and has been previously reported in closely related contexts.\\n\\n\\nOVERVIEW\\n\\nThe authors evaluate the possibility of using and end-to-end deep learning approach to predict drug activity using only image data as input. The authors repurpose the CellPainting dataset for activity prediction by adding activity data from online ChEMBL databases. If made available as promised, the dataset will be a valuable resource to the community. The authors compare a number of previous approaches and state-of-the-art image classification network architectures to evaluate the use of CNNs instead of more classical image analysis pipelines. The comparison is a strong point of the paper, although some details are lacking. For example, the authors claim that GapNet is the quickest method to train, and while they report the number of hyperparameters and time per epoch, the number of epochs trained is never mentioned. \\n\\nThe authors propose an architecture (GapNet) for the assay prediction task. While the way Global Average Pooling is used to extract features at different stages in the network might be new, it is a straightforward combination of GAP and skip connections. 
Little insight into why this approach is more efficient or evidence for its effectiveness is provided. Similarly, more explanation of why dilated convolutions and SELU activations are used would be appreciated. A comparison between GapNet and the same network without the GAP connections could possibly provide a more interesting comparison and might also provide a more persuasive argument as to why GapNet should be used. Ultimately, the benefit of using GapNet over the other architectures is not strongly motivated, as training time is less of a concern in this application than predictive power.\\n\\n\\nRELATED WORK\\n\\nThe authors present previous work in a clear and comprehensive manner. However, the reported finding that \\u201cCNNs operating on full images containing hundreds of cells can perform significantly better at assay prediction than networks operating on a single-cell level\\u201d is not surprising, and partial evidence of this can be found in the literature. In [1], it was shown that penultimate feature activations from pre-trained CNNs applied to whole-image fluorescence microscopy data (MOA prediction) outperform the baseline segmentation-then-feature extraction method (FNN). Similarly, in [2] (the paper proposing MIL-Net), it is shown that end-to-end whole-image CNN learning for protein localization outperforms the baseline (FNN). In [3], whole-image end-to-end learning outperforms whole-image extracted features for a phenotyping task. All of these references use fluorescence microscopy data similar to the dataset in this work.\\n\\n[1] Pawlowski, Nick, et al. \\\"Automating morphological profiling with generic deep convolutional networks.\\\" bioRxiv (2016): 085118.\\n[2] Kraus, Oren Z., Jimmy Lei Ba, and Brendan J. Frey. \\\"Classifying and segmenting microscopy images with deep multiple instance learning.\\\" Bioinformatics 32.12 (2016): i52-i59.\\n[3] Godinez, William J., et al. 
\\\"A multi-scale convolutional neural network for phenotyping high-content cellular images.\\\" Bioinformatics 33.13 (2017): 2010-2019.\\n\\n\\nAPPROACH\\n\\nThe authors compile and enrich the CellPainting dataset with activity data from various drug discovery assays. In my view, the creation of this dataset is the strongest and most valuable contribution of the paper. The method used to collect the data is described clearly, and the choices made when compiling the dataset, including the thresholds and combinations of activity measures, seem well founded.\\n\\nThe authors then identify a number of approaches that are relevant for the problem at hand, binary prediction of drug activity based on image data. These include previous approaches used for cell images and modern image classification networks.\\n\\n\\nEXPERIMENTS\\n\\nThe different approaches/networks mentioned above were evaluated on a test set. The results indicate that end-to-end CNN approaches outperform all non-end-to-end approaches, with no significant difference between the individual end-to-end CNNs. The results are stated clearly and the presentation of different metrics is a nice addition to properly compare the results. It would however contribute valuable information if the authors stated how the confidence intervals of the F1 score are calculated (are the experiments based on several runs of each network, or how is it done?).\\n\\n\\nNOVELTY/IMPACT\\n\\n+ Creation of a new dataset on a new and interesting problem \\n+ Useful comparison of modern networks on the task\\n- GapNet - lacking technical novelty, insight, and performance is unconvincing\\n- Demonstrates that end-to-end learning outperforms the cell-centric approach - was this really surprising or even new information?\", \"other_notes\": [\"Figure 3 is never mentioned in the main text\", \"Figure 3 (*\\u2019s) are confusing. Do they represent outliers? 
Statistical significance tests?\", \"Figure 5 which panel is which?\", \"Be clear what you mean when you refer to \\u201cupper layers\\u201d of a network\", \"An important point not mentioned: in practice, many assays use stains that are closely tied to the readout, unlike the dataset here which provides only landmark stains. The results found here do not necessarily apply in other cases.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1eVe2AqKX | PCNN: Environment Adaptive Model Without Finetuning | [
"Boyuan Feng",
"Kun Wan",
"Shu Yang",
"Yufei Ding"
] | Convolutional Neural Networks (CNNs) have achieved tremendous success for many computer vision tasks, which shows a promising perspective of deploying CNNs on mobile platforms. An obstacle to this promising perspective is the tension between intensive resource consumption of CNNs and limited resource budget on mobile platforms. Existing works generally utilize a simpler architecture with lower accuracy for a higher energy-efficiency, \textit{i.e.}, trading accuracy for resource consumption. An emerging opportunity to both increasing accuracy and decreasing resource consumption is \textbf{class skew}, \textit{i.e.}, the strong temporal and spatial locality of the appearance of classes. However, it is challenging to efficiently utilize the class skew due to both the frequent switches and the huge number of class skews. Existing works use transfer learning to adapt the model towards the class skew during runtime, which consumes resource intensively. In this paper, we propose \textbf{probability layer}, an \textit{easily-implemented and highly flexible add-on module} to adapt the model efficiently during runtime \textit{without any fine-tuning} and achieving an \textit{equivalent or better} performance than transfer learning. Further, both \textit{increasing accuracy} and \textit{decreasing resource consumption} can be achieved during runtime through the combination of probability layer and pruning methods. | [
"Class skew",
"Runtime adaption"
] | https://openreview.net/pdf?id=S1eVe2AqKX | https://openreview.net/forum?id=S1eVe2AqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkx6RZ_iyV",
"HylkPTJ0hX",
"S1glW4P5hX",
"HJl1rrQdnX",
"r1eLxATDhm",
"Syen8Xawn7",
"SkgZr2nwh7",
"SkgDx7nD3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1544417748718,
1541434711345,
1541202935961,
1541055799201,
1541033453640,
1541030740022,
1541028920596,
1541026542777
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1069/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1069/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1069/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1069/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1069/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1069/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1069/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1069/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers rate the paper as below threshold. While the authors responded to an earlier request for clarification, there is no rebuttal to the actual reviews. Thus, there is no basis by which the paper can be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview: no rebuttal\"}",
"{\"title\": \"Simple Idea, Good Results But Novelty? Detailed Analysis?\", \"review\": \"The paper proposes a simple idea to calibrate the probabilities output by a CNN model to adapt easily to environments where class distributions change with space and time (and are often skewed). The paper shows that such a simple approach is sufficient to get good accuracies without requiring any costly retraining or transfer learning, thereby giving benefits in terms of resource consumption while at the same time giving better results than the state of the art.\\n\\nHowever, \\nA] The proposed calibration doesn't take any CNN-specific details into consideration; rather, it is a general calibration method which was also proposed in Saerens et al., 2002 (cited in the paper). It is unclear why the paper specifically talks about CNNs.\\nB] The proposed Class Skew Detector is a simple method. Change-point detection is a well-studied area. The paper lacks a literature review in this area and a reasoning of why the proposed approach is preferred. Also, an independent analysis of how the class skew detector behaves in the face of rapidly changing class skews versus slowly changing class skews is warranted here. Particularly, given that the paper proposes to use this approach on mobile, which may need to work under both rapidly and slowly changing class skews.\\nC] The Class Skew Detector is dependent on the base model. Thus, it is also likely that the empirical distribution estimated is biased, and yet the final accuracies reported are much higher than the base model accuracies. There is something interesting happening here. An analysis of the robustness of the proposed approach in the face of noisy class skew detection could potentially make this paper a stronger work.\\nD] The analysis in the paper has largely focused on pre-trained models. However, another analysis that could have been useful here is varying the quality of the classifier (e.g. classifier trained on skewed training data vs. 
balanced training data) and measuring how the quality of the classifier correlates with the final performance. Maybe even attempt to answer the question \\\"which classifiers are likely to work with this approach?\\\" In fact, this analysis can be either done in a general context of any classifier or just CNN's and identifying whether certain properties of CNN help in getting better performance.\\n\\nThe paper lacks novelty and at the same time, it is not quite compensating that with a detailed analysis of the work. The problem is interesting and I like the work because the approach is simple and the results look good. I think with a stronger focus on more detailed analysis, this can be a good submission to an applied conference like MobiCom etc.\\n\\nBy the way, the paper is riddled with several spelling errors - \\n\\\"filed\\\" -> \\\"field\\\", page 1, second paragraph, last line\\n\\\"complimentary\\\" -> \\\"complementary\\\", page 2, section 2, paragraph 1, last line\\n\\\"epoches\\\" -> \\\"epochs\\\", page 2, section 2, transfer learning, second paragraph, second last line\\n\\\"CNNs does not use\\\" -> \\\"CNNs do not use\\\", page 3, section 3, intuition, first paragraph, first line\\n\\\"formular\\\" -> \\\"formula\\\", page 4, above equation 4\\nEquation 4 has a typo in the denominator, P_t(i) should be P_t(j), same with Equation 5\\n\\\"obstained\\\" -> \\\"obtained\\\", page 7, second paragraph, first line\\n\\\"adaptation\\\" is almost everywhere spelled as \\\"adaption\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"No technical contribution, Heuristic solution\", \"review\": \"This paper proposes a way to detect a skew in the distribution of classes in a stream of images and reweight the class priors accordingly, to estimate the final posterior probabilities of the present classes. This probability re-calibration is referred to as the probability layer. A simple algorithm is proposed to detect the class distribution skew. The proposed benefit of this method is that it does not require fine-tuning any network parameters using newly skewed data.\\n\\nOverall the method is quite simple and heuristic. The technical contribution - i) updating class priors online and ii) detecting class skews - is marginal. \\n\\nThe evaluation is performed on a contrived setting of skewed imagenet images. I would have liked to see some evaluation on video stream data where the skews are more natural. \\n\\nIn real scenarios, the class-specific appearances P_{X|Y}(x|i) as well as the class distributions P_Y(i) change online. The method seems incapable of handling such problems. In these situations, there is no simple fix, and one needs to resort to transfer learning.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clarification on section 4.\", \"comment\": \"Dear reviewer:\\n\\nThanks for your comment! (This clarification may be easier to read in the pdf version (https://drive.google.com/file/d/17wFjCrhnNjcoIeV5v537gvcw9bH3KekX/view?usp=sharing) due to the latex equations.)\\n\\n\\nWe estimate $P_t(i)$ with the \\\\textit{empirical class distribution} [1] in a short time window (every $\\\\omega_{min}$ frames). In algorithm 1, $y_t$ indicates the prediction result for the $t$-th input frame classified by the model $h(\\\\cdot)$. $S_j$ indicates the empirical distribution in the $j$-th time window (the utilization of time windows will be justified in the \\\\textbf{Assumption} paragraph) and $\\\\oplus$ indicates the concatenation of two distributions. $S_j \\\\leftarrow S_j \\\\oplus [y_t]$ means that every new prediction result $y_t$ will be incorporated into the \\\\textit{empirical class distribution} $S_j$ composed of all $\\\\omega_{min}$ predictions, which is computed as follows:\\n\\\\begin{equation}\\n S_j(i) = \\\\frac{1}{\\\\omega_{min}} \\\\sum_{t=1}^{\\\\omega_{min}}\\\\mathbbm{1}_{y_t \\\\leq i} \\n\\\\end{equation}\\n. The $P_t(i)$ can be derived from the empirical class distribution $S_j(i)$ by \\n\\\\begin{equation}\\n P_t(i) = S_j(i) - S_j(i-1)\\n\\\\end{equation}\\n\\n\\nThe if statement $|| S_{j-1}, S_j|| \\\\leq \\\\pi_r$ is proposed for detecting the switch of class skew, as detailed in the following.\\n\\n\\n\\\\paragraph{Assumption.}As described in the first paragraph of section 4, the only assumption we hold is that the class skew in a scenario remains unchanged. Formally, let us assume the existence of a partition (scenario for a class skew) $\\\\pi: N^+ \\\\rightarrow N^+$ over the input stream, where $\\\\pi(t)$ refers to the class skew that the $t$-th image belongs to. Here, each partition maintains a distribution $T_{\\\\pi(t)}$ and the image $(x_t, y_t)$ is drawn randomly (\\\\textit{i.i.d.}) from distribution $T_{\\\\pi(t)}$. 
Here, the overall series is composed of a sequence of abruptly-changing partitions and the distribution within each partition remains the same. This is a very weak but realistic assumption, since we do not have any other assumptions on how long a stationary distribution exists. Thus our proposed algorithm needs to not only detect the underlying distribution $T_{\\\\pi(t)}$ ($P_t(i)$ is the probability of each class $i$ in the distribution $T_{\\\\pi(t)}$), but also recognize the start time and end time for each partition $\\\\pi(t)$ (class skew) in an untrimmed stream of data.\\n\\n\\\\paragraph{Proposed approach.}As described in the second paragraph of section 4, we propose a windowed class skew detector to approximate the underlying distribution, as well as the start time and end time for each partition $\\\\pi(t)$ (class skew). Here, the empirical distribution $S_j$ in each window $j$ can be obtained to estimate $P_t(i)$. Further, the start time and end time of each partition $\\\\pi(t)$ (class skew) can be decided when there is a dramatic change between the empirical class distributions $S_{j-1}$ and $S_{j}$ from adjacent windows $j-1$ and $j$. A dramatic change is declared when\\n\\\\begin{equation}\\n \\\\underset{i}{\\\\text{sup}} | S_{j}(i) - S_{j-1}(i) | \\\\geq \\\\frac{\\\\pi_r}{\\\\omega_{min}}\\n\\\\end{equation}\\n, where $\\\\omega_{min} = 30$ and $\\\\pi_r = 2$ in our evaluations.\\n\\n\\\\paragraph{Edge case when class skew switches.}Our proposed probability layer can handle the edge case, \\\\textit{i.e.}, when a small perturbation to the class skew has happened. For example, $10$ people stay in a lab and a stranger suddenly visits. This edge case is handled by the weak class skew (p<1) in our evaluation section.\\n\\nWe apologize for not providing enough detail for Algorithm 1. 
We will revise it in our final version.\\n\\n\\\\begin{wrapfigure}{R}{0.35\\\\textwidth}\\n \\\\begin{minipage}{0.35\\\\textwidth}\\n \\\\begin{algorithm}[H]\\n \\\\caption{CSD algorithm}\\n \\\\begin{algorithmic}\\n \\\\Function{CSD}{$ $} \\\\label{alg: WEG}\\n \\\\For{$t$ in $1, ..., w_{min}$}\\n \\\\State $y_t \\\\leftarrow h(t)$\\n \\\\State $S_j \\\\leftarrow S_j \\\\oplus [y_t]$\\n \\\\EndFor\\n \\\\If{$|| S_{j-1}, S_j|| \\\\leq \\\\pi_r$}\\n \\\\State $S_j \\\\leftarrow S_{j-1} \\\\oplus S_j$\\n \\\\EndIf\\n \\\\State \\\\Return $S_j$\\n \\\\EndFunction\\n \\\\end{algorithmic}\\n \\\\label{alg: algorithm}\\n \\\\end{algorithm}\\n \\\\end{minipage}\\n\\\\end{wrapfigure}\\n\\n[1] J. Shao.Mathematical Statistics. Springer Texts in Statistics. Springer, 2003.\"}",
"{\"title\": \"Thanks for this response\", \"comment\": \"Thanks for this response,\\n\\nCan you also expand on section 4. The notation used in algorithm 1 is not detailed. \\nWhat is of particular interest is how Pt(i) is computed.\"}",
"{\"title\": \"Clarification on the transition from equation 3 to equation 4\", \"comment\": \"Dear reviewer,\\n\\nThanks for your comment! There is a small typo in equation 4, which has no influence on other parts of our paper. (This clarification may be easier to read in the pdf version (https://drive.google.com/file/d/1M1t0CjZWmcolfELb-kkqKVWtcqR9A6mg/view?usp=sharing) due to the latex equations.)\\n\\nInstead of \\n\\\\begin{equation*}\\n P_t(i|X) = \\\\frac{\\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X)}{\\\\sum_{j=1}^n \\\\frac{P_t(i)}{P(j)} \\\\cdot P(j|X)}\\n\\\\end{equation*},\\nit should be \\n\\\\begin{align*}\\n P_t(i|X) = \\\\frac{\\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X)}{\\\\sum_{j=1}^n \\\\frac{P_t(j)}{P(j)} \\\\cdot P(j|X)}\\n\\\\end{align*}.\\nNote that the single $i$ in the denominator has been replaced by $j$.\\n\\nThe following is the detailed proof of the transition from equation 3 to equation 4.\\n\\nIn equation 3, we have $P_t(i|X) = \\\\frac{P_t(i)}{P(i)} \\\\cdot \\\\frac{P(X)}{P_t(X)} \\\\cdot P(i|X)$. We also have $\\\\sum_{i=1}^n P_t(i|X) = 1$, based on the properties of probability. Together, we can find that \\n\\\\begin{align*}\\n 1 & = \\\\sum_{i=1}^n P_t(i|X) \\\\\\\\\\n & = \\\\sum_{i=1}^n\\\\frac{P_t(i)}{P(i)} \\\\cdot \\\\frac{P(X)}{P_t(X)} \\\\cdot P(i|X) \\\\\\\\\\n & = \\\\frac{P(X)}{P_t(X)} \\\\cdot \\\\sum_{i=1}^n \\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X)\\n\\\\end{align*}\\nThe second equality holds by using equation 3. The third equality holds since $\\\\frac{P(X)}{P_t(X)}$ does not change over $i$.\\n\\nThus, we have\\n\\\\begin{align*}\\n \\\\frac{P(X)}{P_t(X)} & = \\\\frac{1}{\\\\sum_{i=1}^n \\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X)} \\\\\\\\ \\n & = \\\\frac{1}{\\\\sum_{j=1}^n \\\\frac{P_t(j)}{P(j)} \\\\cdot P(j|X)}\\n\\\\end{align*}\\nThe second equality holds since every $i$ has been replaced with $j$. 
We conduct this replacement to avoid confusion with the $i$ used in equation 3 and equation 4.\\n\\nUsing this equation to replace $\\\\frac{P(X)}{P_t(X)}$ in equation 3, we get\\n\\\\begin{align*}\\n P_t(i|X) & = \\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X) \\\\cdot \\\\frac{P(X)}{P_t(X)} \\\\\\\\\\n & = \\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X) \\\\cdot \\\\frac{1}{\\\\sum_{j=1}^n \\\\frac{P_t(j)}{P(j)} \\\\cdot P(j|X)} \\\\\\\\\\n & = \\\\frac{\\\\frac{P_t(i)}{P(i)} \\\\cdot P(i|X)}{\\\\sum_{j=1}^n \\\\frac{P_t(j)}{P(j)} \\\\cdot P(j|X)}\\n\\\\end{align*}\"}",
"{\"title\": \"Review\", \"review\": \"The idea proposed in this paper is to improve classification accuracy by making use of the context.\\nE.g. at the North Pole we will see polar bears but no penguins, while in Antarctica we have no polar bears but many penguins.\\nHence, if we apply our imagenet-like classifier in the wild, we can improve accuracy by taking into account changes in the prior distribution.\\n\\nThe paper proposes a way to rescale the probabilities to do exactly this and reports improved results on modified versions of \\n CIFAR 10 and imagenet with artificial class skew. To achieve this, an additional trick is introduced where the re-scaling is only used when the model is not very certain of its prediction. An additional motivation for this work is that fewer compute resources are needed if the problem is simplified by utilizing class skew. \\n\\nThe core idea of the paper is interesting. However, I am not able to understand what exactly is done and I am 100% confident I cannot re-implement it. The authors already improved upon this in our interactions prior to the review deadline. \\nAn additional issue is that the paper does not have a good baseline. \\nI would not like to dismiss the approach based on its simplicity. An elegant solution is always preferred. However, all the tasks are quite artificial and this limits the \\\"impact\\\" of this work. If a \\\"natural\\\" application/evaluation were shown where this approach is possible, it would strengthen the paper greatly. \\n\\nFor the reasons above I recommend rejection of the manuscript in its current state, but I am confident that many of these issues can be resolved easily, and if this is done I will update the review.\\n\\nMissing information\\n----------------------------\\n- The original manuscript had a lot of information missing, but much of it has since been provided by the authors.\\n- In the static class skew experiment, were two passes over the data needed? Or was the Pt(i) pre-set? 
Would it also be possible to give details about the LR, optimizer, LR schedule, batch size, .... for the transfer learning experiments? This would enhance reproducibility. \\n- For the imagenet experiments, how was Pt(i) set in the (if I assume correctly) static setting?\", \"possible_additional_baselines\": \"-----------------------------------------\\n\\nWe could make a simpler rescaling by changing the prior distribution and assuming everything else remains constant.\\nWhile this is a simplifying assumption, it is very easy to implement and should take only a couple of minutes to run. \\nP(i|x)=1/P(X)*P(X|i)*P(i)\\nPt(i|x)=P(i|x)*Pt(i)/P(i)\\n\\nOne could also introduce another baseline where only the most probable classes are considered. Since this approach is clearly sub-optimal (it guarantees some mis-predictions), it should serve as a lower bound on the performance that is to be expected.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Request for clarification\", \"comment\": \"Dear Authors,\\n\\nCould you please clarify the transition from equation 3 to equation 4. I do not understand how this step is made.\\n\\nIt would be helpful if you could clarify this before I submit the official review.\"}"
]
} |
|
BkgVx3A9Km | A More Globally Accurate Dimensionality Reduction Method Using Triplets | [
"Ehsan Amid",
"Manfred K. Warmuth"
] | We first show that the commonly used dimensionality reduction (DR) methods such as t-SNE and LargeVis
poorly capture the global structure of the data in the low-dimensional embedding. We show this via a number of tests for the DR methods that can be easily applied by any practitioner to the dataset at hand. Surprisingly enough, t-SNE performs the best w.r.t. the commonly used measures that reward local neighborhood accuracy, such as precision-recall, while having the worst performance in our tests for global structure. We then contrast the performance of these two DR methods
against our new method called TriMap. The main idea behind TriMap is to capture higher orders of structure with triplet information (instead of the pairwise information used by t-SNE and LargeVis), and to minimize a robust loss function for satisfying the chosen triplets. We provide compelling experimental evidence on large natural datasets for the clear advantage of the TriMap DR results. Like LargeVis, TriMap is fast and provides comparable runtime on large datasets. | [
"Dimensionality Reduction",
"Visualization",
"Triplets",
"t-SNE",
"LargeVis"
] | https://openreview.net/pdf?id=BkgVx3A9Km | https://openreview.net/forum?id=BkgVx3A9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gNwjx-lV",
"HylVQ3-Gy4",
"SJluAqgMkN",
"SkgJdDNjRQ",
"B1e323mPR7",
"HkgnsCWD07",
"HygXUL-R6m",
"rJgNfUWCpm",
"SkxHCrZAam",
"rJxzobq32m",
"ryeAsHY5hQ",
"HklpbWfWhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544780635860,
1543801884430,
1543797455990,
1543354214852,
1543089331570,
1543081635877,
1542489674952,
1542489612490,
1542489549186,
1541345689585,
1541211557591,
1540591876582
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1068/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1068/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1068/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1068/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1068/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1068/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1068/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1068/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1068/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1068/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1068/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1068/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Dear authors,\\n\\nThe reviewers all appreciated your goal of improving dimensionality reduction techniques. This is a field which does not enjoy the popularity it once did but remains nonetheless important.\\n\\nThey also appreciated the novel loss and the use of triplets to get the global structure.\\n\\nHowever, the paper lacks some guidance. In particular, it oscillates between showing qualitative results (robustness to outliers, \\\"nice\\\" visualizations) and quantitative ones (running time, classification performance). I agree with the reviewers that the quantitative ones should have used the same preprocessing for t-SNE and TriMap (either PCA or no PCA), regardless of the current implementation in software tools.\\n\\nGiven that the quantitative results are not that impressive, may I suggest focusing on the qualitative ones for a resubmission? The robustness of the embeddings to the addition or removal of a few points is definitely interesting and worth further investigation, optionally with a corresponding metric.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A new take on dimensionality reduction which deserves a more guided experimental section\"}",
"{\"title\": \"Comments are vague and unclear\", \"comment\": \"Please note that we carefully addressed all the concerns raised in your initial review. In the revised version, we showed that other DR methods (PHATE, UMAP, and STE) fail at least some of the global tests discussed in our paper. Note that we did not provide thorough comparisons with PHATE and UMAP because these two methods are unpublished work! We believe we have provided a clear problem formulation, a performance comparison with two state-of-the-art methods, t-SNE and LargeVis, and proof of scalability to tens or hundreds of dimensions. Furthermore, we added evidence that each piece in our formulation (loss transformation, triplet weighting, nearest-neighbor triplets, and random triplets) is crucial for our method to work. The only concern not addressed in our work is a \\\"global measure of performance\\\" for DR methods, which actually has not been developed yet; we are not aware of any measure that can quantitatively reflect the performance on our DR tests.\\n\\nWhile we appreciate your feedback very much, your final comments seem rather vague and subjective. We need more concrete and clear comments to be able to improve our paper. We would appreciate it very much if you provided \\\"detailed reasoning\\\" for why you think the revised version deserves a lower score than the initial submission. While we respect your decision, lowering the score after we significantly improved the comparisons and added clarifications to the revised version while addressing all your concerns in the initial review is rather unfair.\"}",
"{\"title\": \"Insufficiently complete revision\", \"comment\": \"The proposed method appears promising and seems to work well on the examples shown in the paper. However, the authors' additional comparisons to competing methods are very cursory and do not substantially demonstrate Trimap's superiority to the existing suite of dimensionality reduction methods currently available. The authors do not provide sufficient discussion to justify the formulation of Trimap or to provide a theoretical basis for its performance over other methods.\\n\\nWhile I believe the algorithm is a valuable contribution, the presentation of the algorithm in this manuscript is not sufficient for its publication. The authors would do well to revise the manuscript to give a clearer justification, including a conclusive suite of benchmark datasets. Additionally, further evidence for the value of Trimap could be provided by an analysis of its scalability to tens or hundreds of dimensions for use as a general dimensionality reduction method, as is done with many methods like PCA and diffusion maps, but is not possible with t-SNE. This would make Trimap useful for applications such as clustering, rather than only for visualization.\\n\\nIn light of this cursory revision of the manuscript, I have updated my recommendation from a 6 to a 5.\"}",
"{\"title\": \"A Gentle Reminder\", \"comment\": \"Thanks again to all reviewers.\\nThis is just a gentle reminder (to Reviewer 1 and 3) to possibly give us feedback on our new additions.\"}",
"{\"title\": \"Further Discussion\", \"comment\": \"Thank you for acknowledging our response.\\n\\n1) Your concern about the global measures of DR performance is absolutely valid. We are not aware of any global measure of DR method that can reflect the properties discussed in our tests. However, we are actively working on developing such measures that are tractable for large datasets. For instance, a direct generalization of precision-recall (which are local measures of DR) to global measures would be to consider the \\\"farthest-away remoteness\\\" of each point. That is, for each point i, (instead of nearest-neighbors) consider the farthest-away points from i in the high-dimensional space and the low-dimensional embedding. The ratios (true farthest points recovered/total points recovered) and (true farthest points recovered/total true farthest points) would be direct generalizations of precision and recall, respectively. However, while calculating nearest-neighbors is relatively cheap, finding farthest-away points in high-dimension is extremely inefficient and cannot be used for datasets of our scale. We leave the development of methods for (approximately) calculating such measures on large datasets for future work.\\n \\n2) We checked different implementations of t-SNE. There seems to be some inconsistency about applying PCA in the official implementation of t-SNE (https://lvdmaaten.github.io/tsne/) and the sklearn implementation (https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html). We will update the runtime of t-SNE and LargeVis in Table 1 and 2 and include the runtime before and after applying PCA down to 100 dimensions.\"}",
"{\"title\": \"Quantitative definition about global properties and detailed comparisons to t-SNE missing\", \"comment\": \"Thanks for the authors' detailed explanations and experiments addressing the raised concerns. After reading the authors' rebuttal, I still have two major concerns that I believe to be highly important:\\n\\n1. TriMap produces good visualizations without sacrificing global properties too much. However, no formal quantitative definitions or evaluations are provided, which is the key to supporting the claimed advantages of the proposed method. Without these quantitative or theoretical analyses, users will be very reluctant to choose such a method lying between other methods. If the users' target is to identify outliers while preserving global properties, they will choose other embedding methods that focus directly on anomaly detection (e.g., recent methods based on deep autoencoders). \\n\\n2. Comparing to the default implementation of t-SNE in sklearn is highly biased. Before applying t-SNE with tree accelerations, PCA should also be applied to the original data to obtain reasonably low-dimensional (e.g., 100-d) input data to t-SNE, through which the running time of tree-accelerated t-SNE will be significantly reduced. The comparisons to tree-accelerated t-SNE in this paper are biased.\\n\\nTherefore, items 3, 6, and 7 in my original review should be seriously addressed.\\n\\nAfter reading the rebuttal, I revised my rating from 5 to 6.\"}",
"{\"title\": \"Detailed Comments\", \"comment\": \"Reviewer 1:\\n\\nThank you for raising the concerns about sampling triplets. We hope that the additional experimental results given in Appendix C address some of your questions. In addition, although the triplets are formed using the pairwise distances, they simultaneously take into account the relative similarities (or distances) of three points. Therefore they are in some sense more informative than simple pairwise constraints. This has been discussed before in the semi-supervised clustering context: e.g. in \\\"A kernel-learning approach to semi-supervised clustering with relative distance comparisons\\\" by Amid et al. 2015. They showed experimentally that a smaller number of triplet constraints improves the performance of clustering significantly more than pairwise constraints.\", \"reviewer_2\": \"Although the local properties of the low-dimensional embeddings are highly valuable, we claim that the global properties are much more important in certain scenarios. For instance, in medical diagnosis, the relative closeness of the clusters of points or the detection of bad outliers (e.g. cancerous vs. non-cancerous cells) is highly critical. Our paper brings out this issue and provides a solution for improving the global accuracy. We are hoping that our paper will initiate new research on DR methods that focus on global properties.\", \"reviewer_3\": \"Thank you for your suggestion on detecting and removing the outliers. We are planning to add the \\\"zoom feature\\\" to our official release of the code. We would like to enable enhancing the visualization of sub-regions of the embedding by simply zooming into the region and re-evaluating the algorithm on the subset of the points. 
Since TriMap is rather insensitive to removing portions of data (experimental evidence in Figure 1.2.1a and 1.2.1b), this feature would fit perfectly with the method.\\n\\nThe dotted line between the clusters of '0's and '6's is drawn mainly to emphasize that the relative distance between these two clusters (and the rest) before and after removing the odd digits remains almost the same for our method but changes significantly when using LargeVis (Figure 5.\\star and Figure 2.1.b). This dotted line was unnecessary for Figure 1.2.1a and therefore we removed it in the updated version.\"}",
"{\"title\": \"Common Concerns (Continued)\", \"comment\": \"*** Runtime and Computational Complexity ***\\n\\nThank you for noticing the error in the runtime given in Table 1. We corrected the runtime on the Fashion MNIST dataset. Note that we used the sklearn implementation of t-SNE on a 2.6 GHz Intel Core i5 machine with 16 GB memory. We used the same machine for running LargeVis and TriMap and we are certain that the results are accurate. One major point we noticed during our experiments was that LargeVis performs very slowly on: 1) small datasets (see Table 2), and 2) datasets with a large number of dimensions (Table 1, TV News). The former could be an implementation issue while the latter is due to the random projection step for nearest neighbor search. Note that we perform PCA as a step in our algorithm to accelerate the nearest neighbor search (also used in t-SNE; please see the original paper and the sklearn implementation details). The error induced by the PCA step is negligible compared to the error of mapping from high-dimension to 2-D or 3-D. We can also accelerate LargeVis on high-dimensional datasets by performing the PCA step (although it is not part of the original algorithm). However, the performance on small datasets still remains slow.\\n\\nThank you also for correcting our statement about our computational complexity. The nearest neighbor search is indeed the bottleneck of most of the algorithms (t-SNE, LargeVis, UMAP, TriMap, etc.). In some sense, all these algorithms have at least O(n log n) complexity due to the NN search. However, notice that after the nearest neighbor search step (which is done once), (Barnes-Hut) t-SNE still has O(n log n) complexity while our method scales linearly. We have already corrected the complexity discussion in the current version. We are actively working on improving the runtime of TriMap. 
Our goal is to achieve a runtime as fast as UMAP.\\n\\n*** Effect of Weights and Different Triplets ***\\n\\nWe added extra experimental results to show the effect of adding weights to the triplets as well as the effect of different types of triplets (nearest-neighbor triplets vs. randomly generated triplets). The results are given in Figures 6, 7, and 8. In conclusion, assigning weights to the triplets is a crucial piece in obtaining good low-dimensional embeddings of the data. Without adding weights, we may need a much larger number of triplets to achieve performance similar to the weighted case. Additionally, we show that using a larger number of nearest neighbors for forming triplets provides more information about the local structure of the data, whereas the global structure is explained by a smaller number\\nof distant points. Also, a small number of randomly generated triplets improves the global structure of the embedding.\"}",
"{\"title\": \"Common Concerns\", \"comment\": \"Thank you for your thorough reviews.\\nWe were able to significantly extend our experimental evaluation based on your suggestions.\", \"we_first_address_the_common_concerns_raised_by_reviewers\": \"*** Comparison to Stochastic Triplet Embedding (van der Maaten and Weinberger, 2012) ***\\nIn Appendix A, we show that STE (also t-STE) is a special case of our method where the parameters are chosen sub-optimally (in the sense that the loss is not robust) and the triplet weights are set to one. We also added results of the DR tests on t-STE in Appendix B. Also, the result in Figure 3 with (t = 1, t' = 2) corresponds to t-STE with the addition of triplet weights. Overall, it is evident from the experiments that t-STE is inferior and provides poor results by introducing\\nfactitious outliers. Additionally, t-STE fails to reveal the true outliers (see Figure 5.2.2).\\n\\n*** Quantitative Measures of Local and Global Performance ***\\n\\nThe main reason for including AUC (which is a local measure of DR performance) is to show that local measures do NOT reflect the global properties of the embedding. This is discussed in our tests for global accuracy in Section 2 of the paper. For instance, in Figure 1.2.3, PCA clearly separates the two copies of MNIST, but has a very low AUC score. We also included the NN classification accuracy in our results and (after fixing a minor bug in our code) updated the AUC scores. The (AUC, NN-Accuracy) values are shown at the bottom of each plot in Figures 1 and 5. Again, NN-accuracy fails to reflect these global properties. \\n\\nWe are not aware of any global measure of DR performance that can reflect the properties discussed in our tests. We leave the development of such measures that are tractable for large datasets for future work. We also added DR test results produced by PHATE (Moon et al., 2017) and UMAP (McInnes and Healy, 2018). These two methods also fail at least some of the DR tests. 
We also performed experiments using Diffusion Map (Coifman and Lafon, 2006). The Diffusion Map method is incredibly slow and we were only able to calculate the results for a subset of 10,000 points from MNIST. Overall, the method has good global properties and is able to detect the artificial outlier and the two copies. However, the quality of the embeddings is much inferior compared to other methods (results available anonymously at: https://goo.gl/bGJqSD). We were not able to perform the tests on Parametric t-distributed Stochastic Exemplar Centered Embedding (Min et al., 2018). The method requires careful implementation of a neural network and this was not feasible to try in the given timeframe. However, the method is heavily motivated by the pt-SNE (van der Maaten, 2009) method and thus, we conjecture that it also may not be able to reflect the global properties. Overall, we are not aware of any competitor method (with comparable runtime and quality of embedding) that can reflect all the global properties discussed in our paper.\\n\\nWe also added experimental results on three datasets with underlying low-dimensional manifolds, namely, 3-D Sphere, Swissroll, and ISOMAP Faces (Figures 9 and 10). TriMap again provides globally accurate results while preserving the continuity of the underlying manifolds. Thus, although the local measures for TriMap are not as high as those of t-SNE, it still provides locally accurate results.\"}",
"{\"title\": \"Novel loss function but experiments are lacking\", \"review\": \"Motivated by the observation that most previous dimensionality reduction methods focus on preserving\\nlocal pairwise neighboring probabilities and fall short in preserving global properties, this paper proposes a \\nmethod called TriMap to optimize a loss function preserving similarities among triplets of data points. A large \\nnumber of triplets are sampled either based on nearest neighbor calculations or random sampling. Experimental \\nresults on several datasets show that TriMap identifies outliers and preserves global data properties better \\nthan previous approaches based on pairwise data point comparisons.\", \"major\": \"The idea in this paper is well motivated and the loss function based on probability ratios is novel. However, \\nthere are some major concerns about the method analyses and experimental evaluations:\\n\\n1. Data embedding based on triplets has been presented in (van der Maaten and Weinberger, 2012). The authors \\nneed to present detailed explanations and a formal analysis of why the proposed method significantly outperforms the \\nprevious one. A recent dimensionality reduction method compares data points only to data cluster centers (Parametric \\nt-distributed stochastic exemplar centered embedding, Min et al., 2018); does it preserve global data properties? Does \\nits trivial combination with standard t-SNE preserve both local and global data properties well?\\n\\n2. Preserving local pairwise neighborhood structure is often the most important part in high-dimensional data \\nvisualization, because only local similarities can be confidently trusted in a high-dimensional space. Even if preserving \\nglobal data properties is important, the very local neighborhood structure should also be preserved. However, the \\nproposed method TriMap is significantly worse than t-SNE according to AUC under the precision-recall curve. \\n\\n3. 
Standard quantitative evaluations based on 1NN error rate and quality scores (van der Maaten & Hinton 2008, Min \\net al. 2018) should be added to the experiments. For preserving global data properties, quantitative evaluations on all \\nthe datasets will make the experiments much more convincing.\\n\\n4. In the abstract, the claim that TriMap scales linearly is inaccurate; the triplet sampling requires nearest neighbor \\ncalculations, which have computational complexity of at least O(n log n).\\n\\n5. This paper proposed two variants of triplet sampling, nearest neighbor triplets and random triplets. Detailed experimental \\ncomparisons between them should be provided in the paper.\\n\\n\\n6. The running time comparisons in Table 1 must be wrong or highly biased due to an improper hyperparameter setting. Based \\non tree accelerations, t-SNE can produce impressive visualization on MNIST-scale datasets within 15 minutes (please \\ncheck the experimental details on pp. 3235-3238 in van der Maaten, Journal of Machine Learning Research 2014).\\n\\n7. The authors mentioned partial observation, outliers and subclusters in the global information, but the authors do not rigorously define \\nwhat the global information should be, and the paper does not theoretically prove or explain via experiments how the global \\ninformation is kept by TriMap.\\n\\n8. In the experiments, the authors applied PCA before TriMap to reduce the dimensionality while PCA is not applied in t-SNE and LargeVis. The authors do not explain why the settings are different in the three methods.\", \"minor\": \"9. In the algorithm, the authors show different equations for different t and t\\u2019, but they are not evaluated in the experiments.\\n\\n(After reading the rebuttal, I raised the rating from 5 to 6.)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"TriMap needs more comparison and validation\", \"review\": \"In this paper, the authors present a novel dimensionality reduction method named TriMap. TriMap attempts to improve upon the widely-adopted t-SNE algorithm by incorporating global distances through the use of triplets, rather than pairwise comparisons. The authors compare to t-SNE, as well as a newer method called LargeVis which also claims to impose a global distance metric. The authors show that their method is more robust to the addition or removal of clusters and outliers and provides a more meaningful global distance relative to the methods against which they compare.\\n\\nTechnical Quality\\nThe authors\\u2019 method is clear and well described and addresses a pressing issue in dimensionality reduction. However, the authors fail to compare their method to a number of relevant dimensionality reduction algorithms which also claim to provide solutions with globally meaningful distances. Such methods include force-directed graph drawing (Fruchterman & Reingold, 1991), diffusion maps (Coifman & Lafon, 2006) and PHATE (Moon et al., 2017). \\n\\nAdditionally, the handling of outliers is a concern. While the authors claim that the retention of outliers as disconnected from the manifold is a desirable quality of their technique, the presence of many outliers in a dataset (for example, in the Tabula Muris and lyrics datasets) has the potential to mask the interesting portion of the dimensionality reduction. It may be worth commenting on the desirability of identifying and removing outliers, and the provision of such a technique in the software upon its release.\\n\\nFinally, the runtime comparison is of concern. It is common to perform most DR methods on a high-dimensional PCA representation of the data, particularly in single-cell genomics (e.g. the Tabula Muris dataset in Part 3.) 
In this context, both UMAP and PHATE successfully embed the Tabula Muris dataset in less than the reported TriMap time (3.5 and 5 minutes respectively, compared to 15 minutes reported for TriMap.)\\n\\nNovelty\\nThe authors\\u2019 method appears to provide improved results over the compared alternatives, however, it is worth noting that triplet-based embedding is not novel in its own right (van der Maaten & Weinberger, 2012), though one could argue novelty is warranted here due to claimed substantial improvements of results. In this case, the authors should include a comparison to competing triplet-based methods, at least in the appendix. \\n\\nPotential impact\\nThe authors\\u2019 method has the potential to be used widely across many fields, as a direct replacement for t-SNE. Its adoption is contingent on compelling evidence that it produces results substantially better than UMAP (which is currently heralded as an upcoming replacement for t-SNE in some fields) and other competing methods. The authors may find it worthwhile to provide such comparisons, if not in the main body of the paper at least in the appendix. \\n\\nClarity\\nThe paper is easy to read and makes its point in a reasonably concise manner. Detailed explanation of experiments v) and vi) could be relegated to the appendix. 
The authors\\u2019 tests in Part 2 could be stated more precisely by quantifying their results; it is not clear what the authors seek to achieve by drawing the dotted lines between clusters in Figure 1a, or by providing AUC values in Figure 1.\\n\\nDetailed Comments\\n\\u2022\\tIn the definition of Equation 2, it is not until one paragraph later that q_{ij}^{~(t\\u2019)} is defined \\u2013 this is confusing and hard to read.\\n\\u2022\\tThe captions for Figures 1 and 3 would be substantially clearer with more detail on the dataset analyzed and in Figure 1, some discussion of the purpose of each subplot.\\n\\u2022\\tThe Figure 3 caption needs a semicolon or period before introducing the bottom panel.\\n\\u2022\\tThe claim that the authors\\u2019 heuristic triplet sampling (nearest-neighbor and random sampling) is sufficient to approximate full triplet sampling should be shown in the appendix.\\n\\u2022\\tThe collaboration network analyzed in Part 3 is naturally a graph; it would make sense to cluster and visualize this using a graph-based clustering, rather than coercing it to Euclidean coordinates.\\n\\n(Note: after reading the revised manuscript I have changed my recommendation from a 6 to a 5)\\n\\nReferences\\nCoifman, R. R., & Lafon, S. (2006). Diffusion maps. Applied and computational harmonic analysis, 21(1), 5-30. https://doi.org/10.1016/j.acha.2006.04.006\\nMoon, K. R., van Dijk, D., Wang, Z., Burkhardt, D., Chen, W., van den Elzen, A., ... & Krishnaswamy, S. (2017). Visualizing transitions and structure for high dimensional data exploration. bioRxiv, 120378. https://doi.org/10.1101/120378\\nFruchterman, T. M., & Reingold, E. M. (1991). Graph drawing by force\\u2010directed placement. Software: Practice and experience, 21(11), 1129-1164. https://doi.org/10.1002/spe.4380211102\\nL. van der Maaten and K. Weinberger. Stochastic triplet embedding. 
In 2012 IEEE International Workshop on Machine Learning for Signal Processing, pp. 1\\u20136, Sept 2012. doi: 10.1109/MLSP.2012.6349720.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A More Globally Accurate Dimensionality Reduction Method Using Triplets\", \"review\": \"The authors propose a new method called TriMap, which captures higher orders of structure with triplet information, and minimizes a robust loss function for satisfying the chosen triplets.\\n \\nThe proposed method is motivated by the observation that selecting a dimensionality reduction method based on local measurements can be misleading. The authors then resort to an evaluation based on visual cues derived from a number of transformations, and claim that any DR method preserving the global structure of the data should be able to handle these transformations. An example on MNIST data illustrates these properties, but it is still not clear what visual cues serve as the criterion for selecting a good DR method, or what the global structures are.\\n \\nThe authors discussed the results in Figure 4 for six real-world datasets, but there is no convincing evidence from the corresponding domains or related research to support the global structure in the learned embedding space. It would be good to add some convincing evidence for the conclusion.\\n \\nAs the method highly depends on the subset of sampled triplets, it is interesting to see how the global structure changes if a different set of triplets is used. In addition, it is unclear why sampled triplets can capture the global structure of the data better than pairwise relations. From the experiments, triplets are also sampled according to the pairwise nearest neighbor graph.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
S1x4ghC9tQ | Temporal Difference Variational Auto-Encoder | [
"Karol Gregor",
"George Papamakarios",
"Frederic Besse",
"Lars Buesing",
"Theophane Weber"
] | To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty on the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning. | [
"generative models",
"variational auto-encoders",
"state space models",
"temporal difference learning"
] | https://openreview.net/pdf?id=S1x4ghC9tQ | https://openreview.net/forum?id=S1x4ghC9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJeFv1wBm4",
"HkgMMeVZxN",
"BJxUrnv_AQ",
"SyxSfnP_0Q",
"BJgAOovOC7",
"rkeaQHUJam",
"rJeR1S-ThQ",
"BkgnEnawnm"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1548214112605,
1544794121837,
1543171133588,
1543171085239,
1543170933601,
1541526821395,
1541375205526,
1541033011572
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1067/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1067/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1067/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1067/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1067/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1067/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1067/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for your great work.\\n\\nI have one question about the DeepMind Lab experiments in this paper.\\nIn Appendix D, you mentioned that p_D(x_{t_2}) is a Bernoulli distribution and the log-likelihood is calculated using the logits outputted by the network in the MNIST experiments.\\nIs it the same in the DeepMind Lab experiments?\\nI think a Normal distribution with a fixed variance is often used as the decoder distribution in such color image generation, so if you used a different setting for the DeepMind Lab experiments, I hope the setting is clearly written in the paper.\\n\\nThank you.\", \"title\": \"About DeepMind Lab experiments\"}",
"{\"metareview\": \"The reviewers agree that this is a novel paper with a convincing evaluation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Original paper\"}",
"{\"title\": \"Re:\", \"comment\": \"Thank you for your review and comments. We clarified our intuitive derivation of the loss in section A. It is indeed difficult to compare the jumpy TD-VAE model to other models, as there is little work that studies such models. We updated the appendix to explain how a model similar to jumpy TD-VAE provides an approximate ELBO to the \\u2018jumpy\\u2019 log likelihood log p(x_{t_1}, x_{t_2}, .. x_{t_n}). As for comparison to published models, we did compare the sequential TD-VAE ELBO on the simple mini-pacman dataset to classical state-space models; we also compared the belief state obtained by training a TD-VAE on the oscillator network to a more classical LSTM in a recurrent classification setup. Following this line of thinking, we believe an appropriate way to compare similar models will be through the comparison of the different belief states they learn. We highlighted this in the text.\"}",
"{\"title\": \"Re:\", \"comment\": \"Thank you for the review and comments.\\nThanks for the suggestion - we added the missing experimental details, network specifications and hyperparameters in the appendix. \\nYou are correct that q(z_{t-1}|z_t, b_{t-1}, b_t) does not need to depend on b_{t-1}, but it does not hurt to do so; we chose to do so in order to further facilitate the learning of b_{t-1}, but it may not have affected the experiments.\\nIf the model does not take the jump interval as input, the model has to represent the jump size by way of a multimodal distribution over possible future events. One could imagine that one of the latent variables could be learned to correspond to dt.\"}",
"{\"title\": \"Re:\", \"comment\": \"Thank you for your thoughtful review and comments.\\n\\nThanks for noticing the typo - we will fix it.\\nRegarding the exposure bias - TD-VAE may indeed reduce exposure bias by generating faraway futures in fewer steps of generation. But we have not explicitly investigated that issue in the paper.\\nRegarding the distribution of (t_2-t_1), for the noisy harmonic oscillator experiment we use a mixture of two uniform distributions, one with support [1,T], the second with support [1,T\\u2019], with T\\u2019>T. Since shorter time steps are easy to model, this served as a form of \\u2018curriculum\\u2019 for the jumpy model; this enables us to learn the state representation, which in turn facilitates learning the \\u2018jumpier\\u2019 transitions from [1,T\\u2019]. We clarify this in the text. It is indeed likely that weighting [1,T'] more heavily would improve the jumpier prediction.\\nMore general strategies could be adopted, for instance choosing jump sizes which make the jump easy to predict (as is suggested in Neitz et al. and Jayaraman et al.), or hard to predict (a form of prioritized replay for model learning), or any other criterion. We reserve the investigation of which scheme leads to the best model for future work.\\n\\nAs for code, we will aim to release a simplified version of the code in the future.\"}",
"{\"title\": \"Nice and novel idea\", \"review\": \"This paper proposes the temporal difference variational auto-encoder framework, a sequential generative model following the intuition of temporal difference learning in reinforcement learning. The idea is nice and novel, and I vote for acceptance.\\n1. The introduction of the belief state in the sequential model is smart. Incorporating such a technique into an autoregressive model is not easy.\\n2. Fig 1 clearly explained the VAE process.\\n3. Four experiments demonstrated the main advantages of the proposed framework, including the effectiveness of the proposed belief state construction and the ability to perform jumpy roll-outs.\", \"other_comments_and_questions\": \"1. Typo, p(s_{t_2}|s_{t_1}) in the caption of Fig 1.\\n2. Can this framework partially solve the exposure bias problem?\\n3. The authors used a uniform distribution for t_2 - t_1, and from the ``NOISY HARMONIC OSCILLATOR`` experiment we can indeed see that a larger interval results in worse performance. However, the authors also mentioned that other distributions could be investigated, so I am wondering what the performance will become if larger probability mass is put on larger dt.\\n4. The code should be released. I think that it is a fundamental framework deserving further development by other researchers.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very strong\", \"review\": [\"The authors propose TD-VAE to solve an important problem in agent learning, simulating the future by doing jumpy roll-outs in abstract states with uncertainty. The authors first formulate the sequential TD-VAE and then generalize it for jumpy roll-outs. The proposed method is well evaluated on four tasks, including a complex high-dimensional task.\", \"Pros.\", \"Advancing a significant problem\", \"Principled and quite original modeling based on variational inference\", \"Rigorous experiments including complex high dimensional experiments\", \"Clear and intuitive explanation (but can be improved further)\", \"Cons.\", \"Some details on the experiments are missing (due to the page limit). It would be great to include these in the Appendix.\", \"It is a complex model. For reproducibility, a detailed specification of the hyperparameters and architecture would be helpful.\", \"Minor comments\", \"Why does q(z_{t-1}|z_t, b_{t-1}, b_t) depend on both b_{t-1} and b_t, and not only b_t?\", \"The original model does not take the jump interval as input. It is then not clear how the jump interval is determined in p(z\\u2019|z).\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"TD-VAE\", \"review\": \"There are several ingredients in this paper that I really liked. For example, (1) the notion that an agent should build a deterministic function of the past which implicitly captures the belief (the uncertainty or probability distribution about the state), as opposed, for example, to sampling trajectories to capture uncertainty, (2) modelling the world's dynamics in a learned encoded state-space (as opposed to the sensor space), (3) instead of modelling next-step probabilities p(z(t+1)|z(t)), modelling 'jumpy transitions' p(z(t+delta)|z(t)) to avoid unrolling at the finest time scale.\", \"now_for_the_weak_points\": \"(a) the justification for the training loss was not completely clear to me, although I can see that it has a variational flavor\\n(b) there is no discussion of the issue that we can't get a straightforward decomposition of the joint probability over the data sequence according to next-step probabilities via the chain rule of probabilities, so we don't have a clear way to compare the TD-VAE models with jumpy predictions against other more traditional models\\n(c) none of the experiments make comparisons against previously published models and quantitative results (admittedly because of (b) this may not be easy).\\n\\nSo I believe that the authors are onto a great direction of investigation, but the execution of the paper could be improved.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
rye4g3AqFm | Deep learning generalizes because the parameter-function map is biased towards simple functions | [
"Guillermo Valle-Perez",
"Chico Q. Camargo",
"Ard A. Louis"
] | Deep neural networks (DNNs) generalize remarkably well without explicit regularization even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus for the fundamental reason why DNNs do not strongly overfit. In this paper, we provide a new explanation. By applying a very general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs should be exponentially biased towards simple functions. We then provide clear evidence for this strong simplicity bias in a model DNN for Boolean functions, as well as in much larger fully connected and convolutional networks trained on CIFAR10 and MNIST.
As the target functions in many real problems are expected to be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real world problems.
This picture also facilitates a novel PAC-Bayes approach where the prior is taken over the DNN input-output function space, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the zero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight generalization PAC-Bayes error bounds which correlate well with the true error on realistic datasets such as MNIST and CIFAR10 and for architectures including convolutional and fully connected networks. | [
"generalization",
"deep learning theory",
"PAC-Bayes",
"Gaussian processes",
"parameter-function map",
"simplicity bias"
] | https://openreview.net/pdf?id=rye4g3AqFm | https://openreview.net/forum?id=rye4g3AqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1x060syx4",
"ryxmoPzVJV",
"ByxxENLl1V",
"rJxEXPUn0X",
"H1e4AB830m",
"S1xnqB82Cm",
"rJlbFXU20m",
"Skxmk-LhAX",
"Hkx48e8nA7",
"SyxXKMG937",
"SkgdeIbqh7",
"BJlNf4D827"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544695493904,
1543935899045,
1543689256446,
1543427867931,
1543427531863,
1543427475812,
1543426936940,
1543426266737,
1543426123636,
1541182074904,
1541178863536,
1540940812103
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1066/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1066/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1066/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1066/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Dear authors,\\n\\nThere was some disagreement among reviewers on the significance of your results, in particular because of the limited experimental section.\\n\\nDespite this issue, which is not minor, your work adds yet another piece of the generalization puzzle. However, I would encourage the authors to make sure they do not oversell their results, either in the title or in their text, for the final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting addition to the deep learning theory literature\"}",
"{\"title\": \"Probability versus complexity for our CNN with over 10^6 parameters\", \"comment\": \"We thank the referee for getting back to us. The referee mentions that our Fig 1 probability vs. complexity plot is on a relatively small network with 2001 parameters.\\n\\nIf it helps, one can also infer probability-complexity plots from our much larger FCN and CNN networks. We calculate PAC-Bayes bounds for increasing amounts of label corruption. From Fig 3 (a) for the CNN with 1,451,001 parameters for MNIST or fMNIST, the generalization error bound varies from 0.2 for no label corruption, to 0.8 for full label corruption. Increasing label corruption leads to increased complexity. At the same time, P(U) drops from roughly e^(-2000) for no corruption to roughly e^(-8000) for full corruption, so in this case, the probability decreases by 2600 orders of magnitude to explain the decrease in generalization accuracy. P(U) is not the same as P(f) of course, but they should correlate (P(U) is proportional to the average P(f)). Overall, this shows that P(f) probably correlates with function complexity for these bigger networks too.\\n\\nWe are now working on using complexity measures applicable to these larger networks (e.g. based on neural network compression techniques), and correlating that with the probability of the function.\"}",
"{\"title\": \"read and considered\", \"comment\": \"I have read and considered the authors' response, and do not wish to change my rating of the paper.\"}",
"{\"title\": \"Other papers on SGD that are relevant\", \"comment\": \"__Other papers on SGD that are relevant__\\n\\nWe also include in the introduction a brief description of a related argument by (Wu et al. 2017 https://arxiv.org/abs/1706.10239 ) who found that normal gradient descent (GD) and SGD gave similar results for several architectures on MNIST. In their paper they find a wide range of generalization performance that correlates very well with a measure of local flatness. They point out that flatness correlates with basin volume V (a concept from optimisation on landscapes) and since the volume varies a lot they argue that both GD and SGD, at least to first order, find basin volumes V that are large, and so find similar generalization performance. In our work we directly calculate the volume via P(f), which is proportional to the volume of parameter space that generates function f. This volume is also correlated with the basin volume V, and we make a similar qualitative argument, namely that the very large bias in the parameter-function map is likely to be the first order driver of what solutions SGD finds. \\n\\n(As an aside, we note that while the concepts of flatness are vulnerable for example to reparameterization, and moreover are local, P(f) is a global property)\\n\\nWe are planning a longer paper on the complex question of how SGD samples parameters. It may nevertheless be helpful to include some further discussion of the literature here.\\n\\n So far there have not been many works that directly study how SGD samples parameters. Nevertheless, there are interesting indirect suggestions in the literature that are worth exploring. \\nFor example, in a study comparing GPs and direct SGD on a CNN and an FCN, all trained on CIFAR10, (see Figure 4 (b) of Nowak et al. (2018) https://arxiv.org/pdf/1810.05148.pdf ) the authors show that the GPs and SGD are very close in generalization accuracy once enough channels are included in the CNN. 
Since the Gaussian processes assume a Gaussian prior on parameters, and then weight according to the likelihood on the training data, they are essentially sampling the parameter-function map in the same way that we are assuming is happening for the PAC-Bayes bounds. Thus these results are highly suggestive of SGD also sampling parameters in way that may approach direct uniform sampling of parameters.\\n These results have been further discussed in Matthews et al (2018) https://arxiv.org/pdf/1804.11271.pdf where on p3 the authors write \\u201cLee et al.(2018) compare finite neural networks trained with stochastic gradient descent (SGD) to Gaussian processes instead. The latter comparison to SGD is suggestive that this optimization method mimics Bayesian inference \\u2013 an idea that has been receiving increasing attention (Welling and Teh, 2011; Mandt et al., 2017; Smith and Le, 2018).\\u201d\\n\\nHowever, there are many subtleties, also shown in Nowak et al. (2018). See e.g. their tables 1 and 2. Overall there is good agreement for the generalisation performance of GPs and more standard CNNs and FCNs. However, by careful hyperparameter tuning, including pooling, by having the SGD underfit the training data, or by combining high learning rates with ReLU non-linearities, they do obtain better results than for GPs for the CNNs. These results show that there is more to understand about how SGD samples parameters. It would be interesting to see how much better the GPs work if pooling or underfitting training data is included. It might be, for example, that since underfitting training data constrains the solutions less, that lower complexity functions are found. In the PAC-Bayes language this leads to a larger P(U), which may sometimes compensate for the increase in training error.\\n\\nThere are also other examples of where SGD differs from Bayesian inference. For example experiments in Mandt et al. 
(2017) https://arxiv.org/abs/1704.04289 find that the SGD posterior differs from the Gibbs posterior. But in order to do this analysis, the authors needed to use very low dimensional spaces, which may be under-representative of high-dimensional parameter spaces of neural networks. \\n\\nFinally, we are well aware of a significant stream of the literature which conjectures that certain properties SGD are the dominant source of generalisation in DNNs. While SGD is unquestionably very important for optimisation, the fact that other optimisation methods lead to similar generalisation (see e.g. some of the discussion in our paper) suggest that SGD is not the main source of generalization.\\n\\nIn summary, while more work needs to be done, we believe that we are justified in our conjecture that SGD samples the highly biased parameter-function map close enough to the way Bayesian sampling would sample functions that we can apply PAC-Bayes bounds to simple SGD. If we are wrong, then there must be some interesting cancellation of errors, as our bounds work remarkably well.\"}",
"{\"title\": \"Answer to reviewer (part 2)\", \"comment\": \"__PAC-Bayes bounds__\\n\\nThe reviewer says that \\u201cThey bounds are loose, but not vacuous\\u201d -- To our knowledge, these are the first non-vacuous bounds for DNNs that follow the same trends as the generalization error. We think this is a pretty big deal. Most other bounds that follow trends are typically orders of magnitude larger than 1. So we wouldn\\u2019t call this loose compared to the state of the field. \\n\\n__Early stopping__\\n\\nThe reviewer says\\n\\u201cIn all of their experiments, they stop training when the training accuracy reaches 100%, where papers like https://arxiv.org/pdf/1706.08947.pdf have found that continuing training past this point further improves test Accuracy.\\u201d\\n\\nWhile the paper cited is interesting, it mainly argues that certain bounds become better when training beyond zero training error, they don\\u2019t show that this holds for the true generalization error. Moreover, the bounds they use are vacuous (>>1). However there are papers that do directly discuss the generalization gain of longer training including https://arxiv.org/abs/1710.10345 and https://arxiv.org/pdf/1705.08741.pdf . The first paper only concerns itself with full batch gradient descent, not SGD. In both cases, the benefit of longer training is only a few percent improvement in generalization error. There are many similar techniques that add a few percent to the generalization performance. As explained in other responses above, we are not primarily writing about these small gains.\\n\\n__Realistic architectures__\\nThe reviewer complains that \\n\\u201c The experiments all use architectures that are quite dissimilar to what is commonly used in practice, and achieve much worse accuracy, so that a reader is concerned that the results differ qualitatively in other respects.\\u201d\\n\\nWe disagree. We use FC and CNN networks that are similar to those used in practice. 
It is true that the CNNs we use don\\u2019t have max-pooling, which is probably the main reason why their performance on CIFAR10 is less than state of the art. We plan to extend our analysis to networks with pooling in future work. Moreover, for MNIST the performance is much closer to state of the art. Of course we could push our results closer to the state of the art, but we don\\u2019t think this is necessary to make the main points of our paper. \\n\\n__SGD (and Soudry, et al.)__\\n\\nWe disagree that Soudry et al. is inconsistent with our work. See for example our discussion of SGD in the responses to referee 1 and in our general response. Generally, results suggesting that \\u201coptimization algorithms\\u201d are important in papers like that of Soudry, et al. are consistent with our work. When studying properties of any optimization algorithm like SGD, the parameter-function map plays a role. The better way to look at this is not as a mutually exclusive alternative, but as a new perspective that could shed light on old and new results. The perspective being that understanding properties of the parameter-function map can explain the observed behavior of a wide class of neural network training algorithms.\\n More specifically here, Soudry et al. look at full gradient descent, rather than stochastic gradient descent; it is not yet clear if the results would carry through to SGD.\\n\\n__Others__\\n\\nThe networks used for Table 1 are the same as in Figure 2, so the CNN has 4 layers and the FC has 1 layer; we have updated the caption to reflect this.\"}",
"{\"title\": \"Answer to reviewer (part 1)\", \"comment\": \"We thank Reviewer 3 for the constructive comments and feedback.\\n\\n__not surprising?__\\nThe reviewer\\u2019s title says \\u201cnot surprising\\u201d and in the text they write \\u201cI do not find it surprising that randomly sampling parameters of deep networks leads to simple functions.\\u201d\\n\\nWe are happy that the reviewer does not find this surprising. Be that as it may, to our knowledge we are the first to directly measure the parameter-function map for a DNN, showing simplicity bias over many orders of magnitude. We demonstrate that the parameter-function map obeys the conditions that allow the simplicity bias bound to hold, which for biased maps, gives an exponential drop in probability with a linear increase in descriptional complexity. (Note that it is not hard to see that many other machine learning methods don\\u2019t satisfy these necessary conditions, and therefore overfit when there are more parameters than data). We then show that this simplicity bias provides the implicit regularization that explains the remarkable generalization properties of highly overparameterized DNNs. In parallel, we provide a novel PAC-Bayes analysis based on the parameter-function map that generates the **first non-vacuous generalization bounds for DNNs that correctly scale with varying generalization performance for MNIST, fashion MNIST and CIFAR10**. These bounds would not work unless the parameter-function map is extremely biased, an effect we capture in our version of PAC-Bayes for DNNs. Regardless of whether or not the referee finds all this surprising, we believe that these results are significant, and have not been published in the literature on DNNs.\\n\\n__Complexity measures__\\nThe reviewer complains about our use of Lempel-Ziv (LZ), and that we put other complexity measures into the Appendix. 
In our original manuscript we write \\u201cHere we simply note that there is nothing fundamental about LZ. Other approximate complexity measures that capture essential aspects of Kolmogorov complexity also show similar correlations (see Appendix E.4).\\u201d\\nIn the current manuscript we have expanded this sentence slightly, but also note that the question of complexity measures has been further discussed in the Dingle et al (2018) paper we cite above.\\n\\nIn more detail, as we write in Appendix E.1 in the original manuscript, \\u201c[the ordering] may affect the LZ complexity, although for simple input orderings, it will typically have a negligible effect.\\u201d Therefore, the ordering of the domain is not totally arbitrary. It must be a Kolmogorov-simple ordering to ensure that the complexity of the resulting bit string is close to the complexity of the function.\\n\\nFurthermore, we don\\u2019t claim that LZ is fundamental, or necessarily the best choice. The motivation behind using LZ is that it is commonly used to approximate Kolmogorov complexity, and it seemed to be the one that correlated best with the probability of Boolean functions for the small fully connected network, although other measures (such as Boolean complexity) also do well. \\n\\nRegarding meaningfulness, we think some of the measures offered in the Appendix are perhaps more meaningful (or at least, interpretable) as they are truly only dependent on the function, and not domain ordering. \\n\\nAt any rate, while it is true that the literature on complexity measures is vast, and much more could be said about them, for the basic argument we are making in the paper we believe that the measures we use are sufficient.\"}",
"{\"title\": \"Answer to reviewer\", \"comment\": \"We thank Reviewer 1 for the constructive comments and feedback.\\n\\n__Tiny networks__\\nThe reviewer uses as a title \\u201cInteresting perspective but most relevant experiments are on very tiny networks\\u201d -- While we do use a small model network for our direct sampling, we perform a significant amount of work on more standard architectures and datasets, including 4 hidden layer CNNs with 200 filters per layer for all databases, and an FC network that has 1 hidden layer, with 784 neurons for MNIST and fashion MNIST, and 1024 neurons for CIFAR10. (We also used FC networks with more layers, but their results are similar, with only a small improvement in generalization). While they are not the latest state of the art, we don\\u2019t think that these DNNs are tiny. \\n\\n__Clarity of exposition__\\n\\nWe agree that we could have been clearer in what we are trying to achieve. To this end, we have expanded the introduction, and throughout the paper tried to make our arguments more clear (see also our general response above). In response to the reviewer, we have in particular improved the exposition in the sections which were a bit difficult to follow. In Sections 2 and 3, we explained the experiment and sampling procedure in more detail. In Section 2 we also defined the parameter-function map more clearly as per Reviewer 2\\u2019s advice. The mention of a \\u201ctraining set of 64 examples\\u201d was a typo, as the experiment in Section 2 did not involve any training.\\n\\nIn response to the referee, we have expanded the description of the Gaussian processes (GPs) and the Expectation-Propagation in section 4.1 to help people unfamiliar with the topic. Nevertheless the link between DNNs and GPs is a vast topic, going back to the famous 1995 work by Radford Neal. See also the pioneering recent papers we cite as [(Lee et al. (2017); Matthews et al. (2018); Garriga-Alonso et al. (2018); Novak et al. (2018))]. 
But we hope that what we write is sufficient for a non-expert to catch a flavour of the method. Some more detail on GPs can be found in Appendix C, and of course we are planning a longer publication explaining in much more detail how all this works for PAC-Bayes.\\n\\n__SGD__\\n\\nHere we quote the full paragraph on SGD because it raises an important issue that merits a longer response. The reviewer writes\\n\\u201cMoreover, the generalization bound is derived with the assumption that the learning algorithm uniformly sample from the set of all hypothesis that is consistent with a given training set. It is unlikely that this is what SGD is doing. But explicit experiments to verify how close is the real-world behavior to the hypothetical behavior would be helpful.\\u201d\\n\\n\\n__Our new experiments on SGD sampling__\\n\\nAs also described above in our general response, in the new section 6, we performed experiments which test the behaviour of SGD in a more direct way than most previous approaches, at the expense of being constrained to very small input spaces (we use the neural network with 7 Boolean inputs and one Boolean output). We performed experiments directly comparing the probability of finding individual Boolean functions when training the neural network with two variants of SGD, versus using the Gaussian process corresponding to the neural network architecture (which approximates Bayesian sampling of the parameters under an i.i.d. Gaussian prior). We find good agreement.\\n\\n__Complexity of real-world functions__\\n\\nThe reviewer also asks for direct measurements of the complexity of real world functions. This is indeed an interesting question. While the simplicity bias bound means that large P(f) must mean low complexity, it is not so easy to calculate the complexity for real world functions using most of the measures we consider in the Appendix. 
We are currently working on this question using the critical sample ratio, which is the most scalable measure. Preliminary results are encouraging, but they weren\\u2019t ready by the deadline to submit the manuscript.\\nIt\\u2019s not hard to imagine that the functions in Figs 3 and 4 are more complex as we corrupt the data more.\"}",
"{\"title\": \"Overview of changes\", \"comment\": \"We thank the reviewers for their constructive comments and feedback, which stimulated us to improve our paper. Here we list the main changes:\\n\\n1) We have rewritten the abstract and significantly expanded the introduction to make it clearer the question we are trying to answer: **Why do highly over-parameterised deep neural network (DNN) generalize at all, given that the expectation from classical learning theory is that such highly expressive models should strongly overfit?** Our answer to this generalization puzzle is that DNNs exhibit a strong intrinsic bias towards simple functions that provides the main source of implicit bias needed to explain the puzzle of generalization. There are many empirical results showing, for example, that using dropout or using SGD instead of gradient descent (GD), or using early stopping etc... leads to improvements in generalization. While important for practical applications, these improvements are generally relatively small, and so don\\u2019t answer the big question that we are trying to address.\\n\\n2) In section 2, we have added a clearer definition of the parameter-function map, which we argue provides a novel and fruitful lens through which to analyze generalization in DNNs.\\n\\n3) In section 3, we improved our description of how the AIT-inspired simplicity bias phenomenology from (K. Dingle, C. Q. Camargo and A. A. Louis, Nature Comm. 9, 761 (2018)) applies to the parameter-function map of DNNs. In particular simplicity-bias predicts that functions f with a relatively high probability P(f) to obtain upon random sampling of parameters will have a relatively low descriptional complexity. **Our key argument is that such easy-to-find functions will also generalize well**. We demonstrate that this works explicitly for our smaller model network with two hidden layers of 40 neurons each (that nonetheless can express on the order of 10^(38) functions).. 
Since the AIT based arguments for simplicity bias are very general, they should apply for larger DNNs as well where direct sampling is out of the question.\\n\\n4) In section 4 we have expanded our description of how we use Gaussian processes (GPs) to calculate the probability P(U) of the data, which plays a key role in our PAC-Bayes bounds, which, in turn provide an independent argument for why highly biased parameter-function maps lead to good generalization performance.\\n\\n5) We made no major changes to the results section, but do note that our CNN sizes are 4 hidden layers with 200 filters per layer for all databases, while the FC network had 1 hidden layer, with 784 neurons for MNIST and fashion MNIST, and 1024 neurons for CIFAR10. (We also used FC networks with more layers, but results are similar, with only a small improvement in generalization, so we didn\\u2019t show these). \\n\\n6) We have added a new section 6 where we directly compare the probability of finding individual functions using SGD with an estimate using GPs. While the agreement is not exact, which could be due to errors in our GP calculation (see Fig 6 in Appendix B), or due to deviations of SGD from Bayesian sampling, overall the trend is encouraging. We note that the probabilities range over many orders of magnitude. In PAC-Bayes, the bias enters the PAC-Bayes via a log, so we only need SGD to be similar to Bayesian sampling on a log scale. Thus we believe that the agreement we find here between SGD and the GP prior is good enough for PAC-Bayes to work. One can also turn this argument around: The fact that we find, for the first time for DNNs, non-vacuous (< 1) bounds using PAC-Bayes is highly non-trivial, and provides indirect evidence that SGD is indeed sampling functions roughly consistently with the prior P(f).\\n\\n7) We have slightly sharpened the conclusions, but not changed this section significantly. 
\\n\\n8) We have moved a section on choice of hyperparameters for the GP from the main text to Appendix C, as it was quite technical and not central to our main argument. We also added Appendix D, which explains how we compare function probabilities for GPs to SGD, as well as Appendix I, Bias and the curse of dimensionality, where we discuss why other machine learning methods that are not biased do suffer from overfitting, in contrast to DNNs.\"}",
"{\"title\": \"Answer to reviewer\", \"comment\": \"We thank Reviewer 2 for the constructive comments and feedback.\\n\\nWe have submitted a new draft where we address the concerns raised, and sharpen some of the main points we make. In particular, we have clarified what we are trying to explain in terms of the generalization puzzle. We are trying to explain the big picture of why overparametrized DNNs generalize at all, and have tried to clarify in the text that we are not trying to explain, for example, why SGD or dropout or other similar techniques improve further on generalization. We\\u2019re still happy to add qualifiers to the title if the reviewer wants us to.\\n\\nWe have fixed the bibliography, correctly citing peer-reviewed publications, and completing those with incomplete information.\\n\\nThe classification error on the training set is 0 in all the experiments in Figures 2 and 3. We have updated the captions to make this clear in the figures themselves. If instead \\u2018loss\\u2019 referred to the cross-entropy loss, that will of course not be exactly zero, but our discussion is centered on classification error, and so we think adding that would distract from the point of the figures.\\n\\nWe have added a definition of the parameter-function map in Section 2, and addressed all the other minor typos and comments. We also changed \\u201cSupplementary information\\u201d to Appendices.\"}",
"{\"title\": \"A fresh study to the generalization capabilities of (deep) neural networks, with the help of the PAC-Bayesian learning theory and empirically backed intuitions.\", \"review\": \"The paper brings a fresh study to the generalization capabilities of (deep) neural networks, with the help of an original use of PAC-Bayesian learning theory and some empirically backed intuitions.\\n\\nExpressing the prior over the input-output function space generated by the neural network is very interesting. This provides an original analysis compared to the common PAC-Bayesian analysis of neural networks that express the prior over network parameters space. The theoretical study here appears simple (noteworthy, it is based one of the very first PAC-Bayesian theorems of McAllester that is not the most used nowadays), and the study is conducted mainly by empirical observation. Nevertheless, the experiments leading to these observations are cleverly designed, and I think it gives great insights and might open the way to other interesting studies.\\n\\nOverall, the paper is enjoyable to read. I also appreciate the completeness of the supplementary material. I recommend the paper acceptance, but I would like the authors to consider the concerns I rise below:\\n- The paper title is a bit presumptuous. The paper presents a conjunction backed by empirical evidence on some not-so-deep neural networks. Even if I consider it as an important piece of work, it does not provide any definitive answer to the generalization puzzle. \\n- Many peer-reviewed publications are cited as arXiv preprints. Please carefully complete the bibliography. Some papers are referenced by the name, title and year only (Smith and Le 2018; Zhang et al, 2017)\\n- I recommend adding to the learning curves of Figures 2 and 3 the loss on the training set.\", \"other_minor_comments_and_typos\": [\"Intro: Please define \\\"parameter-function\\\" map\", \"Page 4: Missing parentheses around Mand et al. 
(2017)\", \"SGD has not had time ==> SGD did not have time\", \"Please refer to the definition in the supplementary material/information the first time you mention Lempel-Ziv complexity.\", \"Please mention that SI stands for Supplementary Information\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting perspective but most relevant experiments are on very tiny networks\", \"review\": \"This paper proposes an interesting perspective to explain the generalization behaviors of large over-parameterized neural networks by saying that the parameter-function map in neural networks is biased towards \\\"simple\\\" functions, and through a PAC-Bayes argument, the generalization behavior will be good if the target concept is also \\\"simple\\\". I like the point of view that combines the \\\"complexity\\\" of both the algorithm bias and the target concept in the view of generalization. However, the implementation and presentation of the paper could be improved.\\n\\nFirst of all, the paper is a bit difficult to follow as some important information is either missing or only available in the appendix. For example, in Section 2, to measure the properties of the parameter-function mapping, a simple boolean neural network is explored. However, it is not clear how the sampling procedure is carried out. There is also a 'training set of 64 examples', and it is not obvious to the reader how this training set is used in this sampling of neural network parameters.\\n\\nFollowing that, the paper uses Gaussian Process and Expectation-Propagation to approximately compute P(U). But the description is brief and vague (to non-experts in GP or EP). As one of the main contributions stated in the introduction, it would be better if more details were included.\\n\\nMoreover, the generalization bound is derived with the assumption that the learning algorithm uniformly samples from the set of all hypotheses that are consistent with a given training set. It is unlikely that this is what SGD is doing. But explicit experiments to verify how close the real-world behavior is to the hypothetical behavior would be helpful.\\n\\nThe experiment in section 6 that verifies the 'complexity' of 'high-probability' functions in the given prior is very interesting. 
It would be good if some kind of measurement could be done more directly on real-world tasks, which would better support the argument made in the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"not surprising\", \"review\": \"The authors make a case that deep networks are biased\\ntoward fitting data with simple functions.\\n\\nThey start by examining the priors on classifiers obtained by sampling\\nthe weights of a neural network according to different distributions. They do this\\nin two ways. First, they examine properties of the distribution\\non binary-valued functions on seven boolean inputs obtained by\\nsampling the weights of a small neural network. They also empirically compare\\nthe labelings obtained by sampling the weights of a network with\\nlabelings obtained from a Gaussian process model arising from earlier\\nwork.\\n\\nNext, they analyze the complexity of the functions produced, using\\ndifferent measures of the complexity of boolean functions. A\\nfavorite of theirs is something that they call Lempel-Ziv complexity,\\nwhich is measured by choosing an arbitrary ordering of the\\ndomain, writing the outputs of the function in that ordering,\\nand looking at how well the Lempel-Ziv algorithm compresses this\\nsequence. I am not convinced that this is the most meaningful\\nand fundamental measure of the complexity of functions.\\n(In the supplementary material, they examine some others.\\nThey show plots relating the different measures in the body\\nof the paper. None of the measures is specified in detail in the\\nbody of the paper. They provide plots relating these complexity\\nmeasures, but they don't demonstrate a very close connection.)\\n\\nThe authors then evaluate the generalization bound obtained by\\napplying a PAC Bayes bound, together with the assumption that\\nthe training process produces weights sampled from the distribution\\nobtained by conditioning weights chosen according to the random\\ninitialization on the event that they fit the training\\ndata perfectly. 
They do this for small networks and simple datasets.\\nThey bounds are loose, but not vacuous, and follow the same order\\nof difficulty on a handful of datasets as the true generalization\\nerror.\\n\\nIn all of their experiments, they stop training when the training\\naccuracy reaches 100%, where papers like https://arxiv.org/pdf/1706.08947.pdf\\nhave found that continuing training past this point further improves test\\naccuracy. The experiments all use architectures that are\\nquite dissimilar to what is commonly used in practice, and\\nachieve much worse accuracy, so that a reader is concerned\\nthat the results differ qualitatively in other respects.\\n\\nI do not find it surprising that randomly sampling parameters\\nof deep networks leads to simple functions.\\n\\nPapers like the Soudry, et al paper cited in this submission are\\ninconsistent with the assumption in the paper that SGD samples\\nparameters uniformly.\\n\\nIt is not clear to me how many hidden layers were used for the\\nresults in Table 1 (is it four?). \\n\\nI did find it interesting to see exactly how concentrated the\\ndistribution of functions obtained in their 7-input experiment\\nwas, and also found results on the agreement of the Gaussian process\\nmodels with the randomly sampled weight interesting, as far as they\\nwent. Overall, I am not sure that this paper provided enough\\nfundamental new insight to be published in ICLR.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJgEl3A5tm | CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild | [
"Yang Zhang",
"Hassan Foroosh",
"Philip David",
"Boqing Gong"
] | In this paper, we conduct an intriguing experimental study about the physical adversarial attack on object detectors in the wild. In particular, we learn a camouflage pattern to hide vehicles from being detected by state-of-the-art convolutional neural network based detectors. Our approach alternates between two threads. In the first, we train a neural approximation function to imitate how a simulator applies a camouflage to vehicles and how a vehicle detector performs given images of the camouflaged vehicles. In the second, we minimize the approximated detection score by searching for the optimal camouflage. Experiments show that the learned camouflage can not only hide a vehicle from the image-based detectors under many test cases but also generalizes to different environments, vehicles, and object detectors. | [
"Adversarial Attack",
"Object Detection",
"Synthetic Simulation"
] | https://openreview.net/pdf?id=SJgEl3A5tm | https://openreview.net/forum?id=SJgEl3A5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1l_crfvW4",
"HJlRKTeUZV",
"B1eQB3w2lV",
"Bkg6RrJjlV",
"ryxNmu6ge4",
"BJg7t_Y8AX",
"H1g6Z8FURX",
"Bkl5sVtIAQ",
"HyxNHfKU0m",
"HylF5fZc37",
"SJgCAUuK3X",
"B1gpWORE2m"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1546229135973,
1546157446185,
1545530427505,
1545430485437,
1544767516311,
1543047291294,
1543046661367,
1543046306406,
1543045691990,
1541178001438,
1541142229814,
1540839429206
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1065/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1065/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1065/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1065/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1065/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1065/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1065/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1065/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1065/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1065/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"s' is a list of scores\", \"comment\": \"Thank you for your positive comments. s' is a list of scores over various transformations and s* is a scalar. Instead of comparing s' and s* directly, we are comparing the mean of s', notated as s'/|s'|, with s* in Algorithm 1 line 9. We will change s'/|s'| to mean(s') to avoid further confusion.\"}",
"{\"comment\": \"Very interesting work. I have a question regarding Algorithm 1 line 7. When you calculate the actual score by querying the original detection model on the newly found camouflage, what did you store as s'? Is it a list of the scores over various transformations? Is it the minimum of the score over various transformations? Is it the sum of the scores over various transformations? It seems to me that s' must be a scalar, since you're comparing it with s*. Can you please clarify this? Thank you!\", \"title\": \"Clarification on Algorithm 1, Line 7\"}",
"{\"title\": \"Planar objects (e.g., the stop sign) vs. non-planar objects\", \"comment\": \"Thanks for the pointer to [1]. We will add it to our final version. We did have cited a similar paper[2] which also aims to physically perturb the stop-sign detectors. We will be glad to discuss the differences between our work and [1,2]. The discussion will also facilitate the other readers to understand the motivation of our paper better.\", \"good_physical_camouflages_are_supposed_to_fail_object_detectors_for_any_images_taken_about_the_camouflaged_object_under_all_conditions\": \"object-to-camera distance, background, lighting condition, view angle, etc. When we formalize the problem and try to optimize with respect to the camouflage, however, the \\u201cimaging function\\u201d which transforms the camouflage to camouflaged object and eventually various images is unknown. Hence, the key challenge to learning the physical camouflage is how to tackle this unknown imaging function.\\n\\nBoth[1] and [2] perturb the detectors of stop signs which are planar objects whose images, under changes in camera geometry, are related by linear 2D projective transformations. This is in contrast to non-planar objects (e.g., a car) whose images are related by more complex range-dependent nonlinear transformations. Hence, [1,2] are able to simplify such imaging function to projective transformations (cf. Section 4.2.2 in [1] and Section 4.1 in[2]) without breaking the gradient chain between the perturbation and the detector' output score. Complex nonlinear transformations, however, require a dedicated 3D simulation, for instance the one used in our paper. The 3D simulation breaks the gradient chain and turns the problem into a black-box optimization problem. 
In short, the non-planar objects break [1,2]'s premise since the approaches therein rely on functions that are differentiable with respect to the camouflage.\\n\\n\\n[1] Song, Dawn, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, and Tadayoshi Kohno. \\\"Physical adversarial examples for object detectors.\\\" In 12th {USENIX} Workshop on Offensive Technologies ({WOOT} 18). 2018.\\n[2] Chen, Shang-Tse, Cory Cornelius, Jason Martin, and Duen Horng Chau. \\\"Robust Physical Adversarial Attack on Faster R-CNN Object Detector.\\\" arXiv preprint arXiv:1804.05810 (2018).\"}",
"{\"comment\": \"This work does not cite or compare with work that appeared almost 6 months ago on attacking object detectors in the physical world and showing transferability. The work of Eykholt et al show a similar camouflage attack in making a stop sign disappear. curious about the differences to this paper, and what intellectual contributions it provides over existing work.\\n\\nEykholt et al., Physical Adversarial Examples for Object Detectors, USENIX WOOT 2018.\", \"https\": \"//www.usenix.org/conference/woot18/presentation/eykholt\", \"title\": \"Paper misses recent prior work that is very closely related\"}",
"{\"metareview\": \"This work develops a method for learning camouflage patterns that could be painted onto a 3d object in order to reliably fool an image-based object detector. Experiments are conducted in a simulated environment.\\n\\nAll reviewers agree that the problem and approach are interesting. Reviewers 1 and 3 are highly positive, while Reviewer 2 believes that real-world experiments are necessary to substantiate the claims of the paper. While such experiments would certainly enhance the impact of the work, I agree with Reviewers 1 and 3 that the current approach is sufficiently interesting and well-developed on its own.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareview: interesting approach\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Dear reviewer 1,\\n\\nThank you for your positive comments. We will answer your questions and comments one by one.\", \"q\": \"Two methods are proposed but I only find results for one.\", \"a\": \"The two key techniques jointly make it possible to learn a single camouflage to physically fail the detectors. Please see Figure 4 for an illustration of the unified framework. Probably our text was not clear enough and caused the confusion; we will improve the clarity.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Dear reviewer 3,\\n\\nWe appreciate that you highlighted the significance of our work. For your question concerning the transferability of the camouflage, we did have one single camouflage tested in both environments. Probably our presentation was not clear enough and caused the confusion; we will improve the clarity. We transfer and test the single camouflage under different detectors, across different locations of an environment, and in two different environments. The learned camouflage outperforms the baselines despite some of the factors (detectors, locations, the mountain environment) are unseen during the training.\\n\\nMore concretely, we learn a single camouflage on Camry in the urban environment against the Mask-RCNN detector. We then test this camouflage on Camry in the mountain environment against Mask-RCNN (Ours - Transferred in Table 4.), on Camry in the urban environment against the YOLO detector (Ours (Mask R-CNN trained) in Table 2.), on SUV in the urban environment against Mask-RCNN (Table 5.), and on Camry in the urban environment against Mask-RCNN with different cameras (Ours - Transferred in Table 6.). All of those results significantly outperform the baselines. It shows that it is possible to have a single camouflage generalize to different locations, detectors, environments, and even vehicles.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear reviewer 2,\\n\\nThank you for your detailed reviews. We will address your concerns one by one.\", \"q\": [\"eqs 1, 2 & 3: min_* should be argmin_*\"], \"a\": \"We have revised it.\"}",
"{\"title\": \"Paper revision: Two new experiments\", \"comment\": \"Dear all,\\n\\nWe have added two experiments to the appendix of the revised PDF. In the first experiment, we present the relationship between the newly added samples and the quality of the learned camouflage. We find that the more samples added, the learned camouflage is better since the clone network estimates the score better. In the second experiment, we present the detector's attention by visualizing the gradient heatmap of the detector's classification score w.r.t. The input image. We find that the detector mostly places its attention on the upper body of the car. This leads to better camouflage performance in the front and rear detector view since the upper body camouflages are more visible in these two views. We have also found a bug in our countryside environment evaluation. Our learned camouflages turned out to be more robust and transferable than we previously anticipated after fixing the bug. We have updated the results in Table.4. Finally, we have made some other minor revisions according to the reviewers\\u2019 suggestions.\"}",
"{\"title\": \"Interesting problem, interesting approach but misses opportunities for detailed analysis. Not clear it will scale to real-world applications.\", \"review\": \"The authors investigate the problem of learning a camouflage pattern which, when applied to a simulated vehicle, will prevent an object detector from detecting it. The problem is frames as finding the camouflage pattern which minimises the expected decision scores from the detector over a distribution of possible views of this pattern applied to a vehicle (e.g. vantage point, background, etc). This expectation is approximated by sampling detector scores when applying the detector considered to images synthesised using a number of vantage points and scene contexts. In order to generate a gradient signal with respect to the camouflage applied (the simulated image rendering is non-differentiable) the approach considers learning a clone network which takes as input the camouflage pattern, the vehicle model and a given environment and outputs the vehicle detector\\u2019s devision values. The clone network and the optimal camouflage are learned alternately in order to obtain a relatively faithful approximation. The approach is evaluated in simulation using two standard object detectors (Mask R-CNN and YOLOv3) on two vehicle models over a range of transformations.\", \"pros\": [\"\\u2014\\u2014\\u2014\", \"interesting challenge\", \"the clone network provides an interesting solution to the challenge of having a (non-differentiable) simulator in the loop.\"], \"cons\": [\"\\u2014\\u2014\\u2014\", \"the fact that this is done in simulation is understandable but to some degree negates the authors\\u2019 point that physicality matters because it is harder than images. How effective is a learned camouflages in reality? It would be great to get at least some evidence of this.\", \"if the sim-2-real gap is too big it is not clear how this approach could ever be feasibly employed in the real world. 
Some intuition here would add value.\", \"the approach presented critically hinges on the quality of the clone network. Some analysis of robustness of the approach with respect to network performance would add value.\", \"little is offered in terms of analysis of the camouflages generated. The three failure modes triggered and discussed are intuitive. But are there any insights available as to what aspect of the camouflages triggers these behaviours in the object detectors? This could also add significant value.\", \"In summary, this work addresses an interesting problem but it is not clear how impactful the approach will be in real-world settings and hence how significant it it. Some of the more technically interesting aspects of the submission (e.g. the quality of the clone network, any learning derived from the camouflages generated) are not explored.\"], \"misc_comments\": [\"equs 1, 2 & 3: min_* should be argmin_*\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Physical adversarial attack on object detectors is interesting.\", \"review\": \"This is an interesting paper targeting adversarial learning for interfering car detection. The approach is to learn camouflage patterns, which will be rendered as a texture on 3D car models in urban and mountain scenes, that minimizes car detection scores by Mask R-CNN and YOLOv3-SPP.\\n\\nDifferent from image-based adversarial learning, this paper examines whether 3D car textures can degrade car detection quality of recent neural network object detectors. This aspect is important because the learned patterns can be used in the painting of real-world cars to avoid automatic car detection in a parking lot or on a highway.\\n\\nThe experimental results show that the car detection performance significantly drops by learned vehicle camouflages.\", \"major_comments\": [\"It is not clear how learned camouflage patters are different in two scenes. Ideally, we should find one single camouflage patter that can deceive the two or more object detection systems in any scenes.\"], \"minor_comments\": [\"In abstract, it is not good that you evaluated your study as \\\"interesting\\\". I recommend another word choice.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Adversarial attacks for vehicles in simulators\", \"review\": \"Adversarial attacks and defences are of growing popularity now a days. As AI starts to be present everywhere, more and more people can start to try to attack those systems. Critical systems such as security systems are the ones that can suffer more from those attacks. In this paper the case of vehicles that attack an object detection system by trying to not be detected are tackled.\\n\\nThe proposed system is trained and evaluated in a simulation environment. A set of possible camouflage patterns are proposed and the system learns how to setup those in the cars to reduce the performance of the detection system. Two methods are proposed. Those methods are based on Expectation over transformation method. This method requires the simulator to be differentiable which is not the case with Unity/Unreal environments. The methods proposed skip the need of the simulator to be differentiable by approximating it with a neural network.\\n\\nThe obtained results reduce the effectivity of the detection system. The methods are compared with two trivial baselines. Isn't there any other state of the art methods to compare with?\\n\\nThe paper is well written, the results are ok, the related work is comprehensive and the formulation is correct. The method is simply but effective. Some minor comments:\\n - Is the simulator used CARLA? Or is a new one? Where are the 3D assets extracted from?\\n - Two methods are proposed but I only find results for one\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Hke4l2AcKQ | MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders | [
"Xuezhe Ma",
"Chunting Zhou",
"Eduard Hovy"
] | Variational Autoencoder (VAE), a simple and effective deep generative model, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. However, recent studies demonstrate that, when equipped with expressive generative distributions (aka. decoders), VAE suffers from learning uninformative latent representations with the observation called KL Varnishing, in which case VAE collapses into an unconditional generative model. In this work, we introduce mutual posterior-divergence regularization, a novel regularization that is able to control the geometry of the latent space to accomplish meaningful representation learning, while achieving comparable or superior capability of density estimation.Experiments on three image benchmark datasets demonstrate that, when equipped with powerful decoders, our model performs well both on density estimation and representation learning. | [
"VAE",
"regularization",
"auto-regressive"
] | https://openreview.net/pdf?id=Hke4l2AcKQ | https://openreview.net/forum?id=Hke4l2AcKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gjl5gJzV",
"SJxCSBYegV",
"HkxNWckD0Q",
"ByxYlNP4C7",
"HyeQs7536Q",
"r1eryMXiTX",
"HklIzEowaQ",
"H1l3IfiwaQ",
"Byl31zowTX",
"SJec11dj3X",
"rJgA-rlU37",
"BJgo0o0NnX",
"BylPAIJZ97",
"BJlD3jFg9X"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1546746355180,
1544750406358,
1543072252294,
1542906864995,
1542394779505,
1542300124971,
1542071309971,
1542070867718,
1542070755612,
1541271266227,
1540912390433,
1540840403325,
1538483919014,
1538460591046
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1064/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1064/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1064/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1064/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1064/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1064/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1064/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1064/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1064/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1064/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1064/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1064/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1064/Authors"
],
[
"~Hello_Kitty2"
]
],
"structured_content_str": [
"{\"title\": \"Carema-Ready version updated\", \"comment\": \"Carema-Ready version of this paper has been uploaed\"}",
"{\"metareview\": \"This paper proposes a solution for the well-known problem of posterior collapse in VAEs: a phenomenon where the posteriors fail to diverge from the prior, which tends to happen in situations where the decoder is overly flexible.\\n\\nA downside of the proposed method is the introduction of hyper-parameters controlling the degree of regularization. The empirical results show improvements on various baselines.\\n\\nThe paper proposes the addition of a regularization term that penalizes pairwise similarity of posteriors in latent space. The reviewers agree that the paper is clearly written and that the method is reasonably motivated. The experiments are also sufficiently convincing.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review\"}",
"{\"title\": \"Results no longer look marginal, thanks for the extra work!\", \"comment\": \"Thanks to the authors for the work in addressing my questions and comments.\\n\\n2. That\\u2019s interesting to know, makes sense indeed. I would explicitly indicate this in your \\u201cMeasure of Smoothness.\\u201d section then, as this does not come across in the current text.\\nThe new figure in Appendix B.1.3 is interesting to see, but does not seem to indicate such a drastic effect, which I guess might be due to t-SNE \\u201cfixing it\\u201d, but I am not sure what would be the best way to showcase this effect. \\n\\n5. Yes sorry what I meant by \\u201clatent traversals\\u201d is something akin to the single unit clamping done in Beta-VAE (Higgins et al 2017, https://openreview.net/forum?id=Sy2fzU9gl). In your case, given you have latents with 32 dimensions this is harder to do easily, hence interpolations might be interesting to see indeed.\\n\\nI think the updated results seem to make the model stronger and show visible improvements on VLAE. \\nI am still a bit unclear on the exact characteristics of the latent space learnt and I\\u2019m looking forward to see more work in that direction. \\n\\nHence the paper does seem good enough in its current state, so I\\u2019d recommend publication as a poster (keeping my score, increasing my confidence).\"}",
"{\"title\": \"Thanks for your response.\", \"comment\": \"The changes to the paper look great, thanks for your updates. They do not, however, change my basic opinion of the paper and so I will maintain my score as is.\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thank you for upgrading your score!\\nWe really appreciate your suggestion to evaluate learned representations with simple non-linear classifiers.\\nWe are performing experiments with SVM using non-linear kernels and will update results soon.\"}",
"{\"title\": \"Response to author feedback\", \"comment\": \"Thank you for your clarifications and the additional experiments. As a result of these, I have increased my score by one point.\\n\\nI agree with your comments on the importance of learning interpretable and disentangled representation. However notice that this can also be achieved learning simple non-Euclidean spaces, that may require however a simple but non-linear classifiers (e.g. 1-layer neural network with a small number of hidden units, non-linear SVM).\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thank you for the insightful comments!\\n-- For your questions and concerns about the results on CIFAR-10, please see this post:\", \"https\": \"//openreview.net/forum?id=Hke4l2AcKQ&noteId=BylQ2fjL6X\\nwhere we show stronger performance of our model.\\n\\n-- For your questions about the motivation of our method:\\n \\u201cencouraging the learned variational posteriors to be diverse\\u201d is the motivation of L_diversity. If we only have L_diversity in our regularization method, it is, as in your comment, counter-intuitive for similar data points. However, by adding the smoothness term L_smooth, we expect that the model itself is able to learn how to balance diversity and smoothness to capture both diverse patterns in different data points and shared patterns in similar ones. And our experimental results show that these two regularization terms together help achieve stronger performance.\\n\\n-- For your questions about additional computational burden:\\nIn order to train the model with large batch size, like 100, it requires more memory. But the computation of all the regularization terms is relatively efficient comparing to the computation of other parts of the objective. And the model converges as fast as that without the regularization.\\n\\n-- We really appreciate your comments about the evaluation of the learned representations. \\nWe agree that the latent manifold of VAEs may not be Euclidean.\\nHowever, as discussed in our paper and previous works, good latent representations need to capture global structured information and disentangle the underlying causal factors, tease apart the underlying dependencies of the data, so that it becomes easier to understand, to classify, or to perform other tasks. Evaluating learned representations with unsupervised or semi-supervised methods with limited capacity is a reasonable way and has been widely adopted by previous works. 
From this perspective, it might be an important advantage of our method if our regularizer can force the space to be more Euclidean, because the learned representations are easier to be interpreted and utilized. Flexible classifiers might favor representations by just memorizing the data, thus not providing fair evaluation of the learned representations.\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"Thank you for the insightful comments!\\n\\nFor your questions and concerns about the results on CIFAR-10 with more expressive decoders, please see this post:\", \"https\": \"//openreview.net/forum?id=Hke4l2AcKQ&noteId=BylQ2fjL6X\\nwhere we show stronger performance with more expressive decoders for our model.\\n\\nFor your specific questions, \\n1 & 2. We appreciate your suggestion to perform ablation experiments for the two terms in our regularizer. Actually, both of the regularization terms play important roles. Without L_smooth, the model will easily place different posteriors into isolated points far away from each other, obtaining L_diversity close to zero, and the model performance on both density estimation and representation learning is worse than original VLAE without the regularization. Moreover, removing the L_smooth term, the training of the model becomes unstable.\\n\\n3. Thanks for your suggestion, we have added samples from VLAE in the updated version.\\n\\n4. Thanks for your comment, we have revised the paper to fix the grammatical mistakes.\"}",
"{\"title\": \"Response to Review 1\", \"comment\": \"Thank you for the insightful comments!\", \"for_your_questions\": \"1. Thanks for pointing out the related work. We cited Esmaeili\\u2019s paper in our updated version. Actually, MAE does not fit anyone in their Table A.2. If we also decompose our objective in the same, our objective is, if we use the original form of MPD and ignore L_sommth, term (1) + (2) + (4\\u2019), where (4\\u2019) is a modified version of (4).\\nThe original (4) is KL(q(z) || p(z)) = E_q(z} [log q(z) - log p(z)], while (4\\u2019) is E_{p(x) q(z)} [log q(z|x) - log p(z)]\\n\\n2. In our experiments, L_smooth plays a very important role. If we remove it, the model will easily place different posteriors into isolated points far away from each other, obtaining L_diversity close to zero. This phenomenon becomes more serious when a more powerful prior is applied, like auto-regressive flow. The unsupervised clustering and semi-supervised classification experiments justified the necessity of L_smooth. We also visualized the latent spaces with different settings in Appendix B.1.3, which might be helpful to understand the effects of the two regularization terms.\\n\\nFrom the theoretical perspective, we have not provided rigorous support of L_smooth and will leave it to future work.\\n\\n3. In order to better approximate L_diversity, we used large batch size in our experiments. For binary images, we use batch size 100. For natural images, due to memory limits, we use 64. The details are provided in Appendix. In practice, we found that these batch sizes provide stable estimation of L_diversity.\\n\\n4. As we discussed in the paper, one advantage of our regularization method is that L_diversity is computationally efficient. Previous works such as InfoVAE and AAE also has considered the Jensen-Shannon Divergence. But directly optimizing it is intractable, and they applied adversarial learning methods.\\n\\n5. 
We plan to show the reconstruction results with linearly interpolated z-vectors in another updated version. We appreciate your suggestions if there are better ways of investigating the latent space in terms of \\\"latent travelsals\\\".\\n\\n6. The possible reason that VLAE obtained worse reconstruction than the original paper is that in our experiments, we used more powerful decoders with more layers and receptive fields. We want to test the performance of our regularizer with sufficiently expressive decoders. With more powerful decoders, our reimplementation of VLAE achieved better NLL but worse reconstruction, showing that VLAE suffers the KL varnishing issue with stronger decoders.\\n\\n7. Thanks for your suggestion! We will make figure 2 easier to understand and update the revised version later.\"}",
"{\"title\": \"Interesting paper with marginal results\", \"review\": \"This paper proposes changes to the ELBO loss used to train VAEs, to avoid posterior collapse. They motivate their additional components rather differently than what has been done in the literature so far, which I found quite interesting.\\nThey compare against appropriate baselines, on MNIST and OMNIGLOT, in a complete way.\\n\\nOverall, I really enjoyed this paper, which proposed a novel way to regularise posteriors to force them to encode information. However, I have some reservations (see below), and looking squarely at the results, they do not seem to improve over existing models in a significant manner as of now.\", \"critics\": \"1.\\tThe main idea of the paper, in introducing a measure of diversity, was well explained, and is well supported in its connection to the Mutual Information maximization framing. One relevant citation for that is Esmaeili et al. 2018, which breaks the ELBO into its components even further, and might help shed light on the exact components that this new paper are introducing. E.g. how would MAE fit in their Table A.2?\\n2.\\tOn the contrary, the requirement to add a \\u201cMeasure of Smoothness\\u201d was less clear and justified. Figure 1 was hard to understand (a better caption might help), and overall looking at the results, it is even unclear if having L_smooth is required at all?\\n\\nIts effect in Table 1, 2 and 3 look marginal at best?\\n\\nGiven that it is not theoretically supported at all, it may be interesting to understand why and when it really helps.\\n3.\\tOne question that came up is \\u201chow much variance does the L_diverse term has\\u201d? If you\\u2019re using a single minibatch to get this MC estimate, I\\u2019m unsure how accurate it will be. Did changing M affect the results?\\n4.\\tL_diverse ends up being a symmetric version of the MI. What would happen if that was a Jensen-Shannon Divergence instead? 
This would be a more principled way to symmetrically compare q(z|x) and q(z).\\n5.\\tOne aspect that was quite lacking from the paper is an actual exploration of the latent space obtained. \\nThe authors claim that their losses would control the geometry of the latents and provide smooth, diverse and well-behaved representations. Is that the case?\\n\\nCan you perform latent traversals, or look at what information is represented by different latents?\\n \\nThis could actually lend support to using both new terms in your loss.\\n6.\\tReconstructions on MNIST by VLAE seem rather worse than what can be seen in the original publication of Chen et al. 2017? Considering that the re-implementation seems just as good in Tables 1 and 3, is this discrepancy surprising?\\n7.\\tFigure 2 would be easier to read by moving the columns apart (i.e. 3 blocks of 3 columns).\\n\\nOverall, I think this is an interesting paper which deserves to be shown at ICLR, but I would like to understand if L_smooth is really needed, and why results are not much better than VLAE.\", \"typos\": [\"KL Varnishing -> vanishing surely?\", \"Devergence -> divergence\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Improving InfoVAE\", \"review\": \"This paper presents a new regularization technique for VAEs similar in motivation and form to the work on InfoVAE. The basic intuition is to encourage different training samples to occupy different parts of z-space, by maximizing the expected KL divergence between pairwise posteriors, which they call Mutual Posterior-Divergence (MPD). They show that this objective is a symmetric version (sum of the forward and reverse KL) of the Mutual Info regularization used by the InfoVAE. In practice however, they do not actually use this objective. They use a different regularization which is based on the MPD loss but which they say is more stable because it's always greater than zero, and ensures that all latent dimensions are used. In addition to the MPD based term, they also add another term which encourages the pairwise KL-divergences to have a low standard-deviation, to encourage more even spreading over the z-space rather than the clumpy distribution that they observed with only the MPD based term.\\n\\nThey show state of the art results on MNIST and Omniglot, improving over the VLAE. But on natural data (CIFAR10), their results are worse than VLAE.\", \"pros\": \"1. The technique has a nice intuitive (but not particularly novel) motivation which is kinda-sorta theoretically motivated if you squint at it hard enough.\\n\\t2. The results on the simple datasets are solid and encouraging.\", \"cons\": \"1. The practical implementation is a bit ad-hoc and requires tuning two additional hyperparameters (like most regularization techniques).\\n\\t2. The basic motivation and observations are the same as InfoVAE, so it's not completely novel.\\n\\t3. 
The CIFAR10 results are a bit concerning, and one can't help but wonder if the technique really only helps when the data has simpler shared structure.\", \"overall\": \"I think the idea is interesting enough, and the results encouraging enough, to be just above the bar for acceptance at ICLR.\", \"i_have_the_following_question_for_the_authors\": \"1. Why do you use the truncated pixelcnn on CIFAR10? Did you try it with the more expressive decoder (as was used on the binary images) and get worse results? Or is there some other justification for this difference?\", \"i_would_have_like_to_see_the_following_modifications_to_the_paper\": \"1. The paper essentially presents two related but separate regularization techniques. It would be nice to have ablation results to show how each of these performs on its own.\\n\\t2. Bonus points for showing results which combine VLAE (which already has a form of the MPD regularization) with the smoothness regularization.\\n\\t3. It would be nice to see samples from VLAE in Figure 3 next to the MAE samples to more easily compare them directly.\\n\\t4. There are many grammatical and English mistakes. The paper is still quite readable, but please make sure the paper is proofread by a native English speaker.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, but the experiments could be improved\", \"review\": \"In this paper the authors present mutual posterior divergence regularization, a data-dependent regularization for the ELBO that enforces diversity and smoothness of the variational posteriors. The experiments show the effectiveness of the model for density estimation and representation learning.\\nThis is an interesting paper dealing with the important issues of fully exploiting the stochastic part of VAE models and avoiding inactive latent units in the presence of very expressive decoders. The paper reads well and is well motivated. \\n\\nThe authors claim that their method is \\\"encouraging the learned variational posteriors to be diverse\\\". While it is important to have models that can make good use of the latent space, the constraints that are encoded seem too strong. If two data points are very similar, why should there be a term encouraging their posterior approximations to be different? In this case, their true posteriors will in fact be similar, so it seems counter-intuitive to force their approximations to be different.\\n\\nThe numerical results seem promising, but I think they could be further improved and made more convincing.\\n- For the density estimation experiments, while there is an improvement in terms of NLL thanks to the new regularizer, it is not clear what the additional computational burden is. How much longer does it take to train the model when computing all the regularization terms in the experiments with batch size 100? \\n- I am not completely convinced by the claims on the ability of the regularizer to improve the learned representations. K-means implicitly assumes that the data manifold is Euclidean. However, as shown for example by [Arvanitidis et al. 
Latent space oddity: on the curvature of deep generative models, ICLR 2018] and other authors, the latent manifold of VAEs is not Euclidean, and curved Riemannian manifolds should be used when computing distances and performing clustering. Applying k-means in the high dimensional latent spaces of ResNet VAE and VLAE therefore does not seem a good idea.\\nOne possible reason why your MAE model may perform better in the unsupervised clustering of Table 2 is that the terms added to the elbo by the regularizer may force the space to be more Euclidean (e.g. the squared difference term in the Gaussian KL) and therefore more suitable for k-means. \\n- The semi-supervised classification experiment is definitely better to assess the representation learning capabilities, but KNN suffers from the same issues with the Euclidean distance as in the k-means experiments, and the linear classifier may not be flexible enough for non-euclidean and non-linear manifolds. Have you tried any other non-linear classifiers?\\n- Comparisons with other methods that aim at making the model learn better representations (such as the kl-annealing of the beta-vae) would be useful.\\n- The lack of improvements on the natural image task is a bit concerning for the generalizability of the results.\", \"typos_and_minor_comments\": [\"devergence -> divergence in introduction\", \"assistant -> assistance in 2.3\", \"the items (1) and (2) in 3.1 are not very clear\", \"set -> sets in 3.2\", \"achieving -> achieve below theorem 1\", \"cluatering -> clustering in table 2\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"thanks for pointing out missing related work\", \"comment\": \"Thanks for pointing out the related work missed in the paper.\\nWe will cite and compare with it in our revised version.\\n\\nWe really appreciate your comments about the notation used in the appendix.\\nWe will revise it.\"}",
"{\"comment\": \"This work is closely related to the following work:\\n R. D. Hjelm et al, \\\"Learning deep representations by mutual information estimation and maximization\\\", https://arxiv.org/abs/1808.06670.\\n\\n I would suggest the authors cite the latest work and compare the performance between the two methods. \\n\\n By the way, in the appendix, it mentions that the KL divergence is equal to H(.,.) - H(.), where H(.,.) denotes the relative entropy. Note that relative entropy is actually the KL divergence. Please use a proper name to define H(.,.). The different information measures can be found in \\n 1. Cover and Thomas, \\\"Elements of Information Theory\\\".\\n 2. Raymond Yeung, \\\"Information Theory and Network Coding\\\"\\n 3. Robert Gallager, \\\"Information Theory and Reliable Communication\\\"\", \"title\": \"suggestion\"}"
]
} |
|
SkMQg3C5K7 | A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks | [
"Sanjeev Arora",
"Nadav Cohen",
"Noah Golowich",
"Wei Hu"
] | We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018). | [
"Deep Learning",
"Learning Theory",
"Non-Convex Optimization"
] | https://openreview.net/pdf?id=SkMQg3C5K7 | https://openreview.net/forum?id=SkMQg3C5K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJxU-an4xN",
"H1e_FvysRm",
"Byxc7V1sC7",
"r1eIQn65A7",
"rklG12aqCX",
"rJlJC5aqAm",
"HygW8mcwaX",
"rklm2LHvpQ",
"H1lFEi7Dam",
"ByeG0q7wTm",
"B1laO57wam",
"ryx_BxxhhQ",
"SyeSyuqq3m",
"r1lBZQJZnm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545026814004,
1543333759802,
1543332898505,
1543326749958,
1543326682346,
1543326406663,
1542067016832,
1542047403266,
1542040368980,
1542040266501,
1542040180559,
1541304383626,
1541216221390,
1540580093291
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1063/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1063/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1063/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1063/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This is a well written paper that contributes a clear advance to the understanding of how gradient descent behaves when training deep linear models. Reviewers were unanimously supportive.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Solid advance in convergence analysis of gradient descent for deep linear networks\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the swift response and positive feedback!\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you, I have changed my score!\"}",
"{\"title\": \"Experiments\", \"comment\": \"We have added experiments to our submission which we believe may address your comment regarding impact on deep learning practice. The procedure of balanced initialization, motivated by our theory, leads to improved (faster and more stable) convergence in the settings we evaluated. We hope this will inspire similar ideas that will lead to improvements in larger, state-of-the-art settings.\"}",
"{\"title\": \"Experiments added\", \"comment\": \"See above\"}",
"{\"title\": \"Submission update\", \"comment\": \"Submission was updated in accordance with our responses to reviewers --- several clarifications were added, and a new experimental section demonstrates the potential of balanced initialization to improve convergence in practice, analogously to its theoretical benefits.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the positive feedback! We plan to add an experimental section before deadline which will illustrate useful facts about the dynamics and support our theory. That may also be of interest.\"}",
"{\"title\": \"Reply\", \"comment\": \"Dear author(s),\\n\\nThank you for your response! \\n\\nYes, I do agree that theoretical aspects of neural networks need to be investigated and spread more. I also agree that you have a nice theoretical result for deep linear neural networks. My only concern is just whether this network could have some impact in practice, and why we have to use it instead of minimizing the \\\"end-to-end\\\" model directly. \\n\\nAlthough there are still some limitations of the results, I hope this paper could influence others to explore the theoretical side of neural networks further, rather than just experiments. \\n\\nI will likely increase my score for your paper!\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you for the support and the very thoughtful review!\\n\\n----\\n\\nWith regards to the \\u201cT blowing up in N\\u201d matter:\\n\\nPlease note that our paper only provides *upper bounds* on the time it takes gradient descent to converge. These bounds indeed increase with depth, but we do not claim they are tight. It is possible that while our bound becomes worse, optimization actually accelerates.\\n\\nAs you have noted, if we assume initialization admits a fixed deficiency margin $c > 0$, our main convergence result suffers from polynomial deterioration in depth (balancedness needs to be tighter, learning rate smaller, and number of iterations larger). This corresponds to the case of balanced initialization (Procedure 1). When layers are initialized independently around zero, the deficiency margin decreases exponentially with depth, further lengthening convergence time by an exponential factor. In this sense our results comply with [1], though we provide only upper bounds (positive results), without lower bounds (negative results) like theirs.\\n\\nBecause we provide only upper bounds, there is no contention between our results and [2], which claims that added depth can sometimes accelerate convergence. Moreover, to the best of our knowledge, [2] does not consider asymptotic dependence on depth. It shows (through a mix of theory and experiments) that increasing depth from one layer to two or three can accelerate gradient descent under $l_p$ regression with $p > 2$, while pointing out that even in such settings, additional layers can cause a \\u201cvanishing gradient problem\\u201d (which may be alleviated by replacing near-zero initialization with identity initialization). 
As you suggest, an additional point reconciling our results with those of [2] is the fact that in the setting we treat --- $l_2$ regression --- they did not find any acceleration by depth.\\n\\n\\n[1] Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks. Shamir. 2018.\\n[2] On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. Arora et al. ICML 2018.\\n\\n----\\n\\nAnswers to specific questions and comments (by the order in which they appear):\\n\\n* Throughout the paper, whenever we mention the necessity of our key assumptions (approximate balancedness and deficiency margin), we clearly state that this means a violation of any one of them *can* cause divergence. This is indeed not a strong form of necessity. A clear separation between all settings that lead to convergence and those leading to divergence would be extremely valuable; this ambitious goal is left for future work.\\n\\n* In Claim 3, the requirement $d_0 \\\\geq 20$ is purely technical, designed to allow a more compact presentation of our results. It is used after Equation (47) in the appendix, to simplify expressions there.\\n\\n* In Theorem 2, the constants $d\\u2019_0$ and $a$ are not presented explicitly because their dependence on $p$ is implicit. Namely, as discussed at the end of the proof (Appendix D.3), any choice of $d\\u2019_0$ and $a$ that ensures Equation (38) is greater than $p$ suffices. We give there an example demonstrating that these constants need not be large ($d\\u2019_0 = a = 100$ suffice for $p = 0.25$). We will add this information as a footnote below Theorem 2. 
Thank you for raising the matter!\\n\\n* In Section 5, the statement of balanced initialization (Procedure 1) circumventing the \\u201cvanishing gradient problem\\u201d means precisely what you mentioned in the beginning of your review --- as opposed to layer-wise independent initialization, balanced initialization is not prone to producing extremely small end-to-end matrices when the depth is large. This means that it can escape the near-zero region, thereby ensuring that $\\\\sigma_min$ multiplier in Equation (9) is sufficiently large at initialization (our analysis then shows it will remain that way throughout optimization).\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": [\"Thank you for the positive and enlightening feedback! Below we address your comments/questions by the order in which they appear.\", \"The assumption of deficiency margin at initialization does not trivialize the optimization problem. It induces a sublevel set without saddles, but nonetheless, this sublevel set is unbounded, and the landscape over it is non-convex and non-smooth. Indeed, we show in Appendix C that initialization with deficiency margin alone is not enough to ensure convergence --- without approximate balancedness, the non-smoothness can cause divergence. This is an excellent question that will be addressed in an updated version of the paper.\", \"In the case where $\\\\Phi$ is rank deficient the problem can be reduced to a subspace in which it has full rank, and if the end-to-end matrix is initialized within that subspace our analysis holds. The scenario of initialization outside the subspace is currently not covered. We regard its treatment as a direction for future work. We will mention this issue in the text; thank you for raising the question!\", \"In the paragraph closing Section 3.1, the sentence you quote means that when the standard deviation of initialization is very small, the deficiency margin (if exists) will be small as well, thus the convergence rate we provide will be slow. This accords with the \\u201cvanishing gradient problem\\u201d, by which small (near-zero) initialization can significantly hinder convergence. We realize from your question that in its current form the sentence may be confusing. It will be rephrased.\", \"We will add an appendix with some numerical experiments demonstrating our findings.\"]}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"Thank you for the feedback. Below are answers to your comments by their numbering.\\n\\n1) Linear neural networks are of interest not for their practical importance (as stated in the introduction, they are no better than linear predictors), but because they are viewed as a first step in theoretical analysis of optimization for deep learning. Similar to non-linear neural networks, their loss landscape is highly non-convex with multiple minima and saddles. Hence the extensive past work on this model, e.g. [1]-[6] below.\\n\\n2) The loss landscape of a linear neural network always includes saddle points (non-strict saddles if depth > 2). Example: network with all-zero weights, where gradient vanishes but (in any reasonable setting) there exist directions that will decrease the loss. As you suggest, it is indeed easier to optimize the end-to-end model directly (convex program), but the entire point in studying linear neural networks is analysis of gradient descent on the non-convex loss.\\n\\n3) Our paper --- specifically the analysis of discrete updates --- treats only L2 loss (as do [1]-[4] below), but can be extended to cover any smooth and strongly convex loss. The more idealized analysis of gradient flow can allow non-strongly convex losses as well, in particular ones used for classification (e.g. cross-entropy). We will mention this in the text.\\n\\n4) The constant $c$ in Definition 2 --- deficiency margin --- indeed affects our established convergence rate. The likelihood of it being sufficiently large under random initialization is a major topic in the paper, addressed extensively throughout Section 3 and Appendix B. In a nutshell, one can ensure significant deficiency margin (with constant probability), but that may come at the expense of approximate balancedness (Definition 1). 
The challenge is to satisfy both conditions, and for that we define the balanced initialization in Section 3.3.\\n\\n6) We will add an appendix with some numerical experiments demonstrating our findings. Note that in general, the performance of linear neural networks has been evaluated in various papers (e.g. [1] and [5] below).\\n\\n\\nFinally, with regards to your summary of our paper, we would like to point out that linear residual networks --- the subclass of linear neural networks for which convergence has previously been established (by Bartlett et al.) --- are characterized not only by the input, output and hidden dimensions being the same, but also by a restriction to the specific initialization of identity.\\n\\n\\n[1] Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. Saxe et al. ICLR 2014.\\n[2] Deep learning without poor local minima. Kawaguchi. NIPS 2016.\\n[3] Identity matters in deep learning. Hardt and Ma. ICLR 2016.\\n[4] Gradient descent with identity initialization efficiently learns positive definite linear transformations. Bartlett et al. ICML 2018.\\n[5] On the optimization of deep networks: Implicit acceleration by overparameterization. Arora et al. ICML 2018.\\n[6] Deep linear networks with arbitrary loss: All local minima are global. Laurent and Brecht. ICML 2018.\"}",
"{\"title\": \"nice theoretical result about deep linear neural networks\", \"review\": \"This paper continues the recent line of study on the convergence behaviour of gradient descent for deep linear neural networks. For more than 2 layers, the optimization problem is nonconvex and it is known that strict saddle points exist. The main contribution is a relaxation of the balancedness condition in previous work by Arora et al and a new deficiency margin condition, which together allowed the authors to prove that gradient descent will converge to an epsilon solution in at most O(log 1/epsilon) iterations (under reasonable assumptions on step size and other parameters). Examples of how to satisfy the two conditions are discussed. Overall, the obtained results appear to be a solid contribution beyond our current understanding of deep linear neural networks, and potentially may be helpful towards our understanding of deep nonlinear neural networks.\\n\\nThis paper is very well-written. The authors gave an elegant short proof for the gradient flow case and spent effort proving the discretized version as well. The discussion of related works seems to be appropriate and thorough. \\n\\nOne thing I would love to see more discussion about is the deficiency margin assumption. I know the authors provided some argument about its necessity in the appendix, but is it possible that under the deficiency margin assumption, the nonconvex optimization problem is really \\\"trivial\\\", hence the linear convergence of gradient descent? For instance, can one prove that on this level set there still could be some (strict) saddle-point? And what if Phi is rank-deficient?\\n\\nIn the paragraph preceding Section 3.2, the authors mentioned that \\\"overly small standard deviation will render high magnitude for the deficiency margin, and therefore fast convergence improbable.\\\" Is there a typo here? Do you mean a smaller deficiency margin? 
If not, can you please provide more details?\\n\\nLastly, it would be great if the authors could complement the theoretical results with some numerical experiments, especially to test the initialization strategies in Section 3.3.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well-written paper, I would vote for acceptance, but I have some concerns as well.\", \"review\": \"This paper studies the convergence of gradient descent on the squared loss of deep linear neural networks. The authors prove a linear convergence rate if (1) the network dimensions are big enough so that the full product can have full rank, (2) the singular values of each weight matrix are approximately the same, (3) the initialized point is \\u201cclose enough\\u201d to the target.\\n\\nFirst of all, this paper is well-written. It reads smoothly, effectively presents the key ideas and implications of the result, and properly answers possible concerns that arise while reading. The improvement over the previous work ([Bartlett et al 18\\u2019]) is quite substantial.\\n\\nDeep linear neural networks are important, and having a good understanding of linear neural networks can provide us with useful insights for understanding the more complex ones, i.e., the nonlinear neural networks. In that regard, I really liked the discussion at the end of Section 3.1. My general opinion of this paper is acceptance, but I also have a number of concerns and questions.\\n\\nMy main concern about the study of GD on linear neural networks is whether we really get any \\u201cbenefit\\u201d or \\u201cacceleration\\u201d from depth, i.e., is GD on linear neural nets any faster than GD on linear models. It\\u2019s been shown that we get acceleration in some cases (e.g., $\\\\ell_p$ regression when $p>2$ [Arora et al. 18\\u2019]), but some other results (e.g., [Shamir 18\\u2019] mentioned in Section 5) show that GD on linear neural nets (when weight matrices are all scalar) suffers an exponential (in depth) increase in convergence time in the near-zero region, due to the vanishing gradient phenomenon. 
From my understanding, this paper circumvents this problem by assuming deficiency margin, because in the setting of [Shamir 18\\u2019], deficiency margin means that the initialized product ($W_{1:N}$) has the same sign as $\\\\Phi$ and is far enough from zero, so we don\\u2019t have to pass through the near-zero region.\\n\\nEven with the deficiency margin assumption, the exponential dependence on depth can also be observed in this paper, if we use independent initialization of each weight matrix. In Claim 3, in order to get the probability 0.49 result, the margin $c$ must be very small (O(1/N^N)) as N goes to infinity, resulting in very small $\\\\delta$ and $\\\\eta$ in Theorem 1, and convergence time $T$ exploding in depth. On the other hand, if we fix $0 < c < 1$, then the probability of satisfying deficiency margin will be smaller and smaller as $N$ increases. Is this \\u201cblow-up in N\\u201d problem due to the fact that the loss is l2? Or am I making false claims? I would like to hear the authors\\u2019 opinion about this.\\n\\nThe paper proposes a balanced initialization scheme that doesn\\u2019t suffer exponential blow up (Procedure 1 and Theorem 2), but even with this, the learning rate must decay to zero at a polynomial rate in N, also resulting in a polynomial increase in convergence time as depth increases. Moreover, this type of initialization scheme (specifically tailored for linear neural networks) is not what people would do in practice; we normally would initialize each layer at random, and may suffer the problems discussed in the above paragraph. That is why I\\u2019d love to hear about the authors\\u2019 future work on layer-wise independent initialization, as noted in the conclusion section.\\n\\nBelow, I\\u2019ll list specific concerns/questions/comments.\\n* In my opinion, the statements about \\u201cnecessity\\u201d of the two key assumptions are too strong, because the authors only provide counterexamples of non-convergence. 
As [Theorem 3, Shamir 18\\u2019] shows (although in the scalar case), even when the assumptions are not satisfied, a convergence rate $O(exp(N) * log(1/\\\\epsilon))$ is possible. It would be interesting future work to clearly delineate the boundary between convergence and non-convergence.\\n\\n* In Thm 2 and Claim 3, what happens if dimension $d_0$ is smaller? What is the reason that you had to restrict it to high dimension? Is it due to high variance with few samples?\\n\\n* In Thm 2, constants $d\\u2019_0$ and $a$ hide the dependence of the result on p, but I would suggest stating the dependence of those parameters on p, and also the dependence on other parameters such as N.\\n\\n* In Section 5, there is a statement \\u201cThis negative result, a theoretical manifestation of the \\u201cvanishing gradient problem\\u201d, is circumvented by balanced initialization.\\u201d Can you elaborate more on that? If my understanding is correct, there is still the $\\\\sigma_min$ multiplier in Eq (9), which means that in near-zero regions, the gradient will still vanish.\\n\\nI appreciate the authors for their efforts, especially on the heavy math in the proof of the main theorem. I would like to hear your comments and/or corrections (especially on my \\u201cT blowing up in N\\u201d claim) and discuss further.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks\", \"review\": \"Summary:\\n \\nThe paper provides a linear-rate convergence analysis of gradient descent to global minima for deep linear neural networks \\u2013 fully-connected neural networks with linear activations and l2 loss. The convergence only works under two necessary assumptions on initialization: \\u201cweight matrices at initialization are approximately balanced\\u201d and \\u201cthe initial loss is smaller than the loss of any rank-deficient solution\\u201d. The result of this work is similar to that of Bartlett et al. 2018, but the difference is that, in Bartlett et al. 2018, they consider a subclass of linear neural networks (linear residual networks \\u2013 a subclass of linear neural networks in which the input, output and all hidden layers have the same dimensions).\", \"comments\": \"This paper focuses on the theoretical aspects of Deep Learning. Yes, theoretical study of gradient-based optimization in deep learning is still open and needs to spread more. I have the following comments and questions for the author(s) and hope to discuss further during the rebuttal period: \\n \\n1) Most well-known deep learning applications use neural networks with non-linear activations (specifically ReLU). Could you please provide any successful applications where linear neural networks achieve better performance than the \\u201cnon-linear\\u201d ones? Yes, more layers may lead to better performance since we have more parameters. However, it is still not clear which one is better between \\u201clinear\\u201d and \\u201cnon-linear\\u201d with the same size of networks. I am not sure if these linear neural networks could generalize well. \\n \\n2) For N=1, the problem should become linear regression with a strongly convex loss, which means that there exists a unique W: y = W*x in order to minimize the loss. 
Hence, if W = W_N*....*W_1, the problem becomes non-convex w.r.t parameters W_N, ...., W_1 but all the minima could be global. Can you please provide some intuitions why the loss function could have saddle points? Also, is not easier to just solve the minimization problem on W?\\n\\n3) Similar with l2 loss, it seems that the problem needs to be restricted on l2 loss. In understand that it could have in some applications. Do you try to think of different loss for example in binary classification problems? \\n \\n4) I wonder about the constant \\u201cc > 0\\u201d in the definition 2 and it would use it to determine the learning rate. Do you think that in order to satisfy the definition 2 for the most cases, constant c would be (arbitrarily) small or may be very close to 0? If so, the convergence rate may be affected in this case. \\n \\n5) The result of Theorem 2 is nice and seems new in term of probabilistic bound. I did not see the similar result in the existing literature for neural networks. \\n \\n6) It would be nice if the author(s) could provide some experiments to verify the theory. I am also curious to know what performance it could achieve for this kind of networks. \\n \\nI would love to discuss with the author(s) during the rebuttal period.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJx7l309Fm | Actor-Attention-Critic for Multi-Agent Reinforcement Learning | [
"Shariq Iqbal",
"Fei Sha"
] | Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems. | [
"multi-agent",
"reinforcement learning",
"attention",
"actor-critic"
] | https://openreview.net/pdf?id=HJx7l309Fm | https://openreview.net/forum?id=HJx7l309Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJexvLvNg4",
"S1gqkJrKTX",
"SkxC3RNF67",
"SygCO0Nt6X",
"S1xzqlrChm",
"rkl9CLxv27",
"B1xTnUUMhm",
"HkeukF12cQ",
"HkgCyD3c9Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1545004631810,
1542176482329,
1542176438113,
1542176374204,
1541456009825,
1540978385983,
1540675253431,
1539205343973,
1539127013651
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1062/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1062/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1062/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1062/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1062/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1062/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1062/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1062/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors propose an approach for a learnt attention mechanism to be used for selecting agents in a multi-agent RL setting. The attention mechanism is learnt by a central critic, and it scales linearly with the number of agents rather than quadratically. There is some novelty in the proposed method, and the authors clearly explain and motivate the approach. However, the empirical evaluation feels quite limited and does not show conclusively that the method is superior to the others. Moreover, the simple empirical results don't give any evidence of how the attention mechanism is working or whether it is truly the attention that is affecting the results. The reviewers were split on their recommendation and did not come to a consensus. The AC feels that the paper is not quite strong enough and encourages the authors to broaden the work with additional experiments and analysis.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for your comments. With respect to your concern over scalability, the need to input the actions and observations of all agents in the value function (i.e. centralized value function) limits scalability only during training time, and it is a necessary measure to reduce the non-stationarity of multi-agent environments, as discussed in previous work [1].\\n\\nWe would also like to re-emphasize the fact that our final trained policies are decentralized and do not require any information exchange between agents. This trait makes our approach (and other centralized-critic/decentralized-policy approaches) useful in situations where one can train in a simulation where communication is less taxing, but deploy in the real world, where communication may be more challenging.\\n\\nWe also compared to other methods demonstrating the better scalability of our approach, cf. Table 2.\\n\\nYour thinking of \\u2018semantically probable\\u2019 exchange of information is interesting. We note that it is possible to compress each agent\\u2019s actions/observations before they are sent to a central critic. Our setup naturally allows for this. Consider a case with high-dimensional image observations. In our approach, each agent needs to embed these observations (along with their actions) before sharing with other agents. In a situation where information exchange between agents is expensive, even during training, we can select a sufficiently small embedding space such that performance and efficiency are balanced. This notion of compressing embeddings prior to sharing across agents does not fit as naturally into the competing methods.\\n\\nOur experiments were especially designed to have two contrasting environments, so that we can illustrate two different aspects of multi-agent RL where we felt like the current approaches have not been able to address at the same time. 
Thus, it is by design that different baselines perform differently on them, as every approach has its own strengths and weaknesses. \\n\\nOur experiments demonstrate that our approach handles both environments well, which none of the baselines is able to do. Our experiments on Cooperative Treasure Collection demonstrate that the general structure of our attention model (even without considering dynamic attention as in our uniform attention baseline) is able to handle large observation spaces (and relatively larger numbers of agents) better than existing approaches which concatenate observations and actions from all agents together. Furthermore, our experiments on Rover-Tower demonstrate that the general model structure alone is not sufficient in all tasks, specifically those with separately coupled rewards for groups of agents, and dynamic attention becomes necessary.\\n\\nWe have added a new section 6.3 to the supplement that includes visualizations of the attention mechanism both over the course of training and within episodes.\\n\\nOur code is available online and a link will be included in the paper once the anonymized review period is over.\\n\\n[1] Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in Neural Information Processing Systems, pp. 6382\\u20136393, 2017.\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for your comments. We apologize for the oversight and have updated our bibliography to reference the appropriate conference publications where applicable.\\n\\nIn the case of DDPG-based methods, we do make a slight modification in order to enable discrete action spaces; however, these modifications were first suggested by the original MADDPG paper (Lowe et al. 2017) in order to enable discrete communication action spaces. Furthermore, it seems that the released code for MADDPG by the original authors uses discrete action spaces by default (https://github.com/openai/multiagent-particle-envs/blob/master/multiagent/environment.py#L29) even for non-communication control. With that being said, we have implemented our method for continuous action spaces and find that it performs competitively with MADDPG on a cooperative task from that paper. We do not expect our approach to significantly outperform their method on their tasks, as those tasks do not necessitate the use of attention (all agents are generally relevant to each agent\\u2019s rewards at every time step). The results can be seen in section 6.4 of the appendix in our revised draft.\\n\\nIn our experiments, we use up to 16 agents. We can further scale up, for example, using some ideas from existing works, including assuming homogenous agents and global rewards which allow for shared critics, etc. Note that, even without those simplifications, our approach is still able to scale better than an approach, MADDPG+SAC, that follows a similar paradigm as ours (Table 2 on page 8).\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for your comments. With regard to the structural choices of the attention model, our decision was based on a survey of attention-based methods used across various applications and their suitability for our problem setting. Our mechanism was designed such that, given a set of independent embeddings, each item in the set can be used to both extract a weighted sum of the other items as well as contribute to the weighted sums that other items extract. When applied to multi-agent value-function approximation, each item can belong to an agent and the separate weighted sums can be used to estimate each agent\\u2019s expected return. Some other choices of attention mechanisms such as RNN-based ones (widely used in NLP), while interesting, do not naturally extend to our setting as our inputs (ie embeddings from agents) do not form a natural temporal order. We have updated our draft to provide more insight into our choices.\\nWe have included a new section 6.3 in the appendix of our revised draft that visualizes the behavior of our attention mechanism, as well as how it evolves over the course of training.\\n\\nWhile our approach does not significantly outperform the best individual baseline in each environment, it consistently performs near the top in all environments --- other methods falter in at least one of the two settings. Our experiments on Cooperative Treasure Collection demonstrate that the general structure of our attention model (even without considering dynamic attention as in our uniform attention baseline) is able to handle large observation spaces (and relatively larger numbers of agents) better than existing approaches which concatenate observations and actions from all agents together. 
Furthermore, our experiments on Rover-Tower demonstrate that the general model structure alone is not sufficient in all tasks, specifically those with separately coupled rewards for groups of agents, and dynamic attention becomes necessary.\"}",
"{\"title\": \"Interesting contribution to multiagent RL\", \"review\": \"The paper considers an actor-critic scheme for multiagent RL, where the critic is specific to each agent and has access to all other agents' embedded observations. The main idea is to use an attention mechanism in the critic that learns to selectively scale the contributions of the other agents.\\n\\nThe paper presents sufficient motivation and background, and the proposed algorithmic implementation seems reasonable. The proposed scheme is compared to two recent algorithms for centralized training of decentralized policies, and shows comparable or better results on two synthetic multiagent problems. \\n\\nI believe that the idea and approach of the paper are interesting and contribute to the multiagent learning literature.\", \"regarding_cons\": [\"The critical structural choices (such as the attention model in section 3.2) are presented without too much justification, discussion of alternatives, etc.\", \"The experiments show the learning results, but do not provide a peak \\\"under the hood\\\" to understand the way attention evolved and contributed to the results.\", \"The experiments show good results compared to existing algorithms, but not impressively so.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting new method, though more thorough experiments are needed\", \"review\": \"This paper introduces a new method for multi-agent reinforcement learning. The proposed algorithm -- which uses shared critics at training time but individual policies at test time -- makes use of a specialised attention mechanism. The benefits include better scalability (as the dependency of the inputs is linear in the number of agents, rather than quadratic), and also being more amenable to diverse reward and action structures than the previous work.\\n\\n---------Quality and clarity---------\\nThe paper is nicely written, and the ideas are developed in a clear fashion, if slightly verbose (the first 3 pages, though informative, might have been condensed a bit to make more room for the new algorithm). The problem is well-motivated and the benefits of the new algorithm are well showcased.\\n\\nOne negative point that does stick out is the bibliography, where papers that have been published for years (e.g. the Adam paper) are still referenced as arXiv preprints.\\n\\n---------Originality and significance----------\\nAlthough attentive mechanisms have been around for a while, their use in this specific setting (learning shared critics for multi-agent RL) is, and yields desirable properties. The new algorithm opens the door for training in more complex environments, with a larger number of agents (although the number is still limited in the presented experiments).\\n\\nThe main issue I do see with the paper is its experimental section. \\nThe two tasks are picked to showcase the benefits of the new approach. This does mean that the competing algorithms have to undergo significant changes (at least in the case of the DDPG-based methods), which takes away from the validity of the comparison. \\n\\nIdeally, there would be at least one other task on which the other algorithms have been trained on by their respective authors. 
As mentioned right before Section 4, MAAC can be used on continuous action spaces at the price of increased computational cost, so this should be doable.\\n\\n\\nOverall, this is a nicely written paper which introduces an interesting new method for multi-agent RL, with promising initial results. A more thorough experimental section with slightly fairer comparisons would increase its quality significantly.\\n\\nPros\\n- clear paper, easy to read\\n- interesting application of attention mechanism to multi-agent RL\\n- promising initial results\\n\\nCons\\n- no comparison to related algorithms on tasks where they have already been evaluated externally\\n- the amount of workers is still quite limited in the experiments\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Simple method, but gives insufficient insight in model behavior and how it could generalize\", \"review\": \"Summary\\n\\nAuthors present a decentralized policy, centralized value function approach (MAAC) to multi-agent learning. They used an attention mechanism over agent policies as an input to a central value function. \\n\\nAuthors compare their approach with COMA (discrete actions and counterfactual (semi-centralized) baseline) and MADDPG (also uses centralized value function and continuous actions)\\n\\nMAAC is evaluated on two 2d cooperative environments, Treasure Collection and Rover Tower. MAAC outperforms baselines on TC, but not on RT. Furthermore, the different baselines perform differently: there is no method that consistently performs well.\\n\\nPro\\n- MAAC is a simple combination of attention and a centralized value function approach.\\n\\nCon\\n- MAAC still requires all observations and actions of all other agents as an input to the value function, which makes this approach not scalable to settings with many agents. \\n- The centralized nature is also semantically improbable, as the observations might be high-dimensional in nature, so exchanging these between agents becomes impractical with complex problems.\\n- MAAC does not consistently outperform baselines, and it is not clear how the stated explanations about the difference in performance apply to other problems. \\n- Authors do not visualize the attention (as is common in previous work involving attention in e.g., NLP). It is unclear how the model actually operates and uses attention during execution.\\n\\nReproducibility\\n- It seems straightforward to implement this method, but I encourage open-sourcing the authors' implementation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for the comment\", \"comment\": \"We thank you for your comment. The approach of utilizing policy gradients in partially observed multi-agent environments has been established in the literature, provided that a centralized critic receives the global state [1] or the local observations of all agents [2]. The paper that you bring up (ACCNet) also appears to follow this paradigm.\\n\\nThe authors of [1] provide a proof (pages 4-5) which shows that the multi-agent policy gradient (w/ a state-dependent baseline) reduces to the standard single agent policy gradient, provided that the combined observation histories of each agent combine to form the global state (Eqn. 15). Our approach makes a similar assumption, such that the combined observations of all agents (o = {o_1, \\u2026, o_N}) represents the global state, and this holds true for the environments that we test in. We apologize for the lack of clarity regarding this subject in the initial submission. We will revise our draft once the rebuttal period opens to reflect this point.\\n\\n[1] Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual Multi-Agent policy gradients. arXiv preprint arXiv:1705.08926, May 2017a\\n\\n[2] Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in Neural Information Processing Systems, pp. 6382\\u20136393, 2017.\"}",
"{\"comment\": \"In this work, the authors propose an actor-critic algorithm for multi-agent POMDP. The algorithm depends on the policy gradient theorem in the POMDP setting. In equation (1) the authors summarize the policy gradient theorem for MDP, however, this result does not hold for POMDP. The policy update step, namely, equation (8) lacks substantiation. It would be great if the authors could discuss more the validity of equation (8).\", \"a_related_paper\": \"\", \"accnet\": \"Actor-Coordinator-Critic Net for \\\"Learning-to-Communicate\\\" with Deep Multi-agent Reinforcement Learning. Mao et. al.\", \"title\": \"Policy Gradient Theorem Holds for POMDP?\"}"
]
} |
|
HklQxnC5tX | Overlapping Community Detection with Graph Neural Networks | [
"Oleksandr Shchur",
"Stephan Günnemann"
] | Community detection in graphs is of central importance in graph mining, machine learning and network science. Detecting overlapping communities is especially challenging, and remains an open problem. Motivated by the success of graph-based deep learning in other graph-related tasks, we study the applicability of this framework for overlapping community detection. We propose a probabilistic model for overlapping community detection based on the graph neural network architecture. Despite its simplicity, our model outperforms the existing approaches in the community recovery task by a large margin. Moreover, due to the inductive formulation, the proposed model is able to perform out-of-sample community detection for nodes that were not present at training time. | [
"community detection",
"deep learning for graphs"
] | https://openreview.net/pdf?id=HklQxnC5tX | https://openreview.net/forum?id=HklQxnC5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkereSXx1E",
"SJg2Eh7CnQ",
"rJlq687527",
"H1lyCKNP3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543677165363,
1541450803951,
1541187265782,
1540995526855
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1061/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1061/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1061/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1061/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper provides an interesting combination of existing techniques (such as GCN and the Bernoulli-Poisson link) to address the problem of overlapping community detection. However, there were concerns about lack of novelty, evaluation metrics, and missing comparisons with previous work. The authors did not provide a response to address these concerns.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting combination of existing techniques, with questions remaining to be answered on modeling choices and experimental evaluations.\"}",
"{\"title\": \"Application of GNN to overlapping community detection\", \"review\": \"The current paper considers the overlapping community detection problem and suggests to use the so-called graph neural networks for its solution.\\n\\nThe approach starts from BigCLAM model and suggests to parametrize factor matrices (or embedding vectors) via neural network with graph adjacency matrix and node attributes as inputs. The obtained algorithm is tested on several datasets and its reported performance is superior to competitors.\\n\\nThis paper basically tries to introduce the dependence between embedding vectors for graph nodes, which recently became de facto standard approach in machine learning for graphs. The paper is very well aligned with recent literature on ML for graphs, which is focused on combining different ideas of deep learning, tailoring them to particular graph problem and reporting results on some datasets. Unfortunately, very rarely interesting new ideas appear in these papers, and current paper is not an exception.\\n\\nI apologize for such a pessimistic view, but I don't see the results significantly interesting for the ICLR community and don't recommend acceptance. Some additional algorithmic/computational/theoretical insights are needed.\", \"i_have_couple_of_minor_issues_to_discuss\": \"1. For the sake of generality, I would recommend to use the general formula instead of particular 3-layer case in equation 3.\\n2. I don't think that it is really appropriate to call 3-layer model a 'deep learning model', I would recommend to just name it 'neural network'\\n\\nAlso, I think that experimentally paper is pretty strong, but it would be nice to see the repository with algorithm code and experiments available.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Application of GCN for overlapping community detection\", \"review\": [\"This paper presents an overlapping community detection method. The idea is to use a graph neural network (namely, the graph convolutional network) with node embeddings constrained to be non-negative. The non-negative embeddings help to learn the community membership of each node (and each node can belong to multiple communities).\", \"The idea is natural, though not novel. The only main novelty, as compared to various other recently proposed graph embedding approaches, lies in making the node embeddings non-negative. Rest of the pieces are fairly standard, including the link functions, such as Bernoulli-Poisson. Therefore the paper is quite thin in technical novelty.\", \"In addition to the limited technical novelty, I have a few other concerns as well, including some on the experimental evaluation:\", \"Real-valued node embeddings obtained from shallow/deep graph embedding methods can be used with *overlapping* versions of k-means. This can be a solid baseline.\", \"The paper relies on subsampling the edges and non-edges to speed up optimization. However, the encoder still seems to use the entire adjacency matrix. If that is not the case, please clarify.\", \"The reported results are only on overlapping community detection. Most of the shallow/deep graph embedding methods can also be used for the link prediction task (many recent papers report such results). It will be nice to provide results on this task.\", \"There has been some recent work on using deep generative models for overlapping community detection with node side information. For example, see \\\"Deep Generative Models for Relational Data with Side Information\\\" (Hu et al., 2017). Interestingly, they too use Bernoulli-Poisson link (but not GCN).\", \"None of the baselines are deep learning methods. As I pointed out, one can use real-valued embeddings from such methods with overlapping k-means (or other overlapping clustering methods). Link-prediction results can also be compared.\", \"In summary, I think the paper lacks both in terms of technical novelty as well as experimental evaluation and therefore doesn't seem to be ready. I would encourage the authors to consider the suggestions above.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting combination of neural nets and network models, but not solid.\", \"review\": \"This paper proposes Deep Overlapping Community detection model (DOC), a graph convolutional network (GCN) based community detection algorithm for network data. The model is a simple combination of GCN and existing framework for community detection. The proposed algorithm is compared to baselines on various datasets, and demonstrated to be accurate in many cases.\\n\\nI think the paper does not deal with one of the most important aspects of network modeling - the degree heterogeneity of nodes. Many works reported that lack of degree corrections would result in bad estimates of community structures [1,2,3]. Probably including the degrees as feature of nodes would be helpful. \\n\\nRegarding the stochastic gradient descent by edge subsampling, I think the authors should mention [4], where the idea of edge subsampling in stochastic gradient descent setting was introduced before this work. Also, it is worth noting that we may lose some important distributional properties in graphs if we naively subsample from it [5]. For instance, sampling from positive and negative pairs to balance the class contribution may distort the sparsity and degree distributions of subsampled graphs. \\n\\nIf we choose to use Bernoulli-Poisson link function, we can reduce the time complexity of likelihood and gradient computation to O(N + E), where N is the number of nodes and E is the number of edges, with the auxiliary variable trick introduced in [6]. In that case we don't really have to worry about subsampling. Why didn't you consider applying this to your model?\\n\\nRegarding the experiments, I think some important baselines are missing [3, 6]. Also, I wonder whether the proposed algorithm would scale to the graphs with more than 100,000 nodes. \\n\\nReferences\\n[1] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. 
Physical Review E, 83(1):016107, 2011.\\n[2] P. K. Gopalan, C. Wang, and D. Blei. Modeling overlapping communities with node popularities. NIPS 2013.\\n[3] A. Todeschini, X. Miscouridou and F. Caron. Exchangeable Random Measures for Sparse and Modular Graphs with Overlapping Communities. CoRR 2016.\\n[4] J. Lee, C. Heakulani, Z. Ghahramani, L. F. James, and S. Choi. Bayesian inference on random simple graphs with power law degree distributions. ICML 2017.\\n[5] P. Orbanz. Subsampling large graphs and invariance in networks. CoRR 2017.\\n[6] M. Zhou. Infinite edge partition models for overlapping community detection and link prediction. AISTATS 2015\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
ByfXe2C5tm | NLProlog: Reasoning with Weak Unification for Natural Language Question Answering | [
"Leon Weber",
"Pasquale Minervini",
"Ulf Leser",
"Tim Rocktäschel"
] | Symbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge. However, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language. Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability. We propose to reap the benefits of both approaches by applying a combination of neural networks and logic programming to natural language question answering. We propose to employ an external, non-differentiable Prolog prover which utilizes a similarity function over pretrained sentence encoders. We fine-tune these representations via Evolution Strategies with the goal of multi-hop reasoning on natural language. This allows us to create a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it complements two very strong baselines – BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al., 2017) – and outperforms both when used in an ensemble. | [
"symbolic reasoning",
"neural networks",
"natural language processing",
"question answering",
"sentence embeddings",
"evolution strategies"
] | https://openreview.net/pdf?id=ByfXe2C5tm | https://openreview.net/forum?id=ByfXe2C5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlP8EEUgN",
"HJlfVSl8C7",
"S1gEDWeLAm",
"rJgkJWxI0Q",
"HkgkdleUAX",
"SJxXSeeUR7",
"BJlraS3Ch7",
"S1x2IuwcnX",
"H1xQHGlYh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545122895067,
1543009578181,
1543008604139,
1543008471442,
1543008359240,
1543008314849,
1541486013253,
1541204051726,
1541108283452
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1060/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1060/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1060/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1060/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1060/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1060/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1060/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1060/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1060/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper combines Prolog-like reasoning with distributional semantics, applied to natural language question answering. Given the importance of combining neural and symbolic techniques, this paper provides an important contribution. Further, the proposed method complements standard QA models as it can be easily combined with them.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) The evaluation consisted primarily of small subsets of existing benchmarks, \\n(2) the reviewers were concerned that the handcrafted rules were introducing domain information into the model, and (3) were unconvinced that the benefits of the proposed approach were actually complementary to existing neural models. \\n\\nThe authors addressed a number of these concerns in the response and their revision. They discussed how OpenIE affects the performance, and other questions the reviewers had. Further, they clarified that the rule templates are really high-level/generic and not \\\"prior knowledge\\\" as the reviewers had initially assumed. The revision also provided more error analysis, and heavily edited the paper for clarity. Although these changes increased the reviewer scores, a critical concern still remains: the evaluation is not performed on the complete question-answering benchmark, but on small subsets of the data, and the benefits are not significant. This makes the evaluation quite weak, and the authors are encouraged to identify appropriate evaluation benchmarks. \\n\\nThere is disagreement in the reviewer scores; even though all of them identified the weak evaluation as a concern, some are more forgiving than others, partly due to the other improvements made to the paper. The AC, however, agrees with reviewer 2 that the empirical results need to be sound for this paper to have an impact, and thus is recommending a rejection.
Please note that the paper was incredibly close to acceptance, but identifying appropriate benchmarks will make the paper much stronger.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Important contribution, but ultimately weak evaluation\"}",
"{\"title\": \"In-depth error analysis, additional experiments, extensive improvements in clarity\", \"comment\": [\"Summary of changes, Nov 23, 2018\", \"We thank all three reviewers for their detailed and insightful feedback. We used it to update our submission by introducing the following changes:\", \"We added an extensive error analysis (Section 5.6) which elucidates the strengths and weaknesses of NLProlog and the neural QA models and provides additional evidence that the approaches are indeed complementary. Additionally, the error analysis revealed that the OpenIE step is a likely bottleneck of NLProlog, which shows a path for future improvement.\", \"We updated Section 5.7 with an additional experiment on bAbI which studies the effect of varying the size of the rule templates, finding that the number and structure of rules have a strong effect on the convergence speed and predictive performance.\", \"We updated various sections to improve the clarity of the paper, especially with regard to the details of the proposed method: we clarified the structure of the employed rule templates in Section 4.3, added Section A.2 to discuss the runtime of the proof search step and applied optimizations, as well as expanded on the initialization of the symbol embeddings in Section 4.1.\"]}",
"{\"title\": \"Hybrid NLProlog-Neural models, OpenIE, and rules\", \"comment\": \"Thank you for your feedback and suggestions. It is great to hear that you found our paper interesting.\\n\\nQ14. In this work, the first step relies completely on an off-the-shelf tool, Open-IE. It would be useful to see whether this step is the bottleneck of such approaches.\\n\\nThank you for this useful suggestion. We performed an in-depth error analysis of NLProlog which revealed that indeed a large portion of errors are due to a failure in the OpenIE step (see Q3). We added this analysis to Section 5.6.\\n\\nQ15. For the ensemble results in Table 1, usually even ensembling the same model trained with different seeds would show improvements, so I'm not completely convinced that BiDAF and NLProlog are complementary - it would be nice to see error analysis here.\\n\\nThis is an important observation and question. Indeed, the error analysis we now added to the paper supports our hypothesis that the two approaches are complementary on our evaluation data (see Section 5.6). The proposed method and neural Q&A models seem to have orthogonal inductive bias, and thus complementary strengths and weaknesses.\\n\\nQ16. Question: What is the size of hand-coded predicates and rules?\\n\\nPlease also see Q4, Q6, and Q9: in all experiments, we use the same set of just six rule templates, since rules with two body atoms are sufficient for expressing arbitrarily complex rules [1].\\n\\n[1] Evans and Grefenstette. Learning Explanatory Rules from Noisy Data. JAIR 2018\\n\\nQ17. What is the coverage of these rules on the datasets, i.e. are there questions unanswerable by the provided rules?\\n\\nFor \\u201ccountry\\u201d questions, the six rule templates in principle were sufficient to learn rules that can answer all problems but four -- see the new error analysis for the reasons why the actual results deviate from this statement.
As stated in the paper, rule induction did not perform well for \\u201cdeveloper\\u201d and \\u201cpublisher\\u201d questions, either because OpenIE was not able to extract relevant facts (in the case of \\u201cpublisher\\u201d), or because of a low proportion of answerable multi-hop questions (in the case of \\u201cdeveloper\\u201d). For more detail, please see Section 5.6. We think that, given the correct rules, the provided rule templates would be sufficient for answering all studied queries. See Q6 for an additional discussion of this issue.\"}",
"{\"title\": \"Datasets and rule annotations\", \"comment\": \"Thank you very much for your feedback!\\n\\n\\nQ8. The data sets are selected subsets of other standard benchmarks, rather than the entire benchmarks, and the test sets are quite small \\n\\nWe agree that a more extensive evaluation would strengthen our work. However, we also note that NLProlog is a technique specifically designed to support multi-hop reasoning (see also Q2 above). As this is a non-standard feature today, most evaluation datasets we are aware of contain no (or only a few) structured predicates which require such capabilities. For all performed evaluations in which our framework was applicable, we observe consistent improvements of NLProlog when ensembled with neural Q&A models. Following Q3 (above) and Q14 (below), we added to the paper a detailed error analysis demonstrating that NLProlog has strengths that are directly complementary to neural Q&A models. \\n\\nQ9. Given the hand-annotated nature of much of the input knowledge (the rule templates), this introduces an important concern that the experimental wins will not be robust in more realistic settings where different knowledge may be required. \\n\\nOur method indeed requires the manual specification of rule templates. However, we actually use the same set of just six rule templates across all tasks (minus the ablation study). Rule instantiation as required by a specific task is performed automatically during the learning phase. We rephrased Section 4.3 to make this difference more clear. \\n\\nWhether the proposed approach would work if multiple query predicates are involved indeed is an open yet very interesting question. We added this thought as future work to Section 6.\\n\\nQ10.
I did not understand how individual symbols, predicates and entities, have embeddings that come from sentence vectors (Section 4.1)\\n\\nWe associate every entity and predicate with an embedding vector, initialised with the Sent2Vec sentence encoder: starting from text, we extract the relevant facts via OpenIE, and encode their predicate and entities using Sent2Vec. We apologise if this was not clear enough and rephrased the description in Section 4.1.\\n\\nQ11. The learning objective in Section 4.2 seems reasonable, but I did not understand how \\\"evolution\\\" was part of the strategy there.\\n\\n\\u201cEvolution Strategies\\u201d is a gradient estimation method proposed in [1], which is commonly also interpreted under an evolutionary computing perspective [2]. In this case, \\u201cEvolution\\u201d stems from the fact that the gradient is a function of a population of sampled model parameters. We rephrased Section 1 to make these issues clearer.\\n\\n[1] Salimans et al., Evolution Strategies as a Scalable Alternative to Reinforcement Learning, 2017\\n[2] Eiben, Agoston E., and James E. Smith. Introduction to evolutionary computing. Vol. 53. Berlin: Springer, 2003.\\n\\nQ12. The example rule template for transitivity isn't actually transitivity unless p_i=p_j for all i,j, I found that a little confusing. \\n\\nThank you for pointing this out\\u2014indeed \\u201ctransitivity\\u201d is not the best term here. We changed it to \\u201cmulti-hop rule\\u201d throughout the paper.\\n\\nQ13. Where are \\\"t-norms\\\" (mentioned at the top of page 6) used? I did not see this.\\n\\nSorry, this was a typo from an older version of the manuscript. While our aggregation functions are t-norms in a mathematical sense [3], we have replaced the mention with \\u201caggregation function\\u201d to be consistent with the rest of the paper.\\n\\n[3] Sessa, Maria I. 2002.
\\u201cApproximate Reasoning by Similarity-Based SLD Resolution.\\u201d Theoretical Computer Science 275 (1): 389\\u2013426.\"}",
"{\"title\": \"Rules and run-time complexity\", \"comment\": \"Q5. What is the run-time/complexity of the exhaustive proof search during training?\\n\\nAs in most logic programming frameworks, the number of candidate proofs essentially grows exponentially with the depth of the proof trees. This is a particular problem in our setting where in principle all predicates match with all others (to a certain degree). However, we do not use an exhaustive proof search but a simple pruning heuristic which disregards all proof steps with a score lower than a given threshold. This threshold is updated dynamically during the search step. We now make this part of our method more clear in the appendix (see Section A.2). \\n\\nQ6. Relatedly, you state that you limit the rule complexity to two body atoms in the rule templates for bAbI. Can you estimate what rule complexity is required in the Wikihop tasks?\\n\\nAs mentioned above, we hypothesized that direct entailment, symmetry, and transitivity are the most important types of reasoning. In our analysis we found that, for answering most questions on the considered WikiHop predicates, a limited number of rules of the form \\u2018p(X,Z) :- p(X,Y), p(Y,Z)\\u2019 is sufficient. This is because the WikiHop dataset was constructed by traversing the graph connecting the entities mentioned in the support texts, a property that can be well exploited by our method.\\n\\nQ7. Minor quibble: Evolutionary learning strategies, such as genetic algorithms, go back a long way. It's strange using only a reference from 2017 to introduce them.\\n\\nThank you for pointing this out. We updated our references to better reflect the long tradition in studying evolutionary learning (See Section 1). Any further suggestions are welcome.\"}",
"{\"title\": \"Evaluation and error analysis\", \"comment\": \"Thanks a million for your detailed feedback and suggestions. We are glad to hear that you found our paper to be clear and this line of work interesting, particularly the fact that NLProlog succeeds where existing neural models fail.\\n\\nQ1. Unfortunately, this is only done for the \\\"country\\\" subset of WikiHop, on which the model was already shown to have the strongest performance. I'd find this more convincing if similar improvements were shown on the other subsets (publisher, developer).\\n\\nExtending our evaluation with further relationships would certainly be a worthwhile addition to our paper. Following your advice, we performed additional experiments on the predicate record_label. In these experiments, OpenIE was not able to extract the relevant facts, so it was not possible to apply our framework. Following Q3 (below), we performed an extensive error analysis to also explain results on \\u201cdeveloper\\u201d and \\u201cpublisher\\u201d questions (see Section 5.6).\\n\\nQ2. On the other hand, it's very positive to see that NLProlog seems to succeed where the neural models fail, and vice versa, so that the two approaches can be combined in an ensemble to achieve state-of-the-art results.\\n\\nThanks for your comment. Indeed, the last two rows of Table 1 show consistently strong results for an ensemble of the proposed logic-based approach and deep neural models. We rephrased Section 5.5 to better explain our ensembling strategy and added a detailed qualitative analysis (see Q3 below) demonstrating that the two approaches have complementary strengths.\\n\\nQ3. I'd find this result more interesting if an error analysis elucidated some characteristics of the examples that each approach does well/poorly on.\\n\\nThank you for this excellent suggestion. We added to the paper an extensive error analysis; see Section 5.6.
The takeaway message is that (up to issues with data quality) the OpenIE component causes the majority of NLProlog errors, while neural Q&A models are robust even when crucial information is missing in the support documents. However, neural Q&A models often also pick up spurious data artifacts which sometimes coincidentally even lead to correct results (see Section 5.6). We show that NLProlog is much less affected by such effects. Furthermore, NLProlog provides proofs for its predictions which make it easy for users to spot errors. \\n\\nQ4. How well-specified must the a priori rule structures be to achieve good performance? Further, how does the number and structure of the rules (a hyperparameter in this work) affect performance?\\n\\nPlease note that we use the same rule templates in all evaluation results - we apologize that this was mentioned only in the appendix in our submission. Actually, the model can learn more complex rules by composing simpler rules [1]. We employ two templates for each of the following three structures \\u2018p(X, Y) :- q(X, Y)\\u2019, \\u2018p(X, Y) :- q(Y, X)\\u2019, and \\u2018p(X,Z) :- q(X,Y), r(Y,Z)\\u2019, because these capture the basic reasoning steps of entailment, symmetry and transitivity (see Q12 regarding the appropriateness of this term). We studied the impact of having more/fewer structures in the ablation study on bAbI and found that these have a significant impact on convergence speed and overall performance. We expand on this in Section 5.7 of the revised paper.\\n\\n[1] Evans and Grefenstette, Learning Explanatory Rules from Noisy Data, 2017\"}",
"{\"title\": \"interesting approach towards combining neural networks with logic reasoning\", \"review\": \"Update:\\nI appreciate the thorough error analysis the authors have done in the revision, which addressed my major previous concerns. I've updated my score accordingly.\\n\\nThis paper presents an approach to combine Prolog-like reasoning with distributional semantics. First, extracted fact triples are unified with (i.e. mapped to) predicates and entities. Next, reasoning is performed with rule templates, where predicates and entities are abstracted. Since the reasoning process is non-differentiable, zero-order optimization is used to fine-tune the predicate / entity embeddings.\\n\\nThe general idea of combining logical reasoning with neural models is quite appealing. A sketch of the algorithm is to first build structured knowledge from the text, then do reasoning over it to answer queries. In this work, the first step relies completely on an off-the-shelf tool, Open-IE. It would be useful to see whether this step is the bottleneck of such approaches. One possibility is to apply the model to knowledge graph reasoning, which would remove any noise introduced from the knowledge extraction step, and solely focus on evaluating reasoning.\\n\\nThe results are a bit restricted, as only a subset of the datasets is evaluated. I suspect part of the reason is that most of the QA datasets which claim to require multi-step reasoning don't really need much reasoning... However, it would be useful to do some simple (perhaps qualitative) analysis on the data quality, and make sure that it indeed tests what it intended to. For the ensemble results in Table 1, usually even ensembling the same model trained with different seeds would show improvements, so I'm not completely convinced that BiDAF and NLProlog are complementary - it would be nice to see error analysis here.\", \"question\": \"What is the size of hand-coded predicates and rules?
What's the coverage of these rules on the datasets, i.e. are there questions unanswerable by the provided rules?\\n\\nOverall, while the results are limited, the approach is interesting, and hopefully will spur more work towards interpretable models with explicit reasoning.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting direction, experiments are unconvincing\", \"review\": \"Updated after reading author revisions:\\nI appreciate the clarifications; the response answered almost all of my small technical questions. That plus the new error analysis increases my opinion about the paper, and I'm no longer concerned that the rule templates are hand-generated given their generality and small number. I am still concerned that we don't actually know how well the methods work, because the test sets are small and the performance differences between the methods (in Table 1) are quite close. I will raise my score one point.\\n\\nThe authors might try to evaluate using k-fold cross-validation with the training set, to obtain more examples for evaluation.\", \"original_review\": \"The paper presents a technique for using Prolog along with neural representations and Open IE to perform reasoning with weak unification.\\n\\nI like the basic direction of trying to combine Prolog with neural models, and the weak unification notion. The approach seems sufficiently novel, and the GRL is a reasonable heuristic.\\n\\nI do, however, have significant concerns about the experiments. The data sets are selected subsets of other standard benchmarks, rather than the entire benchmarks, and the test sets are quite small (e.g., the \\\"developer\\\" column where the NLProlog approach shows some of the larger wins -- when ensembled with previous techniques -- is based on a test set of only 29 examples).
Given the hand-annotated nature of much of the input knowledge (the rule templates), this introduces an important concern that the experimental wins will not be robust in more realistic settings where different knowledge may be required.\\n\\nMinor comments/questions\", \"page_2\": \"\\\"without the need to transforming\\\"\\nI did not understand how individual symbols, predicates and entities, have embeddings that come from sentence vectors (Section 4.1).\\nThe learning objective in Section 4.2 seems reasonable, but I did not understand how \\\"evolution\\\" was part of the strategy there.\\nThe example rule template for transitivity isn't actually transitivity unless p_i=p_j for all i,j; I found that a little confusing.\\nWhere are \\\"t-norms\\\" (mentioned at the top of page 6) used? I did not see this.\\n\\\"candidates entities\\\" -> \\\"candidate entities\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Promising unification of neural representations with learned logical rules\", \"review\": \"\", \"update\": \"Given the authors' rebuttal and the clear improvements to their paper, I've increased my rating of the work.\\n\\n=======================\\n\\nThis paper presents NLProlog, a method that combines logical rules with distributed representations for reasoning on natural language statements. Natural language statements (first converted to logical triples) and templated logical rules are embedded in a vector space using a pretrained sentence encoder. These embedded \\\"symbols\\\" can be compared in vector space, and their similarity used in a theorem prover (Prolog) modified to support weak unification. The theorem prover determines the answer to a natural language query by constructing a proof according to its logical rules.\\n\\nTraining through the non-differentiable theorem prover occurs via an \\\"evolutionary strategy,\\\" which enables the model to fine-tune its sentence encoders and learn domain-specific logic rules directly from text. The authors also propose a Gradual Rule Learning (GRL) algorithm that seems necessary for the optimization process to converge on good solutions.\\n\\nDespite the model's complexity, the paper was fairly clear to me.\\n\\nAlthough the proposed model is a conglomeration of pre-existing parts, the combination is original to my knowledge. The use of Open Information Extraction to transform natural language statements to logical statements, which are amenable to theorem provers, is novel and also circumvents the complicated preprocessing required by previous related works.\\n\\nThe authors evaluate the proposed approach on subsets of the Wikihop dataset and BABI-1k. NLProlog performs competitively with neural models, similarly augmented with Sent2Vec but lacking explicit logical rules, only on the 'country' subset of Wikihop. It does not compete with or clearly outperform these models in general.
As the authors state, it further \\\"struggles to find meaningful rules for the predicates 'developer' and 'publisher'.\\\"\\n\\nNLProlog demonstrates strong performance on a subset of problems labelled unanimously by annotators to require multi-hop reasoning. Unfortunately, this is only done for the \\\"country\\\" subset of Wikihop, on which the model was already shown to have the strongest performance. I'd find this more convincing if similar improvements were shown on the other subsets (publisher, developer).\\n\\nTaking into account also that the BABI subset was used only for ablation, the limited results call into question the significance of the work. It would definitely benefit from more extensive experimental validation. On the other hand, it's very positive to see that NLProlog seems to succeed where the neural models fail, and vice versa, so that the two approaches can be combined in an ensemble to achieve state-of-the-art results. This suggests that the paper's line of research has something to add to the community and should be pursued further. I'd find this result more interesting if an error analysis elucidated some characteristics of the examples that each approach does well/poorly on.\\n\\nI'd like to see more analysis in general that answers questions including:\\n- How reliable is the Open IE system and how does its performance impact the end task?\\n- How well-specified must the a priori rule structures be to achieve good performance? Further, how does the number and structure of the rules (a hyperparameter in this work) affect performance?\\n- What is the run-time/complexity of the exhaustive proof search during training?\\n- Relatedly, you state that you limit the rule complexity to two body atoms in the rule templates for BABI.
Can you estimate what rule complexity is required in the Wikihop tasks?\\n\\nI would like to recommend this work more confidently because it tackles such an important problem and does so in an interesting, well-conceived way. My reluctance arises from the limited experimental validation and analysis. Given more analysis details and experimental evidence from the authors, I'm happy to raise my recommendation.\", \"pros\": [\"the method complements standard deep QA models to achieve state-of-the-art results in an ensemble.\", \"unifying neural representations with logical/symbolic formalisms is an important research direction.\", \"a code release is planned.\"], \"cons\": [\"a very complex model, whose details are occasionally unclear; the algorithms in Appendix A are helpful but they are not in the main text.\", \"the model expresses only a limited subset of first order logic; dynamically changing world states are not supported (yet).\", \"limited experimental validation.\", \"it's good to be able to incorporate prior knowledge, but it seems like it's quite necessary to pre-specify rules (in template form).\"], \"minor_quibble\": \"Evolutionary learning strategies, such as genetic algorithms, go back a long way. It's strange using only a reference from 2017 to introduce them.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkeXehR9t7 | Graph2Seq: Graph to Sequence Learning with Attention-Based Neural Networks | [
"Kun Xu",
"Lingfei Wu",
"Zhiguo Wang",
"Yansong Feng",
"Michael Witbrock",
"Vadim Sheinin"
] | The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks. However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence. To address this challenge, we introduce a general end-to-end graph-to-sequence neural encoder-decoder architecture that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors. Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings. We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs. Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance. | [
"Graph Encoder",
"Graph Decoder",
"Graph2Seq",
"Graph Attention"
] | https://openreview.net/pdf?id=SkeXehR9t7 | https://openreview.net/forum?id=SkeXehR9t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxg5436kE",
"SJeVih7mJN",
"SyxNIcGX1V",
"r1xMb8TaCm",
"BklNzoQaR7",
"rkliiSzaCQ",
"BJehojWpCm",
"rJgfjwl2AX",
"SJl8tDxhCm",
"r1gV0ocjC7",
"BylxnicsAm",
"HJlZDoqo0m",
"r1e4N3kj07",
"HkgbmcIcCm",
"H1ebaFIc0Q",
"B1eTkY89Cm",
"rkgCK_UqAQ",
"rJlfIu89Cm",
"BylkEj0gTm",
"B1x_Q__qhQ",
"HJKBboE3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544565896284,
1543875740289,
1543871052347,
1543521785702,
1543482124428,
1543476643481,
1543474083676,
1543403417992,
1543403389772,
1543379915811,
1543379880252,
1543379800682,
1543334955763,
1543297560742,
1543297464901,
1543297252828,
1543297158009,
1543297098270,
1541626662976,
1541208096515,
1540825408517
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1059/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1059/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths:\\nThe work proposes a novel architecture for graph to sequence learning.\\nThe paper shows improved performance on synthetic transduction tasks and for graph to text generation.\", \"weaknesses\": \"Multiple reviewers felt that the experiments were insufficient to evaluate the novel aspects of the submission relative to prior work. Newer experiments with the proposed aggregation strategy and a different graph representation were not as promising with respect to simple baselines.\", \"points_of_contention\": \"The discussion with the authors and one of the reviewers was particularly contentious.\\nThe title of the paper & sentences within the paper such as \\\"We propose a new attention-based neural networks paradigm to elegantly address graph-to-sequence learning problems\\\" caused significant contention, as this was perceived to discount the importance of prior work on graph-to-sequence problems which led to a perception of the paper \\\"overclaiming\\\" novelty.\", \"consensus\": \"Consensus was not reached, but both the reviewer with the lowest score and one of the reviewers giving a 6 came to the consensus that the experimental evaluation does not yet evaluate the novel aspects of the submission thoroughly enough.\\n\\nDue to the aggregate score and the factors discussed above (and others), the AC recommends rejection; however, this work shows promise and additional experimental work should allow a new set of reviewers to better understand the behaviour and utility of the proposed method.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas, but a more targeted evaluation is needed\"}",
"{\"title\": \"Response to \\\"Response to rebuttal and discussion\\\"\", \"comment\": \"We are sorry to hear that Reviewer 1 did not feel that this is a strong enough submission. However, we would like to clarify some points as follows:\", \"q1\": [\"The paper cannot claim that its main contribution is to propose a general graph-to-seq framework as (a) multiple models falling under the framework already exist and (b) the paper does not make extensive enough comparisons between different graph-to-seq approaches.\"], \"answer\": \"We have discussed that there are not many general \\\"alternative architectures\\\", except two popular Graph-to-Seq models (GCN+RNN or GGS-NN+RNN). The most recent progress (or applications) - two parallel ICLR submissions that reviewer 2 pointed out earlier - could also be counted into these two categories. For GCN and GGS-NN, we have extensive comparisons among our graph encoder and these two models on three tasks. In the future, we would like to apply our Graph2Seq model to more real applications; however, we think this is truly not related to the qualification of the acceptance of a paper.\", \"q2\": [\"As pointed out multiple times by the other reviewers, there is nothing novel about the attention mechanism used by the model, as it is just standard attention applied to a different encoder (to which the attention is agnostic).\"], \"q3\": [\"The proposed graph encoder is novel, but experiments do not clearly disentangle the various design choices which differentiate it from other models. The SQL to Text generation results reported in a comment to reviewer 2 are a step in the right direction, but they are not sufficient to answer all questions about comparing the model to alternative architectures, as raised in detail by reviewer 2.\"]}",
"{\"title\": \"Response to rebuttal and discussion\", \"comment\": [\"I thank the authors for their extensive response to all the reviewers' comments. However, I have to agree with the other reviewers that the current version of the paper is most likely not strong enough to be accepted. To summarize the concerns that I share:\", \"The paper cannot claim that its main contribution is to propose a general graph-to-seq framework as (a) multiple models falling under the framework already exist and (b) the paper does not make extensive enough comparisons between different graph-to-seq approaches.\", \"As pointed out multiple times by the other reviewers, there is nothing novel about the attention mechanism used by the model, as it is just standard attention applied to a different encoder (to which the attention is agnostic).\", \"The proposed graph encoder is novel, but experiments do not clearly disentangle the various design choices which differentiate it from other models. The SQL to Text generation results reported in a comment to reviewer 2 are a step in the right direction, but they are not sufficient to answer all questions about comparing the model to alternative architectures, as raised in detail by reviewer 2.\"]}",
"{\"title\": \"Final Response: Reply to Final conclusions\", \"comment\": \"First, we are deeply grateful for Reviewer 2's time and effort in providing swift and insightful feedback on our responses over the previous several days. Such intensive interaction between reviewers and authors is a beautiful thing that ICLR provides to researchers who take their jobs very seriously.\\n\\nSecond, we would like to provide our final explanations of the reviewer's final arguments to conclude our responses as well.\", \"q1\": \"The \\\"paradigm\\\" is concisely described by\\n (node_reprs, graph_repr) = graph_encoder(graph)\\n seq_decoder(initial_state=graph_repr, memories=node_reprs)\\nThat is present in a substantial number of existing papers, and so I believe the authors cannot claim this as a contribution.\", \"answer\": \"We have explained why we think both reasons that the reviewer listed can be easily addressed. We hope the reviewer will take these final responses into account when giving the final recommendation.\\n\\nThanks again for your time and the great review job!\", \"q2\": \"one of their models (with split forward/backward information and their proposed message aggregation strategy, but with a different graph representation) performs less well than simple baselines. This is a great example of why disentangled experiments are required to judge the value of a set of smaller design choices, and would usually lead me to rate a paper as 5 (marginally below acceptance threshold)\", \"q3\": \"To summarize, I believe the paper should not be published at ICLR (or at any venue), for the following two reasons.\"}",
"{\"title\": \"Final conclusions\", \"comment\": \"This discussion seems to be going nowhere. To summarize, I believe the paper should not be published at ICLR (or at any venue), for the following two reasons:\\n(1) The title and sentences such as \\\"We propose a new attention-based neural networks paradigm to elegantly address graph-to-sequence learning problems\\\" discount the fact that there is a substantial set of prior work handling graph-to-sequence problems in the same way as the authors propose. The \\\"paradigm\\\" is concisely described by\\n (node_reprs, graph_repr) = graph_encoder(graph)\\n seq_decoder(initial_state=graph_repr, memories=node_reprs)\\nThat is present in a substantial number of existing papers, and so I believe the authors cannot claim this as a contribution. This overclaiming in itself is reason for me to reject the paper.\\n\\n(2) The experiments are insufficient to compare the actually novel aspects of this submission with the existing work properly. The new results provided by the authors show that one of their models (with split forward/backward information and their proposed message aggregation strategy, but with a different graph representation) performs less well than simple baselines. This is a great example of why disentangled experiments are required to judge the value of a set of smaller design choices, and would usually lead me to rate a paper as 5 (marginally below acceptance threshold)\\n\\nI do not think that repeating these points will improve anything and will thus not reply to any further comments unless significant new points are raised.\"}",
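[Editor's note] For readers skimming this thread, the two-line "paradigm" quoted in the comment above can be made concrete with a toy, self-contained sketch. Everything here (the averaging message passing, the dot-product attention, the token choice) is an illustrative stand-in, not the submission's or any cited paper's exact model:

```python
import numpy as np

def graph_encoder(adjacency, node_features, hops=2):
    # Toy message passing: each hop mixes a node's state with the
    # average of its neighbors' states (illustrative stand-in only).
    h = node_features.copy()
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    for _ in range(hops):
        h = 0.5 * h + 0.5 * (adjacency @ h) / deg
    node_reprs = h               # per-node vectors: the decoder's attention memories
    graph_repr = h.mean(axis=0)  # pooled graph-level vector: the decoder's initial state
    return node_reprs, graph_repr

def seq_decoder(initial_state, memories, steps=3):
    # Toy attention decoder: at each step, attend over the node memories
    # and emit a "token" (here just the argmax index of the context vector).
    state, outputs = initial_state, []
    for _ in range(steps):
        scores = memories @ state                 # dot-product attention scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                  # softmax over nodes
        context = weights @ memories              # attention-weighted node summary
        state = 0.5 * state + 0.5 * context       # stand-in for an RNN cell update
        outputs.append(int(np.argmax(context)))
    return outputs

adjacency = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph 0-1-2
node_features = np.eye(3)
node_reprs, graph_repr = graph_encoder(adjacency, node_features)
tokens = seq_decoder(initial_state=graph_repr, memories=node_reprs)
```

The point of contention is exactly this interface: the decoder only sees `(initial_state, memories)`, so any graph encoder producing node vectors plugs in unchanged.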
"{\"title\": \"Clarification on designing experimental evaluations\", \"comment\": \"Q1: As discussed in my earlier messages here... The focus on new tasks in the graph2seq setting thus makes it harder to compare to the rich literature on graph encoders.\", \"answer\": \"We feel that the reviewer is very thoughtful but may not have considered that this is a conference paper, which is supposed to focus on reporting a new idea or other new findings. What you require is really good, but with all these results we could almost write a survey paper on the effect of the combinations of various node aggregation methods, graph aggregation methods, and so on for the graph encoder, Graph2Seq, and any neural network on graphs in the literature.\\n\\nWe respectfully ask the reviewer to consider that this is a conference paper, and we would be happy to incorporate your comprehensive experimental suggestions in our future work.\", \"q2\": \"On contribution (a), we have Table 3, which indicates differences between the three aggregation methods; however, this is not compared to the aggregation strategies from the literature (in the GGNN case, summation; in the GCN case, summation weighted by the renormalized adjacency matrix; in the GAT case, attention), no theoretical analysis is provided, and so the value of this contribution relative to the existing literature remains unclear.\", \"q3\": \"On contribution (b), we have Table 3 and the new Table 4. These indicate that using only forward/backward information leads to worse results, but does not compare to the setting used in GGNN and R-GCN, in which different edge types are used for the forward/backward direction, allowing for different message passing functions.\", \"q4\": \"on contribution (c), you have now provided additional information (and thank you for quickly providing these additional results!). I assume that your row on GGS-NN uses the weighted sum aggregation method from Li et al. 2015, but I'm not sure how you compute the graph-level representation for GCN.\", \"q5\": \"Overall, I am just disappointed in the design of your experiments: ....\"}",
"{\"title\": \"Clarification on novelty of the graph2seq architecture\", \"comment\": \"Q1: The VarNaming task from Allamanis et al. takes a graph and produces a sequence of tokens (that make up a variable name) and the model discussed by Allamanis et al. is explicitly referred to as an instance of a \\\"graph2seq architecture\\\" (this is why I remembered the paper in this context actually).\", \"answer\": \"We thank the reviewer for agreeing with our contributions as summarized here, except for the novelty of our attention mechanism. However, we have not seen existing references that use Graph2Seq models in NLP applications employ a sophisticated way to select a subset of graph nodes to apply attention to. Instead, they simply used domain knowledge to apply attention directly to word nodes (in a word sequence), such as for neural machine translation in (Bastings et al., 2017) and (Beck et al., ACL'18), which are just a subset of all nodes in a graph. Compared to these existing works, we think our attention over all graph nodes looks simple but is definitely more general, because it is a simple yet effective way to apply attention without considering the domain knowledge of any underlying application.\", \"q2\": \"Just because no one felt it necessary to explicitly claim that they have a \\\"general graph2seq architecture\\\" instead of a \\\"specific graph2seq architecture\\\" (whatever the difference there is supposed to be), you can not pretend that this idea doesn't already exist...\", \"q3\": \"That is perfectly fine, and a useful contribution, but just as the authors of the first paper using a bidirectional RNN in a seq2seq setting could not claim that their paper is the first to introduce a seq2seq architecture, you cannot claim novelty on the graph2seq architecture.\", \"q4\": \"The core differences to other graph encoder models that you have identified are (a) your message aggregation strategy, (b) the explicit split between forward and backward direction and (c) the computation of the graph embedding from node embeddings. [I continue to discount your claim of novelty on attention over all graph nodes instead of attention over a subset of graph nodes, because the latter is a more complex case than the former]\"}",
"{\"title\": \"On designing experimental evaluations\", \"comment\": \"As discussed in my earlier messages here, I feel that the value of these contributions would require proper evaluation in comparison with existing baselines. My main problem here is that these contributions are of course not restricted to the graph2seq setting, but can be relevant to any setting in which a graph encoder could be used (just as you keep pointing out that you could switch out the graph encoder in your experiments). The focus on new tasks in the graph2seq setting thus makes it harder to compare to the rich literature on graph encoders. Let me discuss in detail what I see in the experiments, and what I would expect.\\n\\nOn contribution (a), we have Table 3, which indicates differences between the three aggregation methods; however, this is not compared to the aggregation strategies from the literature (in the GGNN case, summation; in the GCN case, summation weighted by the renormalized adjacency matrix; in the GAT case, attention), no theoretical analysis is provided, and so the value of this contribution relative to the existing literature remains unclear. Table 3 also suffers from reporting results on a task on synthetic data where node labels play no role, and thus may not be indicative of a general result.\\n\\nOn contribution (b), we have Table 3 and the new Table 4. These indicate that using only forward/backward information leads to worse results, but does not compare to the setting used in GGNN and R-GCN, in which different edge types are used for the forward/backward direction, allowing for different message passing functions. These functions can conceivably learn to use \\\"half\\\" of the hidden dimensions for forward and the rest for backwards information, and so you would expect them to perform similarly or better (as they can adapt to the importance of the directionality). Again, no experiments are provided to compare to these baselines and thus we cannot conclude anything. As for (a), the fact that the task is synthetic and largely label-agnostic additionally adds doubts about the generalizability of these results.\\n\\nFinally, on contribution (c), you have now provided additional information (and thank you for quickly providing these additional results!). I assume that your row on GGS-NN uses the weighted sum aggregation method from Li et al. 2015, but I'm not sure how you compute the graph-level representation for GCN. This is finally an experiment in which the effect of a modeling choice you made can be compared in isolation to the literature.\\n\\n\\nOverall, I am just disappointed in the design of your experiments: To evaluate the effect of different contributions, it would be crucial to (i) evaluate each contribution on its own, and keep everything else fixed, (ii) compare to existing work, not just new ablations of your own work, and (iii) use well-known tasks wherever possible. Your original submission was lacking on all three of these criteria, and your additional experiments are slowly moving towards fixing (i). To be constructive, these are the things that I would expect to see in a good evaluation of your paper:\\n (1) Model variations using sum aggregation (as in GGNN), weighted sum aggregation (as in GCN), mean aggregation, (max)pooling in the node aggregation part, while keeping everything else the same. At least on the WikiSQL data (which at least is not synthetic), or even better, the tasks from Gilmer et al. or any other paper from the literature. This would shed light on the value of contribution (a).\\n (2) Model variations using only forward, only backwards, bidirectional, or both directions with different edge types/separate weights (as in GGNN/R-GCN), while keeping everything else the same. Again, at least on WikiSQL, preferably on a task from the literature.
This would shed light on contribution (b).\\n (3) Model variations using a node-based graph representation, a pooling-based graph representation, or a weighted sum as Li et al. and Gilmer et al. use, while keeping everything else the same (i.e., add one more variation to your updated Table 4). This would shed light on contribution (c).\\n\\nI apologize for not posting these explicit instructions originally. I understand that we are past the end of the rebuttal period, but want to point out that the authors chose to post their response on the last day of the rebuttal period.\"}",
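[Editor's note] The reviewer's suggestion (1) above concerns only how neighbor messages are combined at each node. A minimal sketch of the sum / mean / max variants (an illustrative stand-in, not any cited paper's exact update rule):

```python
import numpy as np

def aggregate_neighbors(adjacency, h, how):
    # Combine each node's neighbor embeddings with one of the variants
    # under discussion: "sum" (GGNN-style), "mean", or element-wise "max".
    out = np.zeros_like(h)
    for v in range(adjacency.shape[0]):
        nbrs = np.nonzero(adjacency[v])[0]
        if len(nbrs) == 0:
            continue
        msgs = h[nbrs]
        if how == "sum":
            out[v] = msgs.sum(axis=0)
        elif how == "mean":
            out[v] = msgs.mean(axis=0)
        elif how == "max":
            out[v] = msgs.max(axis=0)
    return out

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # node 0 linked to 1 and 2
h = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])
sum0 = aggregate_neighbors(adj, h, "sum")[0]    # node 0 sums messages from nodes 1 and 2
mean0 = aggregate_neighbors(adj, h, "mean")[0]
max0 = aggregate_neighbors(adj, h, "max")[0]
```

Each variant would then be dropped into the same encoder while everything else stays fixed, which is exactly the disentangled comparison the reviewer asks for.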
"{\"title\": \"On novelty of the graph2seq architecture\", \"comment\": \"First, as this seems to have been lost on the authors of this submission: The VarNaming task from Allamanis et al. takes a graph and produces a sequence of tokens (that make up a variable name) and the model discussed by Allamanis et al. is explicitly referred to as an instance of a \\\"graph2seq architecture\\\" (this is why I remembered the paper in this context actually).\\n\\nSecond, let me clarify why I brought up very recent submissions as well: If the idea is widespread enough that other papers just use it as a part of another model, you cannot claim that the idea is novel anymore. Just because no one felt it necessary to explicitly claim that they have a \\\"general graph2seq architecture\\\" instead of a \\\"specific graph2seq architecture\\\" (whatever the difference there is supposed to be), you can not pretend that this idea doesn't already exist. The idea of a graph2seq work is an obvious extension of the seq2seq, seq2tree, tree2seq, etc. models in the literature and the list of papers I cited shows that plenty of other authors have already used this idea.\\n\\nThird, your model obviously differs from the models from the literature and in other papers (as you discuss in great detail), but not in the general architecture of a graph encoder and a sequence decoder. That is perfectly fine, and a useful contribution, but just as the authors of the first paper using a bidirectional RNN in a seq2seq setting could not claim that their paper is the first to introduce a seq2seq architecture, you cannot claim novelty on the graph2seq architecture. This claim in the paper is factually wrong and should be removed.\\n\\nIf you truly believe that your novel contribution is the graph2seq architecture, then you should focus the paper on that and I will lower my score substantially again, as I believe that claim to be trivially wrong.
[The AC may then judge if I'm mistaken and you are correct in claiming novelty]\\n\\nBut because you describe other things in your paper that may be interesting, I believe it is important to consider these other claimed contributions in my review. The core differences to other graph encoder models that you have identified are (a) your message aggregation strategy, (b) the explicit split between forward and backward direction and (c) the computation of the graph embedding from node embeddings. [I continue to discount your claim of novelty on attention over all graph nodes instead of attention over a subset of graph nodes, because the latter is a more complex case than the former]\"}",
"{\"title\": \"Clarification on original contributions of the paper: Part III\", \"comment\": \"Q3: The only information we obtain is from the new Table 4, which indicates that the new encoder beats GCN on a new synthetic dataset, but we don't know if this is due to the dataset, the fact that GCN is a weak baseline, or if the newly proposed encoder is actually better. All reviewers have asked for more informative experiments on this question, but the authors have declined to offer any results beyond this new Table 4. I believe that more information on this is crucial for the quality of this paper.\\n\\n---------------------------------------------------------------------------------------------------------------------------\", \"answer\": \"Although we think our results in Tables 1 and 4 and Figure 4 provide enough information to show that our graph encoder is indeed more powerful than the other well-known graph encoder baselines GGS-NN and GCN, we respect Reviewer #2\\u2019s comments and have performed a new set of experiments on the NLG task. We replaced our graph encoder with GCN or GGS-NN while keeping the other components fixed, such as the graph embedding scheme and the attention mechanism. The experimental results are shown below, and we will incorporate these results in the final version of the paper.\\nWe would like to emphasize that our Graph2Seq is a general learning framework and is highly extensible, where its two building blocks, the graph encoder and the sequence decoder, can be replaced by other more advanced models.
\\n\\n\\t\\t \\t \\t \\t\\t\\n\\t\\t BLEU-4\\t\\t\\n\\t\\t\\t\\t\\t\\nSeq2Seq 20.91\\nSeq2Seq + Copy 24.12\\nTree2Seq 26.67\\nGCN 35.99\\nGGS-NN 35.53\\nGraph2Seq-NGE 34.28\\nGraph2Seq-PGE 38.97\\n\\t\\t\\t\\t\\n\\n---------------------------------------------------------------------------------------------------------------------------\", \"final_remark\": \"we hope our replies clarify the reviewer\\u2019 concerns and are helpful in making the final recommendation.\"}",
"{\"title\": \"Clarification on original contributions of the paper: Part II\", \"comment\": \"2) https://arxiv.org/abs/1711.00740 (Allamanis et al., ICLR'18)\\nAllamanis et al. (2018) presented an application of learning to represent programs with graphs using the existing GGS-NN model of Li et al. (2015). However, we did not find any new model for Graph-to-Sequence learning mentioned in this paper. We are not sure why the reviewer brought this recent work up to argue against the novelty of our Graph2Seq model. Instead, our Graph2Seq model can actually be applied to this application as well. \\n\\n3) http://aclweb.org/anthology/P18-1026 (Beck et al., ACL'18)\\nWe thank the reviewer for pointing out Beck et al. (2018) to us. In this work, Beck et al. proposed a model similar to the one proposed by Bastings et al. (2017). The main difference between these two works is that Beck et al. (2018) used a variant of GGS-NN proposed by Li et al. (2015) as the graph encoder, while Bastings et al. (2017) used a variant of GCN proposed by Kipf & Welling (2016). As we discussed the differences between our Graph2Seq model and Bastings et al. (2017) above, similar arguments apply to the differences between our Graph2Seq model and Beck et al. (2018). So we refer the reviewer and other readers to the detailed differences above in 1). In short, since both Bastings et al. (2017) and Beck et al. (2018) were proposed mainly for attacking neural machine translation problems, many of their model design choices are tailored for these tasks. In contrast, our Graph2Seq model is independent of underlying applications and thus a truly general end-to-end learning framework for graph-to-sequence problems. \\n\\n4) https://openreview.net/forum?id=H1ersoRqtm (Structured Neural Summarization, ICLR'19 submission)\\nThis work presented an improved Seq2Seq model for the neural summarization task by leveraging GGS-NN from Li et al. (2015).
Since this is another NLP application, they also leverage a Bi-LSTM to first obtain initial word representations and then feed them to GGS-NN. This work is very similar to the work proposed by Beck et al. (ACL'18), since both models used GGS-NN as the encoder and an LSTM as the decoder. However, as we mentioned before, this model is very different from our Graph2Seq model in terms of both the graph encoder and the attention mechanism. All previous arguments about the differences above can also be applied here. \\n\\n5) https://openreview.net/forum?id=B1fA3oActQ (GraphSeq2Seq: Graph-Sequence-to-Sequence for Neural Machine Translation, ICLR'19 submission)\\nThis work presented a GraphSeq2Seq model dedicated to neural machine translation. Similar to previous works (Bastings et al., 2017; Beck et al., 2018), they utilize the dependency tree of the sentence sequence, leveraging the model of Gildea et al. (2018). However, instead of following the Bi-LSTM-GNN order for the graph encoder, they choose the opposite order, GNN-Bi-LSTM. It is easy to see that this is a very specific graph encoder design choice dedicated to the neural machine translation task.
In contrast, our Graph2Seq model is very different from this work with respect to both the graph encoder and the attention mechanism, since we aim to design a Graph2Seq model that is application independent.\", \"q2\": \"the chosen baselines (GGS-NN and GCN) are the earliest representatives of (deep) graph message passing models, and more recent work on graph encoders from the last 3 years has been ignored.\\n\\n---------------------------------------------------------------------------------------------------------------------------\", \"answer\": \"As we discussed for the different lines of neural networks on graphs in the Related Work section, GGS-NN (proposed by Li et al. (2015)) has been the state-of-the-art model in the line of graph recurrent networks, and GCN (proposed by Kipf & Welling (2016)) has been the standard model in the line of graph convolutional networks. There are some variants of these two models for particular applications, but we have not found well-recognized better models in either line so far. Among the recent references the reviewer has pointed out, most simply adopted one of these models as their graph encoder, which might demonstrate their effectiveness over other models as well. Therefore, we think our chosen baselines GGS-NN and GCN are appropriate. Otherwise, we look forward to hearing about other state-of-the-art models that Reviewer #2 thinks are better.\"}",
"{\"title\": \"Clarification on original contributions of the paper: Part I\", \"comment\": \"We first thank the reviewer for the swift reply to our newly posted rebuttal. Reviewer #2 has a good memory to point out these references without searching for our paper topic. However, we would like to point out that Reviewer #2 is a little harsh on our submission in questioning the novelty of our paper. We are very surprised to see that there are two parallel ICLR submissions that Reviewer #2 pointed out against our submission. We also want to remind Reviewer #2 that the second and third references you pointed out are actually contemporaneous with ours (our work was posted to arXiv even earlier than these very recently accepted papers). Despite these facts, we still want to explain in detail the key differences between our Graph2Seq model and the references Reviewer #2 mentioned.\", \"q1\": \"Regarding (1), I can list the following papers from memory (to avoid searching for something which may reveal the identity of the authors) that use graph2seq architectures:\", \"https\": \"//openreview.net/forum?id=B1fA3oActQ (ICLR'19 submission)\\n\\nSome of these works use attention over graph-generated embeddings (e.g., the first and oldest on the list), and I am sure that substantially more such works exist, as the idea is absolutely straightforward. Hence, I do not believe that you can claim novelty on the graph2seq structure, or the attention mechanism over graph-generated node embeddings. Hence, the claim to the originality of (1) should be removed from the paper.\\n\\n---------------------------------------------------------------------------------------------------------------------------\", \"http\": \"//aclweb.org/anthology/P18-1026 (ACL'18)\", \"answer\": \"We will explain the differences between our proposed Graph2Seq model and existing works as follows.
More importantly, we explain why our Graph2Seq model is a novel, general end-to-end neural network for learning the mapping between any graph inputs and sequence outputs, independent of underlying applications.\\n\\n1) https://arxiv.org/abs/1704.04675 (Bastings et al., EMNLP'17)\\nWe have already discussed the differences between our Graph2Seq model and Bastings's model in the response to Q3 of Reviewer #1. To make it easy to follow, we have rephrased our previous arguments here:\\n\\nWe noticed that Bastings et al. (2017) utilized GCN for improving the encoder of the neural machine translation system, as we discussed in the related work. However, we would like to point out several major differences between their model and our Graph2Seq model in terms of model architecture:\\n\\na) First of all, Bastings et al. (2017) do not claim that they developed a general Graph-to-Sequence learning model. Instead, they claimed that \\\"GCNs use predicted syntactic dependency trees of source sentences to produce representations of words (i.e. hidden states of the encoder) that are sensitive to their syntactic neighborhoods\\\". In other words, they just used a version of GCNs to enhance the sequence encoder to better capture syntactic information by taking into account syntactic dependency trees together with the original sentence sequence, SOLELY for neural machine translation. Therefore, many of their choices are centered around how to design a specific graph encoder for the original Seq2Seq model. These particular choices are listed in the subsequent items. \\nb) Since Bastings\\u2019s encoder is built on GCN, which itself is derived from spectral graph convolutional neural networks, their model can only be used in transductive settings. In contrast, our graph encoder can be used in both transductive and inductive settings. \\nc) Bastings's GCNs sit on top of CNN or LSTM layers over the input sentence, while our graph encoder initializes all node embeddings as random vectors.
\nd) Although Bastings\\u2019s encoder takes into account both incoming and outgoing edges as well as the edge labels, they only compute a single node embedding using the information from both directions. This is quite different from our bidirectional aggregation strategies, which are inspired by the bi-LSTM architecture, where we generate the representation of a node for each direction (forward or backward) and then concatenate them. Intuitively, this architecture can explicitly represent the context of a node, i.e., the forward context representation and the backward context representation. \\ne) Bastings et al. (2017) used the same attention-based decoder as Bahdanau et al. (2015), while we design an attention-based decoder over the graph node embeddings. In other words, Bastings et al. (2017) used domain knowledge to pre-select attention to apply only to the original words in a sentence, completely ignoring the other nodes in the graph. Therefore, our attention mechanism is independent of the underlying specific tasks and generally applicable to different tasks.\"}",
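[Editor's note] The bidirectional aggregation described in point d) above can be sketched in a few lines. This is a linear, one-hop stand-in for the actual aggregator (the names and the use of a plain matrix product are illustrative assumptions):

```python
import numpy as np

def bidirectional_node_reprs(adjacency, h):
    # Aggregate along forward edges and along reversed edges separately,
    # then concatenate the two context vectors per node, analogously to
    # concatenating the two directions of a bi-LSTM.
    fwd = adjacency @ h    # forward-direction neighbor context
    bwd = adjacency.T @ h  # backward-direction neighbor context
    return np.concatenate([fwd, bwd], axis=1)

adj = np.array([[0.0, 1.0], [0.0, 0.0]])  # single directed edge 0 -> 1
h = np.array([[1.0, 2.0], [3.0, 4.0]])
reprs = bidirectional_node_reprs(adj, h)  # per-node dimension doubles
```

Node 0 sees node 1 only through the forward half, and node 1 sees node 0 only through the backward half, which is the explicit direction split under debate.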
"{\"title\": \"Still no clarity on original contributions of the paper\", \"comment\": \"Thank you for your clarifications. However, it remains unclear to me what the authors are claiming as the contribution of this paper. The introduction lists three contributions: (1) Attention-based graph-to-sequence learning; (2) a new bi-directional graph encoder with new graph embedding techniques; (3) experiments that show the value of these contributions.\\n\\nRegarding (1), I can list the following papers from memory (to avoid searching for something which may reveal the identity of the authors) that use graph2seq architectures:\", \"https\": \"//openreview.net/forum?id=B1fA3oActQ (ICLR'19 submission)\\n\\nSome of these works use attention over graph-generated embeddings (e.g., the first and oldest on the list), and I am sure that substantially more such works exist, as the idea is absolutely straightforward. Hence, I do not believe that you can claim novelty on the graph2seq structure, or the attention mechanism over graph-generated node embeddings.\\n\\nHence, the claim to the originality of (1) should be removed from the paper.\\n\\n\\nNow, if contribution (2), the graph encoder, is the remaining contribution of the paper, then there are many baselines from the literature, on better-known datasets, which the authors could compare their work to. This was not done (neither in the original submission nor in the revisions), and hence it remains unclear what the value of this contribution is. Indeed, the chosen baselines (GGS-NN and GCN) are the earliest representatives of (deep) graph message passing models, and more recent work on graph encoders from the last 3 years has been ignored. \\n\\nWhile the additional experiments in the second revision include valuable ablations, the lack of a comparison to current state of the art baselines makes it hard to judge if (2) is indeed an improvement in the construction of graph encoders.
The only information we obtain is from the new Table 4, which indicates that the new encoder beats GCN on a new synthetic dataset, but we don't know if this is due to the dataset, the fact that GCN is a weak baseline, or if the newly proposed encoder is actually better. All reviewers have asked for more informative experiments on this question, but the authors have declined to offer any results beyond this new Table 4. I believe that more information on this is crucial for the quality of this paper.\\n\\n\\nOverall, I continue to believe that the paper in its current form should not be accepted as neither the text nor the experimental results clearly articulate what conclusions the reader should take away: The graph2seq idea is not a new one (with or without attention); and the differences in the graph encoder are not compared sufficiently with existing graph encoders to allow any conclusions. However, I have slightly raised my rating to reflect the additional experiments.\", \"http\": \"//aclweb.org/anthology/P18-1026 (ACL'18)\"}",
"{\"title\": \"Response to Review #2: Part 2\", \"comment\": \"Q5: (Sect. 3.3) This is a standard attention-based decoder; the fact that the memories come from a graph doesn't change anything fundamental.\", \"answer\": \"Although this is an interesting suggestion, this is slightly out of the scope of this paper. First of all, our Graph2Seq model is proposed to serve as a generalized Seq2Seq model for graph inputs. On the NLG task, we have already demonstrated the superior performance of Graph2Seq over the Seq2Seq and Tree2Seq models. Second, in the previous two tasks (in Table 1), we also demonstrated the advantages of our Graph2Seq over GGS-NN and GCN. Finally, as we mentioned in the Introduction, \\u201cGraph2Seq is simple yet general and is highly extensible where its two building blocks, graph encoder and sequence decoder, can be replaced by other models\\u201d. We have released our code and data, and we would be happy to see more researchers and practitioners using/adopting our Graph2Seq model for different tasks.\", \"q6\": \"The experiments are not very informative, as simple baselines already reach >95% accuracy on the chosen tasks.\", \"q7\": \"The most notable difference between GGS-NNs and this work seems to be the attention-based decoder, but that is not evaluated explicitly.\", \"q8\": \"Experimental results for either GGS-NN with an attentional decoder, or their model without an attentional decoder, to check if the reported gains come from that. The final paragraph in Sect. 4 seems to indicate that the attention mechanism is the core enabler of the (small) experimental gains on the baselines.\", \"q9\": \"The results of the GCN/GG-NN models (i.e., just as an encoder) with their decoder on the NLG task.\"}",
"{\"title\": \"Response to Review #2: Part 1\", \"comment\": \"We first would like to thank the referees for their very careful reading, for identifying subpar language, typos, and discrepancies in text, and for asking questions that will help us significantly improve the presentation.\", \"q1\": \"The submission discusses a graph2seq architecture that combines a graph encoder that mixes GGNN and GCN components with an attentional sequence encoder.\", \"answer\": \"we noticed that in (Li et al., 2015) and (Gilmer et al., 2017) they employed some soft attention based weighted node embedding to compute the graph-level embedding. This is very different from two graph embeddings we exploit in Sec 3.2. In particular, we have explored two graph embedding schemes:\\n1) Pooling-based graph embedding: we fed the node embeddings to a fully-connected neural network and then applied an pooling method (i.g. max-pooling, min-pooling, and average pooling) element-wise. This is different from weighted node embedding using soft attention. \\n2) Node-based graph embedding: we add on supernode into the input graph and all other nodes in the graph direct to this super node. Then the graph embedding can be obtained by aggregating the embeddings of the neighbor nodes. This approach has been discussed in the original GNN work (Scarselli et al., 2009) but with different aggregation approach.\", \"q2\": \"The resulting model is evaluated on three very simple tasks, showing small improvements over baselines.\", \"q3\": \"(Sect. 3.1) The separation of forward/backward edges was already present in the (repeatedly cited) Li et al 2015 paper on GGNN (and in Schlichtkrull et al 2017 for GCN). 
The state update mechanism (a FC layer of the concatenation of old state / incoming messages) seems to be somewhere between a gated unit (as in GGNN) and the \\\"add self-loops to all nodes\\\" trick used in GCN; but no comparison is provided with these existing baselines.\", \"q4\": \"(Sect 3.2) The discussed graph aggregation mechanism are those proposed in Li et al and Gilmer et al; no comparison to these baselines is provided.\"}",
"{\"title\": \"Response to Review #1: Interesting paper\", \"comment\": \"We first would like to thank the referees for their very careful reading, for identifying subpar language, typos, and discrepancies in text, and for asking questions that will help us significantly improve the presentation such as motivation and organization.\", \"q1\": \"Novel architecture for graph to sequence learning\\u2026 Transduction with structured inputs such as graphs is still an under-explored area, so this paper makes a valuable contribution in that direction...\\n\\nWe are very grateful for the kind comments of reviewers #1, in particular for your recognition of the key contributions of the paper.\", \"q2\": \"Experiments could provide more insight into model architecture design and the strengths and weaknesses of the model on non-synthetic data.\", \"answer\": \"we hope that our previous responses have helped better explain the novelty of our Graph2Seq model architecture over existing works. According to your and reviewer #3\\u2019s comments, we have rephrased our key contributions of our model, that is, a novel graph encoder to learn a bi-directional node embedding for directed and undirected graphs with node attributes by employing various aggregation strategies, and to learn graph-level embedding by exploiting two different graph embedding techniques. In addition, to the best of knowledge, our attention mechanism to learn the alignments between nodes and sequence elements to better cope with larger graphs is also proposed for the first time.\", \"q3\": \"The model is relatively similar to the architecture proposed by Bastings et al (2017)\", \"q4\": \"However the paper could make difference between these architectures clearer, and provide more insight into whether different graph encoder architectures might be more suited to graphs with different structural properties.\", \"q5\": \"However, very little insight is provided into this result. 
It would be interesting to apply this model to established NLG tasks such as AMR to text generation.\", \"q6\": \"Overall, this is an interesting paper, and I\\u2019d be fine with it being accepted. However, the modeling contribution is relatively limited and it feels like for this to be a really strong contribution more insight into the graph encoder design, or more applications to real tasks and insight into the model\\u2019s performance on these tasks is required.\"}",
"{\"title\": \"Response to Review #3: Part 2\", \"comment\": \"Q5: The two Graph Embedding methods are also well presented, however, I didn\\u2019t see them in experiments. Actually, it isn\\u2019t clear at all if these are even used since the decoder is attending over node embeddings, not graph embedding\\u2026 Could benefit a little more explanation\", \"answer\": \"we appreciated your careful reading and have fixed all of them based on your comments.\", \"q6\": \"The change of baselines between table 1 for the first two tasks and table 2 for the third task is not explained and thus confusing.\", \"q7\": \"Better explanation of \\u201cbi-directional\\u201d node embeddings\", \"q8\": \"\\u201cImpact of Attention Mechanism\\u201d\", \"q9\": \"Other minor notes.\"}",
"{\"title\": \"Response to Review #3: Part 1\", \"comment\": \"We first would like to thank the referees for their very careful reading, for identifying subpar language, typos, and discrepancies in text, and for asking questions that will help us significantly improve the presentation such as motivation and organization.\\n\\nWe are very grateful for the kind comments of reviewers #3, in particular for your recognition of the key contributions of the paper.\", \"q1\": \"The paper could benefit a little more motivation: motivated applications and creteria to select datasets\", \"answer\": \"Yes, we fully agree with you that the self-attention scheme proposed in the work of graph attention networks can be combined with our graph encoder as well. As we mentioned in the Introduction, \\u201cGraph2Seq is simple yet general and is highly extensible where its two building blocks, graph encoder, and sequence decoder, can be replaced by other models\\u201d.\", \"q2\": \"Rephrase the novelty argument \\u201cnovel attention mechanism\\u201d\", \"q3\": \"The novelty added by this paper is the \\u201cbi-edge-direction\\u201c aggregation technique with the exploration of various pooling techniques. This could be emphasized more.\", \"q4\": \"The Related Work section could mention Graph Attention Networks (https://arxiv.org/abs/1710.10903) as an alternative to the node aggregation strategy.\"}",
"{\"title\": \"Interesting work but lacking some organization\", \"review\": \"This work proposes an end-to-end graph encoder to sequence decoder model with an attention mechanism in between.\\nPros (+) :\\n+ Overall, the paper provides a good first step towards flexible end-to-end graph-to-seq models.\\n+ Experiments show promising results for the model to be tested in further domains.\\nCons (-) :\\n- The paper would benefit more motivation and organization.\\n\\nFurther details below (+ for pros / ~ for suggestions / - for cons):\", \"the_paper_could_benefit_a_little_more_motivation\": [\"Mentioning a few tasks in the introduction may not be enough. Explaining why these tasks are important may help. What is the greater problem the authors are trying to solve?\", \"Same thing in the experiments, not well motivated, why these three? What characteristics are the authors trying to analyze with each of these tasks?\"], \"rephrase_the_novelty_argument\": \"- The authors argue to present a \\u201cnovel attention mechanism\\u201d but the attention mechanism used is not new (Bahdanau 2014 a & b). The fact that it is applied between a sequence decoder and graph node embeddings makes the paper interesting but maybe not novel.\\n~ The novelty added by this paper is the \\u201cbi-edge-direction\\u201c aggregation technique with the exploration of various pooling techniques. This could be emphasized more.\", \"previous_work\": \"~ The Related Work section could mention Graph Attention Networks (https://arxiv.org/abs/1710.10903) as an alternative to the node aggregation strategy.\", \"aggregation_variations\": \"+ The exploration between the three aggregator architectures is well presented and well reported in experiments.\\n~ The two Graph Embedding methods are also well presented, however, I didn\\u2019t see them in experiments. 
Actually, it isn\\u2019t clear at all if these are even used since the decoder is attending over node embeddings, not graph embedding\\u2026 Could benefit a little more explanation\", \"experiments\": [\"Experiments show some improvement on the proposed tasks compared to a few baselines.\", \"The change of baselines between table 1 for the first two tasks and table 2 for the third task is not explained and thus confusing.\", \"~ There are multiple references to the advantage of using \\u201cbi-directional\\u201d node embeddings, but it is not clear from the description of each task where the edge direction comes from. A better explanation of each task could help.\"], \"results\": [\"Page 9, the \\u201cImpact of Attention Mechanism\\u201d is discussed but no experimental result is shown to support these claims.\"], \"some_editing_notes\": \"(1) Page 1, in the intro, when saying \\u201cseq2seq are excellent for NMT, NLG, Speech Reco, and drug discovery\\u201d: this last example breaks the logical structure of the sentence because it has nothing to do with NLP.\\n(2) Page 1, in the intro, when saying that \\u201c<...> a network can only be applied to sequential inputs\\u201d: replace network by seq2seq models to be exact.\\n(3) Typo on page 3, in paragraph \\u201cNeural Networks on Graphs\\u201d, on 8th line \\u201cusig\\u201d -> \\u201cusing\\u201d\\n(4) Page 3, in paragraph \\u201cNeural Networks on Graphs\\u201d, the following sentence: \\u201cAn extension of GCN can be shown to be mathematically related to one variant of our graph encoder on undirected graphs.\\u201d is missing some information, like a reference, or a proof in Appendix, or something else\\u2026\\n(5) Page 9, the last section of the \\u201cImpact of Hop Size\\u201d paragraph talks about the impact of the attention strategy. 
This should be moved to the next paragraph which discusses attention.\\n(6) Some references are duplicates:\\n|_ Hamilton 2017 a & c\\n|_ Bahdanau 2014 a & b\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper\", \"review\": \"This paper proposes a graph to sequence transducer consisting of a graph encoder and a RNN with attention decoder.\", \"strengths\": [\"Novel architecture for graph to sequence learning.\", \"Improved performance on synthetic transduction tasks and graph to text generation.\"], \"weaknesses\": \"- Experiments could provide more insight into model architecture design and the strengths and weaknesses of the model on non-synthetic data. \\n\\nTransduction with structured inputs such as graphs is still an under-explored area, so this paper makes a valuable contribution in that direction. Previous work has mostly focused on learning graph embeddings producing outputs. This paper extends the encoder proposed by Hamilton et al (2017a) by modelling edge direction through learning \\u201cforward\\u201d and \\u201cbackward\\u201d representations of nodes. Node embeddings are pooled to a form a graph embedding to initialize the decoder, which is a standard RNN with attention over the node embeddings. \\n\\nThe model is relatively similar to the architecture proposed by Bastings et al (2017) that uses a graph convolutional encoder, although the details of the graph node embedding computation differs. Although this model is presented in a more general framework, that model also accounted for edge directionality (as well as edge labels, which this model do not support). \\n\\nThis paper does compare the proposed model with graph convolutional networks (GCNs) as encoder experimentally, finding that the proposed approach performs better on shortest directed path tasks. However the paper could make difference between these architectures clearer, and provide more insight into whether different graph encoder architectures might be more suited to graphs with different structural properties. 
\\n\\nThe model obtains strong performance on the somewhat artificial bAbI and Shortest path tasks, while the strongest result is probably that of strong improvement over the baselines in SQL to text generation. However, very little insight is provided into this result. It would be interesting to apply this model to established NLG tasks such as AMR to text generation. \\n\\nOverall, this is an interesting paper, and I\\u2019d be fine with it being accepted. However, the modelling contribution is relatively limited and it feels like for this to be a really strong contribution more insight into the graph encoder design, or more applications to real tasks and insight into the model\\u2019s performance on these tasks is required.\", \"editing_notes\": \"Hamilton et al 2017a and 2017c is the same paper.\", \"in_some_cases_the_citation_format_is_used_incorrectly\": \"when the citation form part of the sentence, the citation should be inline. E.g. (p3) introduced by (Bruna et al., 2013) -> introduced by Bruna et al. (2013).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Weak increment on graph to sequence tasks\", \"review\": [\"The submission discusses a graph2seq architecture that combines a graph encoder that mixes GGNN and GCN components with an attentional sequence encoder. The resulting model is evaluated on three very simple tasks, showing small improvements over baselines.\", \"I'm not entirely sure what the contribution of this paper is supposed to be. The technical novelty seems to be limited to new notation for existing work:\", \"(Sect. 3.1) The separation of forward/backward edges was already present in the (repeatedly cited) Li et al 2015 paper on GGNN (and in Schlichtkrull et al 2017 for GCN). The state update mechanism (a FC layer of the concatenation of old state / incoming messages) seems to be somewhere between a gated unit (as in GGNN) and the \\\"add self-loops to all nodes\\\" trick used in GCN; but no comparison is provided with these existing baselines.\", \"(Sect 3.2) The discussed graph aggregation mechanism are those proposed in Li et al and Gilmer et al; no comparison to these baselines is provided.\", \"(Sect. 3.3) This is a standard attention-based decoder; the fact that the memories come from a graph doesn't change anything fundamental.\", \"The experiments are not very informative, as simple baselines already reach >95% accuracy on the chosen tasks. The most notable difference between GGS-NNs and this work seems to be the attention-based decoder, but that is not evaluated explicitly. For the rebuttal phase, I would like to ask the authors to provide the following:\", \"Experimental results for either GGS-NN with an attentional decoder, or their model without an attentional decoder, to check if the reported gains come from that. The final paragraph in Sect. 
4 seems to indicate that the attention mechanism is the core enabler of the (small) experimental gains on the baselines.\", \"The results of the GCN/GG-NN models (i.e., just as an encoder) with their decoder on the NLG task.\", \"More precise definition of what they feel the contribution of this paper is, taking into account my comments from above.\", \"Overall, I do not think that the paper in its current state merits publication at ICLR.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SkxXg2C5FX | Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors | [
"Vitalii Zhelezniak",
"Aleksandar Savkov",
"April Shen",
"Francesco Moramarco",
"Jack Flann",
"Nils Y. Hammerla"
] | Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks. Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art results on standard STS benchmarks. Inspired by these insights, we push the limits of word embeddings even further. We propose a novel fuzzy bag-of-words (FBoW) representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors. We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity. Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair. This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity. | [
"word vectors",
"sentence representations",
"distributed representations",
"fuzzy sets",
"bag-of-words",
"unsupervised learning",
"word vector compositionality",
"max-pooling",
"Jaccard index"
] | https://openreview.net/pdf?id=SkxXg2C5FX | https://openreview.net/forum?id=SkxXg2C5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byeb2yZ8fV",
"r1lgmDCllV",
"BklqRMCkeE",
"ryl7fx3iyV",
"Bkg3mlksRm",
"HkxPI1yi0X",
"BkxkUCC9CQ",
"BJlNKpAcCm",
"SkxAJTC5R7",
"SylUlhAqCX",
"rke4o1cOTX",
"ByeCpELDTX",
"HJx3gDVvpX",
"BygtQLlm6Q",
"rylBW7K1pX",
"S1lC2zgqhQ",
"SyeN7H9S3m",
"rygwTXlCiX",
"SJgsHga3i7",
"r1e6Hyfos7"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1547206568732,
1544771351746,
1544704721707,
1544433675209,
1543331875745,
1543331663352,
1543331398957,
1543331195601,
1543331045880,
1543330797551,
1542131612154,
1542051013708,
1542043379675,
1541764640600,
1541538556592,
1541173942286,
1540887835594,
1540387774796,
1540309058932,
1540198213347
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1058/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1058/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1058/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Camera ready version\", \"comment\": \"Dear Area Chair, Reviewers and Readers,\\n\\nWe were delighted to learn that our work has been recommended for acceptance and are looking forward to presenting it at the conference. We have now uploaded the camera ready version of the manuscript and linked the source code repository.\\n\\nWe were absolutely thrilled by the amount of positive feedback our work has received and would like to thank everyone who participated in this forum.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"metareview\": \"This paper presents new generalized methods for representing sentences and measuring their similarities based on word vectors. More specifically, the paper presents Fuzzy Bag-of-Words (FBoW), a generalized approach to composing sentence embeddings by combining word embeddings with different degrees of membership, which generalize more commonly used average or max-pooled vector representations. In addition, the paper presents DynaMax, an unsupervised and non-parametric similarity measure that can dynamically extract and max-pool features from a sentence pair.\", \"pros\": \"The proposed methods are natural generalization of exiting average and max-pooled vectors. The proposed methods are elegant, simple, easy to implement, and demonstrate strong performance on STS tasks.\", \"cons\": \"The paper is solid, no significant con other than that the proposed methods are not groundbreaking innovations per say.\", \"verdict\": \"The simplicity is what makes the proposed methods elegant. The empirical results are strong. The paper is worthy of acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"The simplicity is what makes the proposed methods elegant. The empirical results are strong.\"}",
"{\"title\": \"Good question! Answer: no need because uSIF is NOT better than SIF.\", \"comment\": \"Dear Reader,\\n\\nThank you very much for your interest in our paper and a very good question.\\n\\nEthayarajh (2018) discovered a very clever method to estimate the weight parameter \\\"a\\\" in an unsupervised way, improving upon Arora et al. (2017), who technically had to estimate the same parameter on the training set. We cited Ethayarajh (2018) precisely for this neat and important contribution to the community.\\n\\nHowever, we must inform the Reader that uSIF model is NOT any better than SIF of Arora et al. (2017).\\nThe numbers in Ethayarajh (2018) are correct, but all the improvements are due to post-processing tricks and differences in experimental setups.\\n\\nWe conducted an ablation study using the codebase released by Ethayarajh (2018): https://github.com/kawine/usif\", \"and_glove_vectors_http\": \"//nlp.stanford.edu/data/glove.840B.300d.zip\", \"sif_weights\": \"a/(a + p_w), a=1.0e-3\", \"usif_weights\": \"a/(a/2 + p_w), a derived automatically, approx. 1.2e-3\\n\\n\\n STS12 STS13 STS14 STS15\\n\\nSIF 60.1 60.2 66.5 62.9 \\nuSIF 60.4 60.6 67.0 63.6\", \"table_1\": \"SIF vs uSIF; no PC removal, no custom norm\\n\\n\\n STS12 STS13 STS14 STS15\\n\\nSIF +1PC +norm 64.6 70.9 73.7 75.2\\nSIF +5PC +norm 64.9 71.8 74.4 76.3\\nuSIF +5PC +norm 64.9 71.7 74.4 76.1\", \"table_2\": \"SIF vs uSIF, +PC removal, +custom norm\\n\\n\\nOn both occasions, uSIF was not better than SIF.\\nMoreover, removing 5 PCs as in Ethayarajh (2018) leads to only marginal (if significant) improvement over removing just 1 PC as in Arora et al. (2017).\\nHowever, the actual sources of improvement are worth discussing\\n\\n- Text pre-processing and custom codebase\\n\\nWe rely on the established SentEval toolkit (Conneau & Kiela, 2018) for all the evaluations in our paper (including prior work). 
A lot of improvement reported by Ethayarajh (2018) is due to a custom codebase, therefore it's inappropriate to say uSIF+PCA is \\\"better\\\".\\n\\n\\n- PC removal\\n\\nNeither it is appropriate to compare DynaMax with methods that run PCA on the whole test set as in Arora et al. (2017) and Ethayarajh (2018). They are simply different in nature. DynaMax is a similarity measure between 2 sentences, it doesn't require anything beyond that. PCA-based embeddings are \\\"fitted\\\" to a concrete dataset. They can be used for things like clustering but not for on-the-fly processing of incoming queries.\\n\\n\\n- Custom normalisation scheme\\n\\nThe normalisation scheme proposed by Ethayarajh (2018) is not derived from the modelling assumptions of uSIF, therefore it's a post-processing trick (just like PCA). We see no issues with this scheme per se, but we can hardly justify its use in the present work.\\n\\n\\nAgain, we would like to thank the Reader for bringing up a good and important question.\\nThere were a lot of comments to other OpenReview submissions this year asking to compare against uSIF.\\nBased on our careful analysis, we believe including DynaMax + uSIF in our tables adds no value.\\nWe hope our reply is acceptable to the Reader and useful to the community.\\n\\nPlease do not hesitate to contact us for any further queries/clarifications.\\n\\n\\nSanjeev Arora, Yingyu Liang and Tengyu Ma. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. ICLR 2017.\\nAlexis Conneau and Douwe Kiela. Senteval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449, 2018.\\nKawin Ethayarajh. Unsupervised Random Walk Sentence Embeddings: A Strong but Simple Baseline. Rep4NLP@ACL 2018: 91-100\\n\\n\\nBest wishes,\\n\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"comment\": \"Very interesting paper!\\n\\nI was wondering why you left out the results from uSIF (Ethayarajh, 2018) in your Table 2, despite briefly citing it earlier on. avg-uSIF+PCA -- which the original paper denotes as UP -- looks like it gets much better results on the STS tasks than DynaMax-SIF (see Table 1 in (Ethayarajh, 2018)). For example, DynaMax-SIF gets 61.1 with GloVe for STS'12 while uSIF+UP gets 64.9 with GloVe for STS'12.\\n\\nIt looks like your method improves anything it's applied to, so I would suggest doing DynaMax-uSIF and then reporting those results in Table 2 as well -- I imagine they would be even better than what you have now.\", \"title\": \"Add more comparisons to Table 2?\"}",
"{\"title\": \"Updates to the paper\", \"comment\": \"Dear Reviewers and Readers,\\n\\nWe wanted to let you know that the new version of the paper has now been uploaded.\\nHere is a short summary of the changes.\\n\\n1. The main text remains almost the same. We have fixed some typos, added citations and made some statements a bit clearer.\\n\\n2. We have added a novel significance analysis of our results in Appendix D.\\nWe found that the majority of recent literature on STS either uses inappropriate or unspecified parametric tests or leaves out significance analysis altogether. We propose to construct nonparametric bootstrap confidence intervals with bias correction and acceleration. These intervals have much milder assumption on the test statistic than the parametric tests. To the best of our knowledge, such methodology has not been applied to STS benchmarks before and can be viewed as an additional contribution of our work to the community.\\n\\n3. We conduct ablation studies on our best-performing algorithm, DynaMax Jaccard, in Appendix C. In particular, we play with different universes, pooling schemes, and similarity functions. We find that DynaMax Jaccard remains the winner and conclude that the three components - the dynamic universe, the max-pooling operation and the fuzzy Jaccard index - all contribute to the strong performance of the model.\\n\\n\\n4. We compare fuzzy set similarity measures derived from Jaccard, Otsuka-Ochiai and S\\u00f8rensen\\u2013Dice coefficients and show they have almost identical performance across tasks and word vectors, quantitatively confirming that our results are in no way specific to the Jaccard index.\\n\\n\\n5. We discuss the difference between [0,1] and real-valued membership functions in Appendix A. In particular, we show that one simple way to construct [0,1]-fuzzy sets is to simply normalise the vectors. 
However, normalisation hurts both our method and the baseline (because it renders all the words equally important) and should generally be avoided in this setting.\\n\\n\\n\\nWe hope to have addressed the Reviewers' questions and concerns. We also hope the Reviewers and Readers will find the new Appendix sections to be a useful and interesting addition to the main text.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Fixed\", \"comment\": \"This typo has been fixed in the updated version of the paper.\"}",
"{\"title\": \"Updates to the paper\", \"comment\": \"Dear Reader,\\n\\nWe wanted to let you know that the new version of the paper has now been uploaded.\\n\\nAs promised, we have included a detailed discussion on [0,1] vs R in Appendix A as well as results for more word vector types. The discussion is a distilled version of what we already wrote here; nevertheless we hope the Reader finds it useful to see this discussion in the context of the entire paper.\\nAgain, thanks a lot for your suggestion.\\n\\nAs ever, please do not hesitate to contact us for any queries/clarifications.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Updates to the paper\", \"comment\": \"Dear Reviewer,\\n\\n\\nWe wanted to let you know that the new version of the paper has now been uploaded.\\nThe main text remains almost the same, however, we have added 4 new sections to the Appendix.\\n\\nWe included ablation studies on the DynaMax in Appendix C; in one of those we compare the dynamic universe, the identity matrix and also the random projection. We hope this further explains the connection between the universe of DynaMax and other universes.\\n\\nWe also hope this addresses the Reviewer's comment saying that \\\"DynaMax can work with other methods too\\\". We show that the best-performing version of DynaMax is the one described in the paper.\\n\\nAlso, we have now cited both works suggested by the Reviewer.\\n\\n\\nFinally, we added many other interesting results, including a novel significance analysis methodology for STS, comparison between different fuzzy similarity coefficients and discussion on [0, 1]-fuzzy sets and their connection with normalised vectors.\\n\\nWe hope the Reviewer finds these new Appendix sections to be an interesting and useful addition to the main text.\\n\\n\\nAs ever, please do not hesitate to contact us for any queries/clarifications.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Updates to the paper\", \"comment\": \"Dear Reviewer,\\n\\nWe wanted to let you know that the new version of the paper has now been uploaded.\\nThe main text remains almost the same, however, we have added 4 new sections to the Appendix.\\n\\nWe included ablation studies on the DynaMax in Appendix C; in one of those we compare the dynamic universe, the identity matrix and also the random projection (as suggested by the Reviewer).\\n\\nAlso, we added [1] to the citation list of papers that use the max-pooling operation.\\n\\n\\nFinally, we added many other interesting results, including a novel significance analysis methodology for STS, a comparison between different fuzzy similarity coefficients and discussion on [0, 1]-fuzzy sets and their connection with normalised vectors.\\n\\nWe hope the Reviewer finds these new Appendix sections to be an interesting and useful addition to the main text.\\n\\n[1] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. \\nSupervised Learning of Universal Sentence Representations from Natural Language Inference Data. EMNLP 2017, pp. 670\\u2013680\\n\\n\\nAs ever, please do not hesitate to contact us for any queries/clarifications.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Updates to the paper\", \"comment\": \"Dear Reviewer,\\n\\nWe wanted to let you know that the new version of the paper has now been uploaded.\\nThe main text remains almost the same, however, we have added 4 new sections to the Appendix.\\n\\nAs promised, we have now conducted the significance analysis of the results and included our findings in the paper (Intro + Appendix D).\\nIn summary, we found that recent literature on STS tends to apply unspecified or inappropriate parametric tests, or leave out significance analysis altogether in the majority of cases.\\nWe propose to construct nonparametric bootstrap confidence intervals with bias correction and acceleration. These intervals have much milder assumption on the test statistic than the parametric tests. \\nTo the best of our knowledge, such methodology has not been applied to the STS benchmarks before and can be viewed as an additional yet important contribution of our work. We hope the Reviewer and the community find our analysis useful and interesting. It was also good fun for us! Thanks for bringing this up.\\n\\nWe also added many other interesting results including ablation studies on DynaMax, a comparison between different fuzzy similarity coefficients and discussion on [0, 1]-fuzzy sets and their connection with normalised vectors.\\nWe hope the Reviewer finds these new Appendix sections to be an interesting and useful addition to the main text.\\n\\n\\nAs ever, please do not hesitate to contact us for any queries/clarifications.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer,\\n\\nWe would like to thank you for such a positive assessment of our work. \\nWe were especially thrilled the Reviewer found our paper to be among the best they reviewed this year.\\n\\nRegarding the significance analysis, unfortunately, the SentEval toolkit [1] we're using does not support this functionality.\\nMoreover, most works known to us, including some of the most prominent works published at ICLR, do not conduct significance analysis for STS benchmarks ([2], [3], [4], [5]). Admittedly, some other works apply Fisher's z test, which we believe is not appropriate in this setting. More appropriately, some apply Williams' t test [6] or Steiger's z test [7] for correlated correlations. To the best of our knowledge, these tests require that the data come from a normal distribution, which is not the case for STS. Although we have done a similar analysis using Steiger's z, we have to refrain from reporting (potentially) statistically invalid results and are looking to obtain further evidence that these tests can in fact be applied here. We are also looking into alternative (non-parametric) methods and will let the Reviewer know when our analysis is complete.\\n\\n[1] Alexis Conneau and Douwe Kiela (2018). SentEval: An Evaluation Toolkit for Universal Sentence Representations. http://arxiv.org/abs/1803.05449\\n[2] John Wieting, Mohit Bansal, Kevin Gimpel and Karen Livescu. ICLR 2016.\\n[3] Sanjeev Arora, Yingyu Liang and Tengyu Ma. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. ICLR 2017.\\n[4] Jiaqi Mu and Pramod Viswanath. All-but-the-Top: Simple and Effective Postprocessing for Word Representations. ICLR 2018.\\n[5] Sandeep Subramanian, Adam Trischler, Yoshua Bengio and Christopher J Pal. Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. ICLR 2018.\\n[6] Williams, E. J. (1959). The comparison of regression variables. Journal of the Royal Statistical Society, Series B, 21, 396-399.\\n[7] Steiger, J. H. (1980). Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87(2), 245-251.\\n\\n\\nAgain, thank you very much and please do not hesitate to contact us with any further queries/clarifications.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer,\\n\\nWe would like to thank you for your assessment of our paper and positive comments regarding presentation and coverage of related work.\\nThe Reviewer additionally had some concerns which we would like to address.\\n\\n\\n1.\\nAs we explain in Section 2.2.3, the universe of DynaMax contains only the word embeddings from the 2 sentences being compared. If x1, x2,...xk and y1, y2,...,yl are word embeddings for sentences 1 and 2 respectively, then U = [x1; x2;...xk; y1; y2;...;yl]. This construction is also shown in Algorithm 1 (Lines 5-7).\\n\\nWe are not quite sure what the Reviewer meant by \\\"In principle DynaMax can work with other methods too\\\".\\n\\nThe DynaMax-Jaccard (DMJ) similarity has 3 components: the \\\"dynamic\\\" universe U (Section 2.2.3), the max-pooling operation (Eq. 2), and finally the fuzzy Jaccard index (Section 2.2.1). All 3 components are reflected and implemented in Algorithm 1.\\nBelow we show the change in performance when max-pooling is replaced by average and when fuzzy Jaccard is replaced by cosine similarity.\\n\\nGloVe STS12 STS13 STS14 STS15 STS16\\t\\n\\nDynaMax Jaccard 58.2 53.9 65.1 70.9 71.1\\nDynaMax Cosine 58.2 53.6 63.2 67.2 67.4\\n\\nDynaAvg. Jaccard 43.5 37.0 38.8 45.3 39.4\\nDynaAvg. Cosine\\t 40.0 39.1 38.3 39.7 31.2\\n\\nWe see that all 3 components in DynaMax-Jaccard are very important. When we replaced Jaccard with cosine, the performance dropped. When we replaced max with average it fell even further. Unfortunately, there is only a limited number of combinations we could report in the paper.\\n\\n\\n2. > \\\"... max-pooled word vectors are a special case of fuzzy bag of words. This is not correct.\\\"\\n\\nPlease allow us to elaborate why max-pooled vectors are in fact a special case of fuzzy BoW. We deliberately left the matrix U unspecified in the definition of FBoW (Section 2.2, Eq. 1 & 2).\\nWhen U=W, then U represents \\\"concrete\\\" words. We said this was the most intuitive (but not the only) choice.\\n\\nIn some cases, we no longer have concrete words but instead the words are \\\"abstract\\\". We acknowledge this in Section 2.2.2. In the case of max-pooled vectors, U is the identity matrix I. However, we can always assign text labels to the rows of I, for example 'dim1', 'dim2', ..., 'dim300'.\\nThese \\\"words\\\" represent abstract concepts learned by a neural network in its representations. In fact, there has been some work to figure out what these dimensions could contain (e.g. [1]).\\nMore generally, for any vector we can always generate a text label for that vector, and take that as a word.\\nThe fuzzy BoW is then fuzzy with respect to these abstract words (concepts).\\n\\nWe will clarify this in the updated manuscript.\\n\\n[1] Yulia Tsvetkov, Manaal Faruqui, and Chris Dyer. Correlation-based Intrinsic Evaluation of Word Vector Representations.\\nProceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pp. 111\\u2013115\\n\\n\\n3. \\\"... max-pooled vectors should be compared with the fuzzy Jaccard index instead of cosine similarity. There is no proof or substantial justification to support this.\\\"\\n\\nWe appreciate there is no proof, but we respectfully disagree that there is no substantial justification. As discussed above, max-pooled vectors are a special case of fuzzy BoW and so the fuzzy Jaccard index is fully justified for this representation. Empirically, fuzzy Jaccard outperforms cosine similarity on most tasks (Figure 1).\\n\\n\\n4. Thanks for bringing these works to our attention. We're happy to cite them where appropriate.\\n\\n\\nOverall, we showed that word embeddings by themselves (without any weights, tricks or supervision) are still a formidable baseline for semantic textual similarity. We reported up to a 20-point increase on standard benchmark datasets. We also tried to rekindle the interest in fuzzy set theory, which is quite underrepresented in mainstream ML research.\\n\\nIn addition to our replies, we hope the Reviewer can take these contributions into account and perhaps reconsider their score.\\n\\nAgain, thank you very much for your assessment and please do not hesitate to contact us with any further queries.\\n\\n\\nBest wishes,\\n\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer,\\n\\nWe would like to thank you for such a kind assessment of our work and so many positive comments.\\nThe Reviewer has asked some fascinating questions and we are jumping straight to them.\\n\\n\\n* On InferSent\\n\\nAbsolutely, in principle the linear operator U can be replaced by any non-linear function, such as a (deep) neural network. But because InferSent is a Bi-LSTM, the membership vector for a word w_t would depend on the membership vectors of words w_(t-1) and w_(t+1). By contrast, in our fuzzy *bag*-of-words model all the membership vectors are computed separately and independently of each other.\\nWe genuinely feel these \\\"fuzzy sequences\\\" have great research potential but have to leave them to future work.\\n\\nRandomly initialised InferSent uses GloVe vectors for its embeddings layer, followed by a randomly initialised Bi-LSTM. However, we see from [1] (Table 4) that its performance on STS14 is only 0.39, when averaged GloVe vectors already attain 0.54, while avg. fastText and word2vec both score above 0.63.\\nIn other words, random InferSent is very unlikely to be a good baseline for unsupervised semantic textual similarity. Of course, the trained InferSent is a very strong model and we already compare against it in Table 1.\\n\\n[1] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. \\nSupervised Learning of Universal Sentence Representations from Natural Language Inference Data. EMNLP 2017, pp. 670\\u2013680\\n\\n\\n* Different choices for U (the universe matrix)\\n\\nWe were very excited the Reviewer suggested the random matrix. We didn't mention this in the paper but in fact we tried all of the following universes:\\n\\n-------------------------------------------------------------------------------------\\nGloVe| STS12 STS13 STS14 STS15 STS16\\t\\n-------------------------------------------------------------------------------------\\nAvg. 52.1 49.6 54.6 56.1 51.4\\n\\nW (top 100K) 58.6 48.2 62.8 69.3 69.4\\nDynaMax 58.2 53.9 65.1 70.9 71.1\\nRandom 300x300\\t 57.0\\t 49.5 64.9 70.4 70.8\\nIdentity 300 57.7 51.4 65.9 70.7 70.4\\nSVD basis 58.1 51.8 66.1 70.7 71.0\\nSVD (top 200 vec) 57.0 49.5 64.6 69.4 69.9\\n\\n\\nFor GloVe vectors the methods are basically the same, but DynaMax gets a good improvement over the max-pooled word vectors (identity matrix) with most other word vectors (Figures 1 & 2).\\n\\nWe chose to focus on DynaMax and max-pooled vectors exclusively because only these 2 universes are non-parametric and deterministic.\\nWe ourselves feel that DynaMax is probably the strongest and safest choice overall for any kind of word vectors.\\nHowever, it is sensible to start with just max-pooled word vectors because they avoid matrix multiplication altogether.\\nWe will be linking our code repository after the anonymity period and hope the community discovers universes that we haven't so far.\\n\\nAlso, we agree that the sequence of equalities in Eq. 3 is awkward; this is purely to emphasise the origins of max-pooled word vectors. We will consider how to alter this equation while keeping this message.\\n\\nAgain, we would like to thank the Reviewer for such a positive assessment and great questions.\\nPlease do not hesitate to contact us for any further queries/clarifications.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Please stay tuned\", \"comment\": \"Dear Reviewers and Readers,\\n\\nWe were absolutely thrilled that our work received such a positive assessment. \\nWe would like to apologise for the delay in our replies; we are in fact working very hard to run additional analyses to quantitatively support our replies to each Reviewer. These include:\\n\\n- significance tests (Reviewer 3)\\n\\n- comparisons of different choices for the universe matrix U and a short story on how we arrived at each of them (Reviewer 2)\\n\\n- certain experiments to support our response to Reviewer 1\\n\\nWe expect to post very detailed replies by Monday at the latest. We hope you all have a nice weekend and please stay tuned.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Very polished paper, simple but effective.\", \"review\": \"This is one of the best papers I reviewed so far this year (ICLR, NIPS, ICML, AISTATS), in terms of both the writing and technical novelty.\\n\\nWriting: the author provided sufficient context and did a comprehensive literature survey, which made the paper easily accessible to a larger audience. The flow of this paper was very smooth and I personally enjoyed reading it.\\n\\nNovelty: I wouldn't say this paper proposed a groundbreaking innovation; however, compared to many other submissions that are more obscure rather than inspiring to the readers, this paper presented a very natural extension to something practitioners were already very familiar with: taking an average of word vectors for a sentence and measuring by cosine similarity. Both max pooling and Jaccard distance are not something new, but the author did a great job presenting the idea and proved its effectiveness through extensive experiments. (Disclaimer: I didn't follow the sentence embedding literature recently, and I would count on other reviewers to fact-check the claimed novelty of this paper.)\\n\\nSimplicity: besides the novelty mentioned above, what I enjoyed more about this paper is its simplicity. Not just because it's easy to understand, but also because it's easy to reproduce by practitioners.\\n\\nQuibbles: the authors didn't provide error bars / confidence intervals for the results presented in the experiments section. I'd like to know whether the differences between the baselines and the proposed methods were significant or not.\\n\\nMiscellaneous: I have to say the authors provided a very eye-catching name for this paper as well, and the content of the paper didn't disappoint me either. Well done :)\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting and simple idea\", \"review\": \"This submission presents a simple model for sentence representation based on max-pooling of word vectors. The model is motivated by fuzzy-set theory, providing both a well-founded pooling scheme and a similarity score between documents. The proposed approach is evaluated on sentence similarity tasks (STS) and achieves very strong performance, comparable to state-of-the-art, computationally demanding methods.\\n\\nPros:\\n- The problem tackled by this paper is interesting and well motivated. Fast, efficient and non-parametric sentence similarity has tons of important applications (search, indexing, corpus mining).\\n- The proposed solution is elegant and very simple to implement.\\n- When compared to standard sentence representation models, the proposed approach has very good performance, while being very efficient. It only requires a matrix-vector product and a dimension-wise max.\\n- The paper is very well written and flows nicely.\\n- Empirical results show significant differences between different word vectors. The simplicity of this approach makes it a good test bed for research on word vectors.\\n\\nCons:\\n- Nothing much, really.\\n- Eq. (3) is awkward, as it is a sequence of equalities, which has to be avoided. Moreover, if U is the identity, I don't think that the reader really needs this Eq.\\n\\nI have several questions and remarks that, if answered, would make the quality of the presentation better:\\n- In InferSent, the authors reported the performance of a randomly-initialized and max-pooled bi-LSTM with fastText vectors as the input lookup. This can be seen as an extreme case of the presented formalism, where the linear operator U is replaced by a complicated non-linear function that is implemented by the random LSTM. Drawing that link, and maybe including this baseline in the results, would be good.\\n- Related to this previous question, several choices for U are discussed in the paper. However, only two are compared in the experiments. It would be interesting to have an experimental comparison of: taking U = W; taking U = I; taking U as the principal directions of W; taking U as a random matrix, and comparing performance for different output dimensions.\\n\\nOverall, this paper is a very strong baseline paper. The presented model is elegant and efficient. I rate it as an 8 and await other reviews and the author's response.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper presents (a) a fuzzy bag of words representation and (b) DynaMax, a similarity measure that max-pools salient features from sentence pairs.\", \"review\": \"Strengths:\\n- Good coverage of related work\\n- Clear presentation of the methods\\n- Evaluation using established SemEval datasets\\n\\nWeaknesses:\\n1. It is not entirely clear what the connection is between fuzzy bag of words and DynaMax. In principle DynaMax can work with other methods too. This point should be elaborated a bit more.\\n2. It is claimed that this paper shows that max-pooled word vectors are a special case of fuzzy bag of words. This is not correct. The paper shows how to \\\"convert\\\" one to the other. \\n3. It is also claimed that point 2 above implies that max-pooled vectors should be compared with the fuzzy Jaccard index instead of cosine similarity. There is no proof or substantial justification to support this. \\n4. Some relevant work that is missing:\\n- De Boom, C., Van Canneyt, S., Demeester, T., Dhoedt, B.: Representation learning for very short texts using weighted word embedding aggregation. Pattern Recognition Letters 80, 150\\u2013156 (2016)\\n- Kenter, T., De Rijke, M.: Short text similarity with word embeddings. In: International Conference on Information and Knowledge Management. pp. 1411\\u20131420. ACM (2015)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Small typo in Algorithm 1\", \"comment\": \"Dear Readers,\\n\\nWe have spotted a small typo in the manuscript. \\nIn Algorithm 1, Line 3 the vector with all zeros z should have dimensions 1 x (k+l) and not 1 x d.\\nThe dimension of the zero vector has to match the dimension of other vectors in the max-pooling operation, all of which are 1 x (k+l) after the projection onto U.\\nAlternatively, we can keep zero vector to be 1 x d but then we need to project it as well in Lines 8 and 9, i.e. compute zU^T.\\nHowever, the latter would be a useless computation, so we prefer the first option.\\n\\nWe apologise for any inconveniences this typo might have caused and will fix it in the next version of the manuscript.\\n\\n\\nBest wishes,\\n\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"title\": \"Good question, thanks!\", \"comment\": \"Thank you for your kind feedback and interest in our paper.\\nWe also believe fuzzy set theory is a very useful framework and we were excited it worked well in this setting.\\n\\nOnto your question, we briefly discuss the choice of R as opposed to [0, 1] in Sections 2.1 and 2.2.4 but will extend these sections in a revised version of the manuscript.\\n\\n\\nThe universe matrix U is a (K x 300) matrix, i.e. the universe contains K entities and each row of U is an embedding of a single entity. When U=W, the entities are words and the rows of U are simply the word embeddings.\\n\\nNow we want to convert a singleton {w} into a fuzzy set. We compute the membership values:\\n\\nmu = [sim(w, u1), sim(w, u2), ..., sim(w, uK)]\\n\\nWe see that the membership values actually come from the similarity function sim(w, u) and not from the matrix U directly, so what's really important is whether the values of sim(w, u) are in [0, 1] or R.\\n\\nIn our work, sim(w, u) is the dot product w * u, which does indeed take any real value. Below we discuss why the dot product is a reasonable choice.\\n\\n1. \\nIntuitively, it's all the same up to scale. We can easily map any real number into (0, 1) using, e.g. the \\nlogistic function s(x) = 1/(1+e^-x) and vice versa. So it's really not that important whether the values are in (0, 1)\\nor in R. In fact, there are many extensions of fuzzy set theory that work with more general ranges than [0,1]. For a quick reference: https://en.wikipedia.org/wiki/Fuzzy_set#Extensions\\n\\n2.\\nThe membership function for multisets (bags) takes values in N (i.e. the non-negative integers), so these values are already outside [0, 1]. We see that the standard [0, 1]-fuzzy sets are incompatible with multisets; that's why we constructed the most general sensible thing. Sets, bags, and [0,1]-fuzzy sets are all just special cases of fuzzy BoW.\\nInterestingly, since we always max-pool with a zero vector, fuzzy BoW will not contain any negative membership values.\\nThis was not our intention, just a by-product of the model. As discussed in 1., negative values are fine. They just mean\\nthe element is \\\"really\\\" not in the set.\\n\\n\\n3.\\nStill, why did we choose dot product and not, say, sim(w, u) = max(cos(w, u), 0)?\\nWe know that cosine similarity is in [-1, 1], so sim(w, u) will be in [0, 1].\\nOne reason is that word vectors are usually trained to maximise some kind of dot product in their objectives.\\nA more practical reason is simply because dot product works much better.\\n\\nTurns out, if we normalise word embeddings we will get the same fuzzy BoW as if we used max(cos(w, u), 0)\\n(simply because dot product is the same as cosine similarity for normalised vectors).\\n\\nBelow we give the results for GloVe vectors\\n\\n\\t\\t\\t STS12\\t STS13\\t STS14\\t STS15\\t STS16\\t\\n\\nAvg.\\t\\t 52.1\\t 49.6\\t 54.6\\t 56.1\\t 51.4\\nAvg. norm\\t 47.1\\t 44.9\\t 49.7\\t 52.0\\t 44.0\\n\\nDynaMax\\t 58.2\\t 53.9\\t 65.1\\t 70.9\\t 71.1\\nDynaMax norm\\t 53.7\\t 47.8\\t 59.5\\t 66.3\\t 62.9\\n\\nWe see here that normalisation hurts averaged word vectors as well as max-pooled word vectors.\\nHowever, DynaMax norm still significantly outperforms Avg. norm. It even outperforms Avg. without norm on all but one task.\\n\\nHopefully, the above taken together explains why the membership values can indeed be any real numbers.\\nAgain, thank you very much for a good question. \\nWe will add this discussion as well as results for more word vectors into a separate section in the Appendix.\\n\\nPlease do not hesitate to ask us any additional questions.\\n\\nBest wishes,\\n\\nICLR 2019 Conference Paper1058 Authors\"}",
"{\"comment\": \"This work is very good, and I am very interested. Fuzzy set theory is a very effective tool, which can explain and describe the uncertainty of data. The matrix U in this paper is very important, indicating the degree of membership of the elements in the fuzzy set, so the element values in U should be in [0, 1]. However, when U=W (W represents the word representations), the elements are real numbers, which can be positive or negative. If U is unconstrained, the overall interpretability will be reduced.\\nIn addition, in the experimental part, if the authors could give an example of U, it would be even better.\", \"title\": \"About the matrix U\"}"
]
} |
|
SJxfxnA9K7 | Structured Prediction using cGANs with Fusion Discriminator | [
"Faisal Mahmood",
"Wenhao Xu",
"Nicholas J. Durr",
"Jeremiah W. Johnson",
"Alan Yuille"
] | We propose a novel method for incorporating conditional information into a generative adversarial network (GAN) for structured prediction tasks. This method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture higher-order statistics from the data. This method also increases the strength of the signals passed through the network where the real or generated data and the conditional data agree. The proposed method is conceptually simpler than the joint convolutional neural network - conditional Markov random field (CNN-CRF) models and enforces higher-order consistency without being limited to a very specific class of high-order potentials. Experimental results demonstrate that this method leads to improvement on a variety of different structured prediction tasks including image synthesis, semantic segmentation, and depth estimation. | [
"Generative Adversarial Networks",
"GANs",
"conditional GANs",
"Discriminator",
"Fusion"
] | https://openreview.net/pdf?id=SJxfxnA9K7 | https://openreview.net/forum?id=SJxfxnA9K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByeFzNxLg4",
"ryltS5xtyN",
"HkgNIX0dyV",
"SkgtYOLq0Q",
"SkgLoB3p3Q",
"rJehRQM9nQ",
"HJeZKeJqh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545106448612,
1544256064659,
1544246091586,
1543297153477,
1541420445711,
1541182420317,
1541169273411
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1057/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1057/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1057/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1057/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1057/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1057/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1057/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All three reviewers argue for rejection on the basis that this paper does not make a sufficiently novel and substantial contribution to warrant publication. The AC follows their recommendation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not sufficient novelty\"}",
"{\"title\": \"I have thoroughly considered the comments and suggestions raised by R1 & R3 and I agree with the opinion that the novelty of this paper seems to be limited. Thus, I have changed my initial rating to 5.\", \"comment\": \"I have thoroughly considered the comments by R1 & R3 and I agree with the opinion that the novelty and contributions of this paper are limited. Thus, I have changed my initial rating to \\\"5: Marginally below acceptance threshold\\\".\"}",
"{\"title\": \"Insufficient contributions\", \"comment\": \"The main point of my review is that the paper's contributions are too meager for ICLR. It \\\"does not present substantively new ideas\\\", with an emphasis on \\\"substantively\\\". R3 made a similar point, saying the approach is not \\\"sufficiently novel\\\", and also took issue with the theoretical justification. The authors have not refuted these criticisms.\\n\\nR2 says the approach is conceptually simpler than the baselines -- I agree with this.\\nR2 says the \\\"paper shows a promising approach\\\" -- I somewhat agree with this too, but I would like the authors to deliver more on the promise. At the moment, the paper simply recommends to incrementally fuse the generator's features into the conditional discriminator, instead of concatenating output & condition at the front. This does not make a paper!\\n\\nEveryone seems to say \\\"the paper makes sense\\\" (me) and \\\"the writing is mostly clear\\\" (R3), and \\\"the paper is written clearly\\\" (R2), which is a strong compliment to the authors, but I think the method is not publication-quality.\"}",
"{\"title\": \"Thanks\", \"comment\": \"We thank the reviewer for their comments and appreciate the support for our work.\"}",
"{\"title\": \"This paper presents a new method for incorporating conditional information into a GAN for structured prediction tasks (image conditioned GAN problems). Thorough experimental results on Cityscapes and NYU v2 verify the efficacy of the proposed method.\", \"review\": \"This paper presents a new method for incorporating conditional information into a GAN for structured prediction tasks (image conditioned GAN problems). The proposed method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture higher-order statistics from the data. The proposed method also increases the strength of the signals passed through the network where the real or generated data and the conditional data agree. The proposed method is conceptually simpler than joint CNN-CRF models and enforces higher-order consistency without being limited to a very specific class of high-order potentials. Thorough experimental results on Cityscapes and NYU v2 verify the efficacy of the proposed method. I believe this paper shows a promising approach to solve the structured prediction problems that I have not seen elsewhere so far. The paper is written clearly, the math is well laid out and the English is fine.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Not enough here\", \"review\": \"This paper proposes a new method to input data to a conditional discriminator network. The standard setup would simply concatenate the \\\"condition\\\" image with the \\\"output\\\" image (i.e., the real image, or the generator's output corresponding to the condition). The setup here is to feed these two images to two separate encoders, and gradually fuse the features produced by the two. The experiments show that this delivers superior performance to concatenation, on three image-to-image translation tasks.\\n\\nI think this is a good idea, but it's a very small contribution. The entire technical approach can be summarized in 2-3 sentences, and it is not particularly novel. Two-stream models and skip connections have been discussed and explored in hundreds of papers. Applying these insights to a discriminator is not a significant leap. \\n\\nThe theoretical \\\"motivation\\\" equations in Sec. 3.1 are obvious and could be skipped entirely. \\n\\nIn summary, the paper makes sense, but it does not present substantively new ideas. I do not recommend the paper for acceptance.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper presents an a particular architecture for conditional discriminators in the cGAN framework. Different to the conventional approach of concatenating the conditioning information to the input, the authors propose to process them separately with two distinct convolutional networks fusing (by element-wise addition) intermediate features of the conditioning branch into the input branch at each layer.\", \"pros\": [\"The writing is mostly clear and easy to follow.\", \"I feel that exploring better conditioning strategies is an important direction. Quite often the discriminator discards additional inputs if no special measures against this behaviour are taken.\", \"The proposed method seem to outperform the baselines\"], \"cons\": [\"I\\u2019m generally not excited about the architecture as it seems a slight variation of the existing methods. See, for example, the PixelCNN paper [van den Oord et al., 2016] and FiLM [Perez et al., 2017].\", \"Theoretical justification of the approach is quite weak. The paper shows that the proposed fusion method may result in higher activation values (in case of the ReLU non-linearity, other cases are not considered at all) but this is not linked properly to the performance of the entire system. Paragraph 3 of section 3.1 (sentence 3 and onward) seems to contain a theoretical claim which is never proved.\", \"It seems that the authors never compare their results with the state-of-the-art. The narrative would be much more convincing if the proposed way of conditioning yielded superior performance compared to the existing systems. From the paper it\\u2019s not clear how bad/good the baselines are.\", \"Notes/questions:\", \"Section 3.1, paragraph 1: Needs to be rephrased. It\\u2019s not totally clear what the authors mean here.\", \"Section 3.1, paragraph 4: \\u201cWe observed that the fusion \\u2026\\u201d - Could you elaborate on this? 
I think you should give a more detailed explanation with examples because it\\u2019s hard to guess what those \\u201cimportant features\\u201d are by looking at the figure.\", \"Figure 4: I would really want to see the result of the projection discriminator as it seems to be quite strong according to the tables. The second row of last column (which is the result of the proposed system) suspiciously resembles the ground-truth - is it a mistake?\", \"Figure 5: It seems that all the experiments have not been run until convergence. I\\u2019m wondering if the difference in performance is going to be as significant when the models are trained fully.\", \"In my opinion, the proposed method is neither sufficiently novel nor justified properly. On top of that, the experimental section is not particularly convincing. Therefore, I would not recommend the paper in its present form for acceptance.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByezgnA5tm | Constraining Action Sequences with Formal Languages for Deep Reinforcement Learning | [
"Dong Xu",
"Eleanor Quint",
"Zeynep Hakguder",
"Haluk Dogan",
"Stephen Scott",
"Matthew Dwyer"
] | We study the problem of deep reinforcement learning where the agent's action sequences are constrained, e.g., prohibition of dithering or overactuating action sequences that might damage a robot, drone, or other physical device. Our model focuses on constraints that can be described by automata such as DFAs or PDAs. We then propose multiple approaches to augment the state descriptions of the Markov decision process (MDP) with summaries of recent action histories. We empirically evaluate these methods applying DQN to three Atari games, training with reward shaping. We found that our approaches are effective in significantly reducing, and even eliminating, constraint violations while maintaining high reward. We also observed that the total reward achieved by an agent can be highly sensitive to how much the constraints encourage or discourage exploration of potentially effective actions during training, and, in addition to helping ensure safe policies, the use of constraints can enhance exploration during training. | [
"reinforcement learning",
"constraints",
"finite state machines"
] | https://openreview.net/pdf?id=ByezgnA5tm | https://openreview.net/forum?id=ByezgnA5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJe7OJyZe4",
"rJe6YEgqR7",
"BylDXuqCnQ",
"rJeD7ZN937",
"SkgaHjmch7"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544773483259,
1543271556804,
1541478430952,
1541189918711,
1541188420842
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1056/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1056/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1056/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1056/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1056/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper studies the problem of reinforcement learning under certain constraints on action sequences. The reviewers raised important concerns regarding (1) the general motivation, (2) the particular formulation of constraints in terms of action sequences and (3) the relevance and significance of experimental results. The authors did not submit a rebuttal. Given the concerns raised by the reviewers, I encourage the authors to improve the paper to possibly resubmit to another venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper needs to be improved\"}",
"{\"title\": \"We appreciate the detailed reviews\", \"comment\": \"We thank the reviewers for their detailed comments, and we are using this feedback to improve the paper.\"}",
"{\"title\": \"interesting approach with inconclusive results\", \"review\": \"This paper presents a DFA-based approach to constrain certain behavior of RL agents, where \\\"behavior\\\" is defined by a sequence of actions. This approach assumes that the developer has knowledge of what are good/bad behaviors for a specific task and that the behavior can be checked by hand-coded DFAs or PDAs. During training, whenever such behavior is detected, the agent is given a negative reward, and the RL state is augmented with the DFA state. The authors experimented with different state augmentation methods (e.g. one-hot encoding, learned embedding) on 3 Atari tasks.\\n\\nThe paper is clearly written. I also like the general direction of biasing the agent's exploration away from undesirable regions (or conversely, towards desired regions) with prior knowledge. However, I find the results hard to read.\\n\\n1. Goal. The goal of this work is unclear. Is it to avoid disastrous states during exploration / training, or to inject prior knowledge into the agent to speed up learning, or to balance trade-offs between constraint violation and reward optimization? It seems the authors are trying to do a bit of everything, but then the evaluation is insufficient. For example, when there are trade-offs between violation and rewards, we expect to see trade-off curves instead of single points for comparison. Without the trade-off, I suppose adding the constraint should speed up learning, in which case learning curves should be shown.\\n\\n2. Interpreting the results. 1) What is the reward function used? I suppose the penalty should have a large effect on the results, which can be tuned to generate a trade-off curve. 2) Why not try to add the enforcer during training? A slightly more complex baseline would be to enforce with probability (1-\\\\epsilon) to control the trade-off.
3) Except for Fig 3 right and Fig 4 left, the constraints don't seem to affect the results much (judging from the results of vanilla DQN and DQN+enforcer) - are these the best settings to test the approach?\\n\\nOverall, an interesting and novel idea, but results are a bit lacking.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Approach for biasing RL agent away from particular action sequences\", \"review\": \"This paper presents an approach for biasing an agent to avoid particular action sequences. These action sequence constraints are defined with a deterministic finite state automaton (DFA). The agent is given an additional shaping reward that penalizes it for violating these constraints. To make this an easier learning problem for the agent, its state is augmented with additional information: either an action history, the state of the DFA, or an embedding of the DFA state. The authors show that these approaches do reduce these action constraint violations over not doing anything about them.\\n\\nIt's unclear to me what the use case is for constraints solely on the action space of the agent, and why it would be useful to treat them this way. The authors motivate and demonstrate these constraints on 3 Atari games, but it is clear that the constraints they come up with negatively affect performance on most of the games, so they are not improving performance or safety of the agent. Are there useful constraints that only need to view the sequence of actions of the agent and not any of the state? If there are such constraints, why not simply restrict the agent to only take the valid actions? What is the benefit of only biasing it to avoid violating those constraints with a shaping reward? This restriction was applied during testing, but not during training. \\n\\nIn all but the first task (no 1-d dithering in breakout), none of the proposed approaches were able to completely eliminate constraint violations. Why was this? If these are really constraints on the action sequence, isn't this showing that the algorithm does not work for the problem you are trying to solve? \\n\\nThe shaping reward used for the four Atari games is -1000. In most work on DQN in Atari, the game rewards are clipped to be between -1 and 1 to improve stability of the learning algorithm.
Were the Atari rewards clipped or unclipped in this case? Did having the shaping reward be of such a large magnitude have any adverse effects on learning performance?\\n\\nAdding a shaping reward for some desired behavior of an agent is straightforward. The more novel part of this work is in augmenting the state of the agent with the state of a DFA that is tracking the action sequence for constraint violations. Three approaches are compared and it does appear that DFA one-hot is better than the other approaches or no augmentation.\", \"pros\": [\"Augmenting agent state with state of DFA tracking action sequence constraints is novel and useful for this problem\"], \"cons\": [\"Unclear if constraints on action sequences alone useful\", \"No clear benefit of addressing this problem through shaping rewards.\", \"No comparison to simply training with only non-violating action sequences.\", \"Algorithm still results in action constraint violations in 5/6 tasks.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Review\", \"review\": \"This work aims to use formal languages to add a reward shaping signal in the form of a penalty on the system when constraints are violated. There is also an interesting notion of using an embedding based on the action history to aid the agent in avoiding violations. However, I do not believe this paper did a good enough job in situating this work in the context of prior work \\u2014 in particular (Camacho 2017). There is a significant related work section that does an ok job of describing many other works, but to my knowledge (Camacho 2017) is the most similar to this one (minus the embedding), yet is not mentioned here. It is difficult to find all related work of course, so I would encourage revision with detailed description of the novelty of this work in comparison with that one. I would also encourage a more thoughtful examination of the theoretical ramifications of the reward shaping signal with respect to the optimal policy as (Camacho 2017) do and as is modeled in the (Ng 1999) paper. As of this revision, however, I'm not sure I would recommend it for publication. Additionally, I suggest that the authors describe the reward shaping mechanism a bit more formally; it was unclear whether it fits into Ng's potential function methodology at first pass.\", \"comments\": [\"It would be nice to explain to the reader in intuitive terms what \\u201cno-1D-dithering\\u201d means near this text. I understand that later on this is explained, but for clarity it would be good to have a short explanation during the first mentioning of this term as well.\", \"It would be good to clarify in Figure 1 what .
* (lr)^2 is since in the main text near the figure it is just (lr)^2 and the .* is only explained several pages ahead\", \"An interesting connection that might be made is to Ng et al.\\u2019s reward shaping mechanism: if the shaping function is based on a state-dependent potential, then the optimal policy under the new MDP is still optimal for the old MDP. It would be interesting to see how well this holds under this schema. In fact, this seems like analysis that several other works have done for a very similar problem (see below).\", \"I have concerns about the novelty of this method. It seems rather similar to\", \"Camacho, Alberto, Oscar Chen, Scott Sanner, and Sheila A. McIlraith. \\\"Decision-making with non-markovian rewards: From LTL to automata-based reward shaping.\\\" In\\u00a0Proceedings of the Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM), pp. 279-283. 2017.\", \"Camacho, Alberto, Oscar Chen, Scott Sanner, and Sheila A. McIlraith. \\\"Non-Markovian Rewards Expressed in LTL: Guiding Search Via Reward Shaping.\\\" In Proceedings of the Tenth International Symposium on Combinatorial Search (SoCS), pp. 159-160. 2017.\", \"However, that work proposes a similar framework in a much more formal way. In fact, in that work also a DFA is used as a reward shaping signal -- from what I can tell for the same purpose through a similar mechanism. It is possible, however, that I missed something which contrasts the two works.\"], \"another_work_that_can_be_referenced\": \"De Giacomo, Giuseppe, Luca Iocchi, Marco Favorito, and Fabio Patrizi. \\\"Reinforcement Learning for LTLf/LDLf Goals.\\\"\\u00a0arXiv preprint arXiv:1807.06333\\u00a0(2018).\\n\\nI think it is particularly important to situate this work within the context of those others. \\n\\n+ Generally, the structure of the paper was a bit all over the place; crucial details were spread throughout and it took me a couple of passes to put things together.
For example, it wasn't quite clear what the reward shaping mechanism was until I saw the -1000 and then had to go back to figure out that basically -1000 is added to the reward if the constraint is violated. I would suggest putting relevant details all in one place. For example, \\\"Our reward shaping function F(x) was { -1000, constraint violation, 0 otherwise}\\\".\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1Gfx3Rqtm | End-to-End Hierarchical Text Classification with Label Assignment Policy | [
"Yuning Mao",
"Jingjing Tian",
"Jiawei Han",
"Xiang Ren"
] | We present an end-to-end reinforcement learning approach to hierarchical text classification where documents are labeled by placing them at the right positions in a given hierarchy.
While existing “global” methods construct hierarchical losses for model training, they either make “local” decisions at each hierarchy node or ignore the hierarchy structure during inference. To close the gap between training/inference and optimize holistic metrics in an end-to-end manner, we propose to learn a label assignment policy to determine where to place the documents and when to stop. The proposed method, HiLAP, optimizes holistic metrics over the hierarchy, makes inter-dependent decisions during inference, and can be combined with different text encoding models for end-to-end training.
Experiments on three public datasets show that HiLAP yields an average improvement of 33.4% in Macro-F1 and 5.0% in Samples-F1, outperforming state-of-the-art methods by a large margin. | [
"Hierarchical Classification",
"Text Classification"
] | https://openreview.net/pdf?id=H1Gfx3Rqtm | https://openreview.net/forum?id=H1Gfx3Rqtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1g2wQ0lxV",
"H1ekqPbzkN",
"HJxgQvZzJN",
"r1gU3ZzaC7",
"ryxHNgf6C7",
"rygbzifGA7",
"rkx7a5GGCm",
"H1gPFFMfC7",
"r1xXfFzf0m",
"BJeYVNKAhX",
"rJxlOb6o2m",
"HkeXkVvo27",
"rJgxyLuWoX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1544770403795,
1543800711242,
1543800600385,
1543475630365,
1543475245498,
1542757128534,
1542757050668,
1542756735505,
1542756619251,
1541473329298,
1541292392239,
1541268442948,
1539569112028
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1055/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1055/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1055/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1055/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1055/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1055/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1055/Authors"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a reinforcement learning approach to hierarchical text classification.\", \"pros\": \"A potentially interesting idea to drive the search process over a hierarchical set of labels using reinforcement learning.\", \"cons\": \"The major consensus among all reviewers was that there were various concerns about experimental results, e.g., apple-to-apple comparisons against prior art (R1), proper tuning of hyper-parameters (R1, R2), the label space is too small (539) to have practical significance compared to tens of thousands of labels that have been used in other related work (R3), and other missing baselines (R3). In addition, even after the rebuttal, some of the technical clarity issues have not been fully resolved, e.g., what the proposed method is actually doing (optimizing F1 metric vs the ability to fix inconsistent labeling problem).\", \"verdict\": \"Reject. While the authors came back with many detailed responses, they were not enough to address the major concerns reviewers had about the empirical results.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"author response did not address the major concerns of the reviewers regarding the empirical results\"}"
"{\"title\": \"We very much appreciate you for pointing out this unclear answer.\", \"comment\": \"We very much appreciate you for pointing out this unclear answer. To further clarify our responses:\\n\\nFirst, \\u201clabel consistency\\u201d is guaranteed by our proposed label assignment policy, e.g., if a label is assigned to the document then its ancestor label must be also assigned. We apply RL to learn such a policy network to achieve the goal, as it cannot be learned using prior supervised learning objectives. In particular, to answer your question on \\u201chow much gain can we get by fixing label inconsistency issue\\u201d, we further conducted experiments to check the percentage of test documents that are predicted with inconsistent labels. For example, we found 29186/781265 (3.7%) predictions have inconsistent labels for TextCNN on RCV1. In contrast, our method HiLAP always ensures 0% label inconsistency. We are aware that such error rate does not reflect the F1 metrics, but it can provide a rough picture of how severe the issue is in existing methods. More details of label inconsistencies and ways of correcting them can be found in [1].\\n\\nSecond, sample F1 is a non-differentiable metric and thus we apply RL to optimize it. Loss function in some prior work can be seen as optimizing the multi-label classification accuracy (or \\u201csubset 0/1 accuracy\\u201d) [2], and is thus relatively more sensitive to label bias. To the best of our knowledge, we are the first to propose optimizing F1 metric for hierarchical classification. \\n\\n[1] A Survey of Hierarchical Classification Across Different Application Domains\\n[2] SeCSeq: Semantic Coding for Sequence-to-Sequence based Extreme Multi-label Classification\"}",
"{\"title\": \"Thank you very much for the reply!\", \"comment\": \"Thank you very much for the reply!\\n\\nFirst, we want to clarify that the test set used in all experiments reported in Table 2 is *identical*--we adopted the test set given by the RCV1 dataset and so did the other compared methods in Table 2. \\n\\nSecond, we understand that methods compared in Table 2 have different model capacity (due to differences in model architecture, etc.). However, Table 2 aims to present our validation and justification on two main things: (1) our proposed framework can build on top of existing base models (e.g., TextCNN, HAN and bow-CNN) to improve *in the setting of hierarchical classification* (thus, we show different variants of our method using different base models). These comparisons, in particular, mitigate the differences in model capacity; and (2) our best-performing method, HiLAP (bow-CNN), can advance the state-of-the-art method reported on RCV1 (thus, the set of various methods we compared).\\n\\nWe will emphasize this detail in our final version, and really hope you could consider our clarification on this problem.\"}",
"{\"title\": \"thanks for clarifying, one q still remains\", \"comment\": \"Q5/Q6:\\n\\n\\\"most of the rows in Table 2 does not seem comparable with each other \\\"\\n\\nI am not able to find a good answer to this question. Different models seem to have different embedding sizes and #parameters and hence different model capacity. This will most definitely cause different performances on the test set. \\n\\nOne way to fix this issue - which is not applicable here, nor in any work mentioned in Table 2, is to use the exact same identical test set across all works; for e.g. most of the works' results on imagenet are comparable with each other despite using different parameters since they use an identical test set.\"}"
"{\"title\": \"Clarifying my questions since I don't feel like they were answered.\", \"comment\": \"Q1/Q2: Going back to my questions,\\n\\\"Is it optimizing F1 metric or is it the ability to fix inconsistent labeling problem ? \\\"\\nBased on the authors' response it looks like the answer is both. If yes, it is natural to tease out the effect of each, for e.g. without the hierarchy part, how much is rl helping improve per example f1.\\n\\n\\\"If it is the latter, what is an example of inconsistent labeling, what fraction of errors (in table 2/3) are inconsistent errors. Are we really seeing the inconsistent errors drop ?\\\"\\nI did not get the author's response to what fraction of errors in other methods are due to inconsistent labeling.\\n\\n- If it is the former, how does this compare to existing approaches for optimizing F1 metric.\\nI think this question still remains.\", \"q3\": \"Thanks for clarifying !\"}"
"{\"title\": \"[Response to Review 3] - Experiment\", \"comment\": \"-------------------------------\", \"q4\": \"\\u201cWith 10 roll-outs per training sample, imho, it seems unrealistic that the expected reward can be computed correctly. Wouldn't most of the reward just be zero ? Or is it the case the model is initialized with MLE pretrained parameters (which seems like it, but im not too sure).\\u201d\", \"a4\": \"Yes, the model is initialized with pre-trained parameters. The 10 roll-outs are independently performed and should be similar to each other with slight variance (when the distribution is pretrained). The same approach is used in [5,6].\\n\\n[5] End-to-End Reinforcement Learning for Automatic Taxonomy Induction ACL 2018\\n[6] Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning ICLR 2018\\n\\n\\n-------------------------------\", \"q5\": \"\\u201c..., most of the rows in Table 2 does not seem comparable with each other due to pretrained word-embeddings and dataset filtering, e.g. SVM-variants, HLSTM.\\u201d\", \"a5\": \"There is no data filtering for RCV1. We follow the original training/test split, which is commonly used in most previous work, including those reported by [9]. The dimension sizes of word-embeddings are both 50 in our paper and [9]. In fact, [9] retrained the word-embeddings on the stemmed corpus, which they claimed to be better than original words and we reported their performance on the stemmed version. In that sense, their performance is expected to be even lower under the same setup.\\n\\n-------------------------------\", \"q6\": \"\\u201cin addition to above, there is the standard issue of using different #parameters across models which increases/decreases model capacity.
This is ok as long as all parameters were tuned on held out set, or using a common well established unfiltered test set - neither of which is clear to me.\\u201d\", \"a6\": \"Sorry about the unclear description. Since there is no well-established validation set, we randomly sample a portion of the training set as the held-out set, as also adopted by prior work [7,8,9]. In particular, HiLAP and the base model use exactly the same hyperparameters all the time. We have updated the paper to include these details.\\n\\n[7] Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding NIPS 2015\\n[8] Effective Use of Word Order for Text Categorization with Convolutional Neural Networks NAACL 2015\\n[9] Large-Scale Hierarchical Text Classification with Deep Graph-CNN WWW 2018\\n\\n-------------------------------\", \"q7\": \"\\u201cIt is not clear how the F1 metric captures inconsistent labeling, which seems to be the main selling point for hi-lap.\\u201d\", \"a7\": \"Please refer to our answer in A2.\\n\\n\\n-------------------------------\", \"q8\": \"\\u201cregarding text-CNN performance, could it be that dropout is too high ? (the code was set to 0.5)\\u201d\", \"a8\": \"We tried reducing the dropout but did not observe clear performance changes. We note that [10] also sets the dropout to 0.5. That being said, HiLAP is built on top of the base model and reuses all the parameters of the base model. HiLAP (textcnn), for example, uses the same dropout rate (0.5) and should be lower/higher in performance as well.\\n\\n[10] Deep Learning for Extreme Multi-label Text Classification SIGIR 2017\"}"
"{\"title\": \"[Response to Review 3] - Method\", \"comment\": \"We really appreciate your detailed comments and valuable feedback!\\n\\n-------------------------------\", \"q1\": \"Regarding \\u201cwhat reinforcement learning gets us.\\u201d\", \"a1\": \"Existing works largely maximize the accuracy of labels and have a gap between training/test (e.g., top-down approaches train a set of classifiers independently and only consider the label dependencies during inference). In our method, training and inference are consistent and label dependencies are captured at both phases. We use RL as a tool to achieve such goals by designing a label assignment policy and rewarding the agent with the per-sample f1 score (please see definition in A3). We show that the performance can be improved through the exploration of the label hierarchy and modeling of label dependencies. In particular, reward shaping allows us to emphasize the importance of coarse-grained labels (since the label assignment begins at the root node) while prior works treat each label equally (by measuring accuracy).\\n\\n\\u201cproviding the policy network with holistic rewards\\u201d indicates the former, i.e., we can explicitly optimize the per-sample f1 score, which reflects the overall quality of the label assignment of one sample. As far as we know, there are no existing approaches that explicitly optimize F1 metric for hierarchical classification.\\n\\n\\n-------------------------------\", \"q2\": \"Regarding fixing inconsistent labeling issue.\", \"a2\": \"Sorry for the confusion. The ability to fix the inconsistent labeling problem is inherently built into the label assignment policy. By following the policy (Figure 1 and Sec 2.1), there will never be inconsistent labels. In fact, the label inconsistency is more of a practical issue. In reality, one doesn\\u2019t want to tell the users that an instance is an apple but in the meantime, not a fruit.
In terms of performance, correcting such issues naively doesn\\u2019t always provide much performance gain. E.g., if we simply add all the ancestors of predicted nodes (if these ancestors haven\\u2019t been predicted) or remove those nodes (if one of their ancestors hasn\\u2019t been predicted), there would be about ~1 F1 change (on a 100 scale). And such correction could either lead to performance gain or drop. On the contrary, we solved this problem from the root.\\n\\n-------------------------------\\nQ3. Regarding the definition of \\u201csample F1\\u201d and the optimization of sample F1.\\n\\nA3. Sorry about the confusion caused by the \\u201csample F1\\u201d metric. The sample F1 is defined for each instance in multi-label classification setting. For example, if the gold labels of instance xi are (1,2,3) and the predicted labels are (2, 3, 4, 5), its precision and recall would be 2/4 and 2/3, respectively. We adopted this name based on its use in sklearn [1]. The same metric is referred to as EBF in [2,3], and example-based F1 in LSHTC [4]. The RL reward is designed in a way that aims to capture the sample F1 metric. We have updated the description of this evaluation metric to make it more clear.\\n\\n[1] http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html\\n[2] MeSHLabeler: Improving the accuracy of large-scale MeSH indexing by integrating diverse evidence Bioinformatics 2015\\n[3] DeepMeSH: Deep semantic representation for improving large-scale MeSH indexing Bioinformatics 2016\\n[4] LSHTC: A Benchmark for Large-Scale Text Classification CoRR 2015\"}",
"{\"title\": \"[Response to Review 2]\", \"comment\": \"We very much appreciate your comments and valuable suggestions for improving the work!\\n\\n-------------------------------\", \"q1\": \"Regarding the problem that the scope is too narrow and the potential impact.\", \"a1\": \"Thank you for the feedback! We would like to highlight that hierarchical text classification is an important task with a wide range of downstream applications in natural language processing and data mining (e.g., Mesh Indexing, News categorization, Law/Patent categorization)[1, 2]. In addition, the techniques developed in this work can be naturally applied to various structured prediction problems in other related domains (e.g., image categorization, user profiling, fine-grained entity typing).\\n\\n[1] DeepMeSH: Deep semantic representation for improving large-scale MeSH indexing Bioinformatics 2016\\n[2] A Survey of Hierarchical Classification Across Different Application Domains DMKD 2011\\n\\n-------------------------------\", \"q2\": \"Scale of the experimental datasets.\", \"a2\": \"Thank you for suggesting the LSHTC dataset. In terms of method scalability, HiLAP is at the same complexity as the base model (the document representation generated by the base model is reused at each step; only several extra matrix multiplications are needed) so our proposed HiLAP framework scales as long as the base model is scalable to larger datasets. We did not include a comparison on LSHTC in our experiments for two main reasons. First, we found that the label hierarchy used in LSHTC is inherited from the Wikipedia category system, which contains noisy information due to its crowdsourcing nature. For example, one path in Wikipedia is \\u201cArts -> Books -> Bookselling -> Booktown -> \\u2026\\u201d, where \\u201cBookselling\\u201d already has nothing to do with its ancestor \\u201cArts\\u201d.
Second, the authors of LSHTC modified the Wikipedia hierarchy by creating leaf node copies for many internal nodes in the original hierarchy, making it actually a \\u201cflat\\u201d label space. For example, for one page named \\u201cBook tour\\u201d originally at internal node \\u201cBookselling\\u201d, a pseudo-node \\u201cBookselling*\\u201d would be added under \\u201cBookselling\\u201d, and \\u201cBook tour\\u201d would be moved to the leaf \\u201cBookselling*\\u201d.\\n\\n-------------------------------\", \"q3\": \"Regarding experiment results of HRSVM on the RCV1 dataset.\", \"a3\": \"Thank you for bringing up this question and sorry about the confusion here. The RCV1 dataset used in the HRSVM work [1] is not the same version as the one that was used in our work and most existing work. The numbers reported in [1] are thus not comparable. The 103 labels in the original RCV1 dataset were extended to 137 labels in [1]. Experiments reported in [2] show that the same model can obtain up to 23 Macro F1 improvement on the 137-label version, compared to the 103-label version. We have clarified the details about the dataset in our updated version.\\n\\n[1] Recursive Regularization for Large-scale Classification with Hierarchical and Graphical Dependencies KDD 2013\\n[2] Large-Scale Hierarchical Text Classification with Deep Graph-CNN WWW 2018\\n\\n-------------------------------\", \"q4\": \"Some of the references related to taxonomy adaptation, such as [3] and references therein, which are also based on modifying the given taxonomy for better classification are missing.\", \"a4\": \"Thanks for pointing out this relevant work. The line of research on taxonomy adaptation is related but has a different goal as compared to our problem setting---they aim to modify the given label hierarchy (by pruning nodes in the tree) to output a new label hierarchy which is better suited for classification.
Our work, however, deals with a fixed label hierarchy and focuses on designing a label assignment policy to traverse it in a smart way to predict labels for a document. We updated the paper to clarify this and added these references to discuss the commonalities and differences.\\n\\n-------------------------------\", \"q5\": \"Comparison with label embedding methods such as [1,2] are missing. For the scale of datasets discussed, where SVM based methods seem to be working well, it is possible that approaches [1,2] which can exploit label correlations can do even better.\", \"a5\": \"Thank you for pointing out this line of work. However, these work focus on the scenario where there are a large number of \\u201cindependent\\u201d labels (i.e., a flat label space). They do not leverage label hierarchy or any information about the structured dependencies between the labels, which is the focus of our work. For example, [2] assumes that such a label hierarchy is not available in their setup. We have included them in our related work discussion and clarified the distinctions.\"}",
"{\"title\": \"[Response to Reviewer 1]\", \"comment\": \"Thank you for your suggestions on improving the presentation of our work! We would like to conduct experiments on other structured prediction tasks in the future using the same philosophy.\\n\\n-------------------------------\", \"q1\": \"About experiment datasets and our reported performance for baseline methods.\", \"a1\": \"RCV1 is one of the few well-used datasets for hierarchical classification and we followed the original training/test split. We are not sure what the 76.6% refers to in \\u201cfar better than the 76.6%\\u201d. Threshold tuning does affect the performance but is also time-consuming. [1] avoids this issue (but doesn\\u2019t solve it) by evaluating AUC instead. We couldn\\u2019t reproduce the 84% micro-F1 of bow-CNN without threshold tuning and the best we could get is 82.7%. Based on the 82.7% base model, HiLAP (bow-cnn) then improves its performance to 83.3%.\\n\\nThe main aim of comparing HiLAP with the base models is to show that we can improve upon them by exploring the label hierarchy. For example, equipping HAN with HiLAP achieves similar performance to HR-DGCNN [2] (which also models the hierarchy) even though the original HAN is worse than HR-DGCNN. Similarly, for other base models, we can consistently improve their performance although the architecture and hyper-parameters for document representation are unchanged.\\n\\nThe \\u201capple-to-apple\\u201d comparison is mainly between HMCN [1], HR-DGCNN [2] (the most recent state-of-the-art methods on hierarchical classification) and HiLAP. 
We updated the description of baselines to make it more clear.\\n\\n[1] Hierarchical Multi-Label Classification Networks ICML 2018\\n[2] Large-Scale Hierarchical Text Classification with Deep Graph-CNN WWW 2018\\n\\n-------------------------------\", \"q2\": \"Have the network architecture been properly optimized in terms of hyper-parameters?\", \"a2\": \"Thank you very much for your valuable suggestions. We tuned the hyper-parameters on a held-out development set (such as learning rate, regularization, and vector dimensions). We believe the comparison between HiLAP and corresponding base model is relatively fair because they use exactly the same hyper-parameters throughout the experiment comparison. If we add an additional hidden layer to Kim CNN (which more or less turns it into XML-CNN[3] already, except for the dynamic pooling), the same hidden layer would also be added to HiLAP (Kim CNN). The same rule applies to the batch size. We leave such changes as further exploration.\\n\\nPerformance change w.r.t learning rate and regularization\\n-------------------------------------------------------------------------------------------------------------\\nlearning rate\\t | 1e-3\\t| 5e-4\\t| 1e-4\\t| 2e-3\\t| 1e-3\\t| 1e-3\\t | 1e-3\\t|\\nweight_decay\\t| 1e-6\\t| 1e-6\\t| 1e-6\\t| 1e-6\\t| 5e-6\\t| 5e-7\\t | 1e-7\\t|\\nMicro-F1\\t\\t| 82.73\\t| 82.51\\t| 80.86\\t| 82.44\\t| 82.48\\t| 82.6\\t | 82.2\\t|\\n\\n\\n\\n[3]Deep Learning for Extreme Multi-label Text Classification SIGIR 2017\"}",
"{\"title\": \"Clever and promising techniques to force the inference process in structured classification to converge, but experiments seem to lack apple-to-apple comparisons\", \"review\": \"This paper uses the label hierarchy to drive the search process over a set of labels using reinforcement learning. The approach offers clever and promising techniques to force the inference process in structured classification to converge, but experiments seem to lack apple-to-apple comparisons.\\n\\nHowever, I think the authors should rather present this work as structured classification, as label dependencies not modeled by the hierarchy are exploited, and as other graph structures could be exploited to drive the RL search.\\nI tend to see hierarchical classification as an approach to multi-label classification justified by a greedy decomposition that reduces both training and test time. This view has been outmoded for more than a decade, first as flat approaches became feasible, and now as end-to-end structured classification is implementable with DNNs (see for instance David Belanger's work with McCallum).\\n\\nCompared to other structured classification approaches whose scope is limited by the complexity of the inference process, this approach is very attractive. 
The authors open the optimization black box of the inference process by adding a few very clever tricks that facilitate convergence:\\n- Intermediate rewards based on the gain in F1 score\\n- Self-critical training approach\\n- \\\"Clamped\\\" pre-training enabled by the use of state embeddings that are multiplied by a transition to any state in the free mode, and just the next states in the hierarchy in the clamped mode\\n- Addition of a flat loss to improve the quality of the document representation\\n\\nWhile those tricks may have been used for other applications, they seem new in the context of hierarchical/multi-label/structured classification.\\n\\nWhile the experiments appear thorough, they could be the major weakness of this paper. The results the authors quote as representative of other approaches seem in fact entirely reproduced on datasets that were not used in the original papers, and the authors do not try an apple-to-apple comparison to determine if this 'reproduction' is fair. None of the quoted work used the 2018 version of Yelp, and I could only find RCV1 Micro-F1 experiments in Johnson and Yang, who report an 84% micro-F1, far better than the 76.6% reported on their behalf here, and better than the 82.7% reported by the authors. I read note 4 about the difference in the way the threshold is computed, but I doubt it can explain such a large difference. I did not check everything, but could not find an apple-to-apple comparison.\\n\\nHave the network architecture been properly optimized in terms of hyper-parameters?\\nIn particular, having tried Kim CNN on large label sets, I suspect the authors' setting of using a single layer after the convolution is sub-optimal. I concur with the following paper that an additional hidden layer is essential: Liu et al. \\\"Deep Learning for Extreme Multi-label Text Classification\\\". 
I also note the 32 batch size could be way too small for sparse label sets (I tend to use a batch size of 512 on this type of data).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, but would like to see a clearer set of claims with appropriate evaluation.\", \"review\": \"This work proposes an RL approach for hierarchical text classification by learning to navigate the hierarchy given a document. Experiments on 3 datasets show better performance. I'm happy to see that it was possible to\\n\\n1. \\\"we optimize the holistic metrics over the hierarchy by providing the policy network with holistic rewards\\\"\\n\\nI don't quite understand what the \\\"holistic metrics\\\" and \\\"holistic rewards\\\" are. I would like the authors to answer \\\"what exactly does reinforcement learning get us?\\\"\\n- Is it optimizing the F1 metric or is it the ability to fix the inconsistent labeling problem?\\n- If it is the latter, what is an example of inconsistent labeling, and what fraction of errors (in table 2/3) are inconsistent errors? Are we really seeing the inconsistent errors drop?\\n- If it is the former, how does this compare to existing approaches for optimizing the F1 metric?\\n\\n2. \\\"the F1 score of each sample xi\\\"\\n\\na. F1 is a population metric, so what does it mean to have F1 for a single sample?\\nb. I'm not aware of any work that shows optimizing per-example f_1 minimizes the f_1 metric over a sample.\\n\\n3. With 10 roll-outs per training sample, imho, it seems unrealistic that the expected reward can be computed correctly. Wouldn't most of the reward just be zero? Or is it the case that the model is initialized with MLE-pretrained parameters (which seems like it, but I'm not too sure)?\\n\\nResults analysis:\\n- imho, most of the rows in Table 2 do not seem comparable with each other due to pretrained word-embeddings and dataset filtering, e.g. SVM-variants, HLSTM.\\n- in addition to the above, there is the standard issue of using different #parameters across models which increases/decreases model capacity. 
This is ok as long as all parameters were tuned on held out set, or using a common well established unfiltered test set - neither of which is clear to me.\\n- it is not clear how the F1 metric captures inconsistent labeling, which seems to be the main selling point for hi-lap. \\n\\nside comment\\n- reg textcnn performance, could it be that dropout is too high ? (the code was set to 0.5)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"reinforcement learning approach for hierarchical text classification\", \"review\": \"This paper presents an end-to-end RL approach for hierarchical text classification. The paper proposes a label assignment policy for determining the appropriate positioning of a document in a hierarchy. It is based on capturing the global hierarchical structure during the training and prediction phases, as against most methods, which either exploit only local information or, in the case of neural net approaches, ignore the hierarchical structure. It is demonstrated that the method works particularly well compared to SOTA methods, especially on the macro-F1 measure, which captures the label-weighted performance. The approach seems original, and a detailed experimental analysis is carried out on various datasets.\", \"some_of_the_concerns_that_i_have_regarding_this_work_are\": [\"The problem of hierarchical text classification is too specific, and in this regard the impact of the work seems quite limited.\", \"The significance is further limited by the scale of the datasets considered in this paper. The paper needs to evaluate on much bigger datasets such as the LSHTC datasets http://lshtc.iit.demokritos.gr/. For instance, the dataset available under LSHTC3 is in the raw format, and it would be really competitive to evaluate this method against others such as Flat SVM and HRSVM[4] on this dataset, and against those from the challenge.\", \"The experimental evaluation seems less convincing; for instance, the results for HRSVM on the RCV1 dataset are quite different in this paper and in the HRSVM paper: 81.66/56.56 there vs 72.8/38.6 reported in this paper. Given that 81.66/56.56 is not too far from that given by HiLAP, it remains a question if the extra computational complexity, and lack of scalability (?) 
of the proposed method is really a significant advantage over existing methods.\", \"Some of the references related to taxonomy adaptation, such as [3] and reference therein, which are also based on modifying the given taxonomy for better classification are missing.\", \"Comparison with label embedding methods such as [1,2] are missing. For the scale of datasets discussed, where SVM based methods seem to be working well, it is possible that approaches [1,2] which can exploit label correlations can do even better.\", \"[1] K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain, Sparse Local Embeddings for Extreme Multi-label Classification, in NIPS, 2015.\", \"[2] H. Yu, P. Jain, P. Kar, and I. Dhillon, Large-scale Multi-label Learning with Missing Labels, in ICML, 2014.\", \"[3] Learning Taxonomy Adaptation in Large-scale Classification, JMLR 2016.\", \"[4] Recursive regularization for large-scale classification with hierarchical and graphical dependencies, https://dl.acm.org/citation.cfm?id=2487644\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Code released\", \"comment\": \"We released our code in an anonymized repo: https://github.com/hi-label-assignment-policy/HiLAP\"}"
]
} |
|
HyefgnCqFm | Learning Partially Observed PDE Dynamics with Neural Networks | [
"Ibrahim Ayed",
"Emmanuel De Bézenac",
"Arthur Pajot",
"Patrick Gallinari"
] | Spatio-Temporal processes bear a central importance in many applied scientific fields. Generally, differential equations are used to describe these processes. In this work, we address the problem of learning spatio-temporal dynamics with neural networks when only partial information on the system's state is available. Taking inspiration from the dynamical system approach, we outline a general framework in which complex dynamics generated by families of differential equations can be learned in a principled way. Two models are derived from this framework. We demonstrate how they can be applied in practice by considering the problem of forecasting fluid flows. We show how the underlying equations fit into our formalism and evaluate our method by comparing with standard baselines. | [
"deep learning",
"spatio-temporal dynamics",
"physical processes",
"differential equations",
"dynamical systems"
] | https://openreview.net/pdf?id=HyefgnCqFm | https://openreview.net/forum?id=HyefgnCqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxbu1On1N",
"SyxyYr0YCX",
"H1giIr0K0Q",
"S1epUNRtRm",
"HJxtGVCKAm",
"S1xxl4RtRm",
"rJgypGCt0X",
"SkeVKA1C27",
"BylmOXd6hm",
"rye2Nl8phQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544482664913,
1543263607123,
1543263571469,
1543263316886,
1543263249180,
1543263208343,
1543262903003,
1541435004320,
1541403499483,
1541394484450
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1054/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1054/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1054/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1054/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1054/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1054/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1054/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1054/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1054/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1054/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduces a few training methods to fit the dynamics of a PDE based on observations.\", \"quality\": \"Not great. The authors seem unaware of much related work both in the numerics and deep learning communities. The experiments aren't very illuminating, and the connections between the different methods are never clearly and explicitly laid out in one place.\", \"clarity\": \"Poor. The intro is long and rambly, and the main contributions aren't clearly motivated. A lot of time is spent mentioning things that could be done, without saying when this would be important or useful to do. An algorithm box or two would be a big improvement over the many long english explanations of the methods, and the diagrams with cycles in them.\", \"originality\": \"Not great. There has been a lot of work on fitting dynamics models using NNs, and also attempting to optimize PDE solvers, which is hardly engaged with.\", \"significance\": \"This work fails to make its own significance clear, by not exploring or explaining the scope and limitations of their proposed approach, or comparing against more baselines from the large set of related literature.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting area but doesn't meet quality or clarity standards\"}",
"{\"title\": \"Answer to Reviewer3 (part 2/2)\", \"comment\": \"Obviously, it would be very interesting to merge the two approaches. For instance, one aspect we will research in the coming times is to study the interplay between constraining the forecasting operator f, for example by adding constraints to the filters or by replacing them with ODE solvers and see how this affects the estimator. This can be seen as injecting prior information into f and it is an important question to see when to do this given a certain level of knowledge about the dynamics, how to do it efficiently and if additional knowledge improves the forecasting accuracy.\\n\\nOn the other hand, it is important to note that we have showed how our method actually gives us a way to easily inject physical priors into the estimator e in the partially observable setting. There is also the question of imposing more principled structural constraints in its architecture but we are convinced that this depends on the studied problem and thus one would have to study different classes of PDEs to look for appropriate NN models. This is also a direction of research which is interesting for us. \\n\\nWe hope those few points clarify our endeavors and we have tried to improve our presentation in the revised version of the paper so that this can be understood more clearly. We have also added more experiments analyzing the performance of the models at different levels of data scarcity showing how injecting prior knowledge can improve forecasting.\"}",
"{\"title\": \"Answer to Reviewer3 (part 1/2)\", \"comment\": \"We appreciate very much your approach, we are sorry that the paper has not been sufficient to clearly explain our overall approach. We will try to make up for this in this answer and clarify the revised version of the paper in accordance.\\n\\nFirst of all, we agree that the title might have been misleading. Taking into account the reviews, we have decided to change it to \\u201cLearning partially observable PDEs with neural networks\\u201d which states more explicitly what is done. In other words, what we are trying to do is to forecast space time processes which are driven by unknown PDEs having access only to partial state measurements. \\n\\nThe direction that you are suggesting which consists, as we understand it, in designing a specific architecture with explicit differential terms which might appear in the studied PDEs has actually been investigated by several recent papers. There has been different approaches, the main ones being :\\n\\n-Those which consist in numerically calculating (with finite difference schemes) candidate differential terms then regressing against measured data. Here complete states are supposed to be available and the goal is mainly to retrieve the form of the underlying PDE. Schaeffer\\u2019s \\u201cLearning partial differential equations via data discovery and sparse optimization\\u201d or Rudy et al. \\u2018s \\u201cData-driven discovery of partial differential equations\\u201d are good examples of this approach. In Raissi et al \\u2018s \\u201cPhysics Informed Deep Learning\\u201d, a similar view is taken but with automatic differentiation instead of explicit numerical schemes, still constructing a dictionary of differential terms.\\n\\n-A more hybrid approach was the one followed by Long et al \\u2018s \\u201cPDE-NET\\u201d paper where convolution filters are constrained to approximate differential operators of a certain order but still have learnable parameters. 
Again, the main goal here is to find the terms of the underlying equation with the complete states supposed to be available.\\n\\nThe work above is indeed very interesting and promising. We are actually convinced that many of their ideas will be relevant to our future research.\\n\\nHowever, in this paper, we take a different point of view :\\n\\n-First of all, our goal is not to recover the underlying equation but rather to find efficient methods for forecasting. In this regard, we have found that standard non-constrained ResNets, when supervised and evaluated with complete states, are very powerful at forecasting without overfitting training data. Thus, additional explicit constraints in the forecasting operator didn\\u2019t seem necessary, especially as we don\\u2019t want to make any general hypothesis regarding the differential terms which might intervene in the underlying unknown PDE or the way those terms can be computed. Moreover, from early experiments, those constraints didn\\u2019t lead to improvements in predictive performance, while adding numerical instabilities and sometimes difficulties in training.\\n\\n-The most important point is that we place ourselves in the more realistic setting of having access only to partially observable states of the underlying PDEs, where most of the variables cannot be directly measured. One has then to estimate a state making the forecast possible and our goal is to see whether it is still possible to build a system which consistently succeeds in doing so, with few or no prior information over the dynamics governing the studied data. 
Ideally, we would like the approach to work for any dynamical system.\", \"in_order_to_solve_this_problem\": \"-We present a general and flexible framework where forecasting is decomposed into two steps, which closely follows the way applied physicists work with this kind of PDEs translated into a NN architecture.\\n\\n-While keeping the models very generic, as you rightfully pointed out, we study different variants of algorithms obtained through this architecture (SSE and MSRE), with different levels of prior injection into the estimator (pretrained and joint training) and apply this to an important class of PDEs, with promising results up to a relatively long horizon.\\n\\n-Our results show that the unsupervised version works surprisingly well while pretraining with a simplified model (which we view as injecting a structural prior in the state), here Euler as a prior to Navier-Stokes, is interesting when data is scarce. There is more empirical evidence for this last observation in the revised version of the paper.\"}",
"{\"title\": \"Answer to Reviewer2\", \"comment\": \"Thank you very much for your review and comments. Let us address some of your concerns.\\n\\nIn this work, we present a general formulation for forecasting dynamical systems using neural networks, in a setting where we do not fully observe its state. The aim of our work is to conceive a framework that is applicable to a wide range of dynamical systems, not to focus solely on the problem of fluid dynamics prediction which was merely an example taken as application. For this reason, we have used standard neural network architectures (Resnet and UNet), that may not be novel but have proven to work well for a large range of different tasks. Nonetheless, conceiving task-specific architectures and integrating them into our formulation is possible, and indeed an interesting research direction. We wanted this work to focus on the generic framework applied to the partially observable setting and not on any problem specific issues.\\n\\nRegarding the generalization of our models, in our datasets, in the training, the validation as well as in the test sets, all sequences are generated randomly and independently, meaning the location of densities and the intensity and direction of initial fluid flows are sampled at random and independently for each sequence (there are 200 of those in each test set which were produced after the training of the models). The only parameters that are fixed through all experiments are the boundary conditions, for which generalization could be interesting to study but this was not the scope of this work and for which we have added an additional figure (in the additional figures section of the appendix) showing how models learn them, and, of course, the dynamics within each dataset. 
In other words, in the test phase (MSE results and figures), our system successfully forecasts starting with initial conditions which it has never seen during training.\\n\\nAs for the comparison with the TempoGAN paper, while it is indeed a very interesting work in the area of physics-aware neural networks, we do not see it as relevant for what we propose in this work as the authors solve a different task : while having access to the complete state of the system (including the velocity flow and vorticity which they find to have a regularizing effect) at all times, their goal is to solve the super-resolution problem where a coherent high-resolution flow is obtained from a low-resolution dynamic whereas we solve the forecasting problem at a fixed resolution with access to only a projection of the complete state of past times. However, it is an interesting direction of research to see whether it is possible to improve forecasting results by using generative networks, for example in the estimation step. At this stage, it is still not very clear how this could be implemented but we think it is worth exploring.\\n\\nYour remarks regarding the very good performance of ResNets are indeed an important point. This standard architecture can actually be seen as an instance of our framework, by simply considering the sequence of k observations as the system\\u2019s state, and setting e to be equal to Id, the identity operator. Those good performances show that the generic ResNet architecture is a particularly well suited one for dynamical systems : Actually, when we started experimenting in the fully observable situation (H=Id), just using this architecture allowed obtaining near perfect test results (we will include those experiments in the revised version of the paper). 
However, while this naive state representation seems sufficient for forecasting, and there are indeed classical theoretical arguments for this to be true, it is not necessarily the most efficient one : With the other proposed architectures, we wish to find alternative state representations better suited for forecasting where structural priors on the state can be enforced. As systems grow in complexity and data is scarcer, a good representation becomes more important. This can be seen for example with the comparative advantage of the PT model as compared to JT and the ResNet when diminishing the dataset size in the NS experiments we have added in the revised version of the paper. And, while we have only showed this on a single example, one has to keep in mind the fact that equations in real-world systems are much more complex with data in scarcer quantities, as it is costlier to obtain, so that this kind of prior can prove useful in many situations.\"}",
"{\"title\": \"Answer to Reviewer1 (part 2/2)\", \"comment\": \"We have also explored another direction in this paper, regarding the injection of structural prior through pretraining : this gives us the pretrained (PT) and jointly-trained (JT) alternatives. This allows us to explore whether we are able to constrain the structure of the estimated state with a simplified model for example, which can be a way to use prior knowledge on the governing PDE like its general form.\\n\\nThus, our framework is PDE-guided in the sense of its general construction separating clearly estimation from forecasting, in the use of the ResNet architecture, which arguably implements learned finite difference schemes, used as forecasting operator and, more importantly, in the PT case where there is indeed some knowledge of the complete state structure injected through pretraining. However, we do agree with you about the fact that the title might be a little misleading as we never explicitly input any analytical equation into our system. Thus, we have decided to modify the title to make it more explicit. This is precisely our goal : building a generic system, more or less constrained depending on the available knowledge, which is able to learn dynamics through measured observations only.\\n\\nWe present results for all those four different alternatives, weighing the strengths and weaknesses of each and trying to explain them intuitively. We will try to make this part of the presentation more palatable and clearer in the revised version as it was obviously confusing for many reviewers.\\n\\nRegarding the simplifying assumption of page 3, it is actually a subtle question to know whether the measure function is enough to reconstruct a state which allows to consistently make the forecast. There are at least two different points of view. One is the probabilistic one, where the estimated state is seen as the conditional expectation of the state given the k observations. 
The second one comes from the theory of dynamical systems and uses Takens\\u2019s embedding theorem which proves that a state can be reconstructed for a dense class of observation functions, as long as k is big enough. We didn\\u2019t want to complexify the presentation but we will add a paragraph in the appendix expanding on this question : our general opinion is that, in most cases, as long as the observations give a meaningful signal, there should be a minimal value of k which works. It is even more difficult to know how an error in estimating the state would propagate to the resulting forecast. This would depend on the chaoticity of the studied system and how sensitive it is to initial conditions so it should be considered very carefully on a case by case basis.\", \"for_other_remarks\": \"The section 8.1 is for readers who don\\u2019t have prior knowledge of fluid dynamics so that it helps build some intuition of the studied equations. The projection trick in section 8.2 is actually classical in computational fluid dynamics but we had never seen it applied in the deep learning community so we felt it might be interesting to mention it in the paper. We also agree with the link you make between our work and that of the multi-fidelity community but we are still unsure about how such a link could be implemented, it is one of the interesting future research directions we want to pursue. 
Again, if you have in mind specific papers which might be relevant to our work, we welcome any suggestion.\\n\\nFinally, thank you for pointing those typos, we will do our best in correcting them for the revised version of this paper.\\n\\n[1]: Hidden physics models: Machine learning of nonlinear partial differential equations, https://www.sciencedirect.com/science/article/pii/S0021999117309014\\n[2]: Linear Latent Force Models using Gaussian Processes, https://arxiv.org/abs/1107.2699\\n[3]: Machine learning of linear differential equations using Gaussian processes, https://www.sciencedirect.com/science/article/pii/S0021999117305582\\n[4] Data-driven discovery of partial differential equations\\u201d, Rudy et al., http://advances.sciencemag.org/content/3/4/e1602614\"}",
"{\"title\": \"Answer to Reviewer1 (part 1/2)\", \"comment\": \"Thank you very much for your extensive and detailed comments. We will try to address your remarks and concerns in this answer. We will also try to make clearer some points that we might have gone through too quickly in the paper.\\n\\nYou are right to mention the GP community, which has been very active and pioneering in the area of learning dynamical systems governed by PDEs. We have added more references to the works we have knowledge of to the revised version of our paper (such as [1, 2, 3]). Please feel free to mention any paper that you think would be relevant to add. However, it seems to us that these methods, although promising and useful in many cases, cannot directly be applied to our problem:\\n\\n-These methods have access to knowledge about the underlying PDE. Typically, in [1, 2, 4], a dictionary of differential terms intervening in the PDE is supposed to be known. In [3], the PDE is known up to a linear forcing term. In [1], in the non-linear setting, only a few unknown parameters are learned. We place ourselves in a more prior-agnostic context, where we don\\u2019t make such a constraining hypothesis.\\n\\n-These methods all rely on numerical schemes to discretize the PDE. Specifically, in [3], they use the backward Euler method for time discretization for all recovered PDEs. In [4], they add a polynomial interpolation to smooth the discrete numerical scheme. For long-term forecasting, designing and selecting these numerical schemes should be highly dependent on the underlying PDE; using a generic discretization may lead to large forecast errors and even to numerical instability. 
Our formulation does not suffer from this problem: selecting the appropriate discretization scheme is directly incorporated in the learning problem, and ResNets have proven to be quite robust.\\n\\n-Another very important difference, and it is the central issue which is addressed in this work, is the fact that we do not have access to complete states, only to partial state observations. This setting is very common when tackling real-world problems in applied physics, but we have no knowledge of other approaches which tackle it in the machine learning community. In particular, we don\\u2019t know of any such work in the GP community, and it is thus difficult to compare our work with their results.\\n \\nPrecisely for the reason that you mentioned regarding the subject being a new one in the deep learning community, we have struggled to find a strong baseline that we can compare our models to. We have chosen the ConvLSTM model, which is now a classical one in statistical forecasting, and a standard standalone ResNet, which can be seen as an instance of our framework and proved to be a strong baseline. All of the few other works on using deep learning to solve differential equations that we have knowledge of assume the state is fully observable, which is not the case in our setting. On the other hand, data assimilation algorithms used by physicists assume the equations are known and use them to estimate the true state, while our system implicitly learns them through training data. Ultimately, one would want to test our learned models against those algorithms in real-world settings where no explicit exact equations are known, to show that they work better than the hand-crafted approximations currently used. We will be working in this direction, and this paper is a first step paving the way towards this objective. 
\\n\\nThus, what we have tried to build is a framework which allows us to perform this task in two different ways: one which is classical in dynamical systems (SSE) and a second one (MSRE) which seemed natural to derive from our framework (so natural that there are certainly variants similar to it available in the time series forecasting literature, but we couldn\\u2019t find a precise reference or work using it). Intuitively, while the SSE is constrained to compress all relevant information for any time horizon into the estimated state, which is difficult, the MSRE works in a fully auto-regressive way and goes back to the observation space at each time-step, which we feel should help, especially when the number of time-steps is larger.\"}
"{\"title\": \"Modifications in the revised version of the paper\", \"comment\": \"In the revised version we have now uploaded, we have tried to take into account the different remarks and concerns of reviewers. This included:\\n-Revising the second part of the introduction to describe more clearly our contributions;\\n-Restructuring the experiments section which was confusing for many reviewers;\\n-Adding standard ResNet experiments for all datasets, as it is indeed a strong model in our setting;\\n-Adding experiments with even smaller datasets to study generalization;\\n-Completing the related works sections with some of the references suggested by the reviewers;\\n-Adding a section to the appendix expanding on the simplifying assumption stated in section 3;\\n-Adding a figure with an additional sample from the test set showing how boundary conditions are dealt with by the different models;\\n-Changing the title to make it more explicit;\\n-Correcting differents typos in the paper.\\n\\nWe have also added clarifications and corrected mistakes throughout the text.\"}",
"{\"title\": \"Review of \\\"Learning space time dynamics with PDE guided neural networks\\\"\", \"review\": \"I very much like the aim of this work. This is a problem of interest to a wide community, which as far as I'm aware hasn't yet had much focus from the deep learning community. However, perhaps in part because of this, the paper reads as naive in places. Pages 1-4 are all background saying nothing new, but ignoring the effort made on this problem by other communities. There has been some work done o this problem within statistics, and within the Gaussian process community, to which no reference is made at all by the paper.\\n\\nThere are two novelties as far as I can see (these may or may not be novel - but they were novel to me). The first is use of NNs to model the system. The second is the multiple state restimation (MRSE) on page 5. I struggled to get a feeling about how successful these two aspects of the work are. The results section is difficult to follow, and doesn't compare the method to existing methods and so there is no baseline to say that this is successful or not. Thus I find it hard to judge the execution of the idea. What I really want to know reading a paper like this is should I use this approach? Because there is no comparison to existing methods, it leaves me unsure.\", \"other_comments\": [\"Is the title correct? I don't see how these are PDE guided NNs? You've used data from a PDE to train the network and as a test problem. A PDE guided NN would, for me, know something about the dynamics (compare with recently work in the GP community where kernels are derived that lead to GPs that analytically obey simple PDEs).\", \"There is an obvious link to work in the uncertainty quantification community, particularly around the use of multi-fidelity/ multi-level simulation. 
This paper is likely to be of interest to them and the link could be more explicit.\", \"Page 3, after eq 2 - there is notation used here that is undefined Y_{t-k}^t\", \"The simplifying assumption on page 3 is very strong and unlikely to hold for many systems. But it isn't clear to me whether this is necessary or not? Presumably if it doesn't hold then we may still get an approximation that could be useful, but it is just that we lose any guarantee the method will work.\", \"I thought the MSRE idea was interesting. It wasn't very well explained or motivated, and it was unclear to me whether it works well or not from the results, or whether it is novel to this paper or not. But I'd like to have read more about it.\", \"Is the trick in Section 8.2 original to this paper? If so, it seems a nice idea (I've not checked the detail).\", \"Most of section 8.1 strikes me as unnecessary.\", \"There are quite a few typos. In particular, words such as Markovian, Newtonian should be capitalised.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A good step in learning Navier-Stokes equations, but lack compelling results\", \"review\": [\"An interesting idea to learn the hidden state evolution and the state-observation mapping jointly\", \"The experiments on Euler's equation are slightly better than ResNet for 30 steps ahead forecasting in terms of MSE\", \"The paper is clearly written and well-explained\", \"The model is not new: ResNet for state evolution and Conv-Deconv for state-observation mapping\", \"The difference between ResNet and the proposed framework is not significant, ResNet is even better in Figure 2\", \"Missing an important experiment: test whether the model can generalize, that is to forecast on different initial conditions than the training dataset\", \"How does the model compare with GANs (Y. Xie* , E. Franz* and M. Chu* and N. Thuereyy, \\u201ctempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow\\u201d)?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Some questions in lieu of a review, for now\", \"review\": \"I feel like I am missing something about this paper, so rather than a review, this is just mainly a long question making sure I understand things properly. Ignore the score for now, I'll change once I get a clearer picture of what's happening here.\\n\\nThe network you propose in this paper is motivated by solving PDEs where, as in (1), the actual solution as they are computed numerically depends on the current spatial field of the state, as well as difference operators over this field (e.g., both the gradients and the Laplacian terms). So, I naturally was assuming that you'd be designing a network that actually represented state as a spatial field, and used these difference operators in computing the next state. But instead, it seems like you reverted to the notion of \\\"because difference operators can be expressed as convolutions, we use a convolutional network\\\", and I don't really see anything specific to PDEs thereafter, just general statements about state-space models.\\n\\nAm I understanding this correctly? Why not just actually use the PDE-based terms in the dynamics model of an architecture? Why bother with a generic ResNet? (And I presume you're using a fully convolutional ResNet here?) Wouldn't the former work much better, and be a significantly more interesting contribution that just applying a ResNet and a generic U-Net as a state estimator? I'm not understanding why the current proposed architecture (assuming I understand it correctly) could be seen as \\\"PDE guided\\\" in all but the loosest possible sense. Can you correct me if I'm misunderstanding some element here?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rJgMlhRctm | The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | [
"Jiayuan Mao",
"Chuang Gan",
"Pushmeet Kohli",
"Joshua B. Tenenbaum",
"Jiajun Wu"
] | We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the searching over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval. | [
"Neuro-Symbolic Representations",
"Concept Learning",
"Visual Reasoning"
] | https://openreview.net/pdf?id=rJgMlhRctm | https://openreview.net/forum?id=rJgMlhRctm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkl972bf9E",
"r1lTyZCSxN",
"BylnXIUC0Q",
"r1xJIbv5A7",
"ryxKPxw9Rm",
"SJgbnnCgRX",
"rJxoZ-_ipQ",
"SyxhAx_jpm",
"rJl4slOsa7",
"Bkx7KxOjpX",
"rJx2mlOjTQ",
"HJeHGTF5nX",
"r1g6tF8F3X",
"Sklo1V_znQ"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1555336226405,
1545097444604,
1543558692139,
1543299399305,
1543299168949,
1542675625403,
1542320386922,
1542320340336,
1542320284111,
1542320250822,
1542320164421,
1541213453202,
1541134724770,
1540682723099
],
"note_signatures": [
[
"~Dzmitry_Bahdanau1"
],
[
"ICLR.cc/2019/Conference/Paper1053/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1053/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1053/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1053/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1053/AnonReviewer1"
]
],
"structured_content_str": [
"{\"comment\": \"Hi, nice paper!\", \"two_quick_questions\": \"1) Can you elaborate how (if) the models learn how many attributes/concepts are there? E.g. in CLEVR there are 4 attributes that take 3, 4, 4, 8 values. Are these numbers learn by the model, or are they given? I read the appendix but I am still not sure I understand.\\n2) Do you by any chance plan to release the code?\", \"title\": \"a few questions\"}",
"{\"metareview\": \"Strong paper in an interesting new direction.\\nMore work should be done in this area.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Strong paper in an interesting new direction\"}",
"{\"title\": \"No reason not to raise the score\", \"comment\": \"The authors sufficiently clarified the experimental procedures for fair comparisons what I had concerned. Although the work seems to be limited in natural images and language (VQS), I appreciate the authors to include in the paper for the future works.\\n\\nI decide to increase my rating by 1.\"}",
"{\"title\": \"Thanks for your patience. Revision uploaded.\", \"comment\": \"Dear reviewer, we have updated our paper with the promised results.\\n\\n1. Train/Val/Test split:\\nWe have included the new results of NS-CL using 100% of the CLEVR training images. We use 95% of the training images for learning, and the remaining 5% for validation, hyper-parameter tuning, and model selection. Validation images are used only in testing. This further pushes the overall accuracy of NS-CL to 99.2% on the validation split. Please refer to Section 4.2 for the new results. We adopt the same strategy in all newly added experiments, including those on the Minecraft dataset and the VQS dataset. For the results using only 10% of the CLEVR training images, we simply used the training set accuracy for model selection. \\n\\nWe have tried to contact the authors of the CLEVR dataset, and will be pleased to share further information regarding the test split upon receiving any responses.\\n\\n2. Object-based Representations and Data Efficiency.\\nThank for suggesting the related work and the additional baselines. We have added additional experiments that incorporate object-based representations into TbD/MAC (Section 4.2). NS-CL achieves higher data efficiency. We believe that this comes from the full disentanglement of visual concept learning and symbolic reasoning: how to execute program instructions based on the learned concepts is programmed in NS-CL.\\n\\nCompared with the attention-based baselines, our use of symbolic programs enables better integration with object-based representations, e.g., in modelling relations and quantities. For the detailed implementation of the baselines, please refer to Appendix E.3.\\n\\nThanks again for your comments.\"}",
"{\"title\": \"General Response: Revision Uploaded\", \"comment\": \"We thank all reviewers for their constructive comments and have updated our paper accordingly. Please check out the new version!\\n\\nSpecific changes include\\n\\n1) We have compared with additional baselines that incorporate object-based representation with attention-based methods (MAC/TbD). The results are in Section 4.2 and the implementation details are in Appendix E.3. The symbolic program execution module in NS-CL shows better utilization of object-based representations. \\n\\n2) We provided a systematic analysis of data efficiency in Section 4.2. NS-CL achieves higher data efficiency by disentangling visual concept learning and program-based symbolic reasoning.\\n\\n3) We added the results on a new visual reasoning testbed --- the Minecraft dataset. Results can be found in Appendix F.1.\\n\\n4) We added both quantitative and qualitative results on the VQS dataset, composed of natural images from the COCO dataset and human-annotated question-answering pairs. Please kindly find these results in Section 4.6 and the implementation details in Appendix F.2. NS-CL achieves a comparable results with the baselines and learns visual concepts from the noisy inputs.\\n\\n5) We have cited and discussed the suggested related work.\\n\\n6) We have also included more discussions on future work.\\n\\nPlease don\\u2019t hesitate to let us know for any additional comments on the paper.\"}",
"{\"title\": \"Still waiting for the results with fair comparisons w.r.t Author Feedback 2.\", \"comment\": \"Sincerely thank you for the detailed explanations and comments for a constructive rebuttal.\", \"re\": \"2. Object-based representations and baselines\\nR2) With the positive results, I would like to consider increasing my rating considering the authors' argument of fair comparison.\"}",
"{\"title\": \"Our General Response\", \"comment\": \"We thank all reviewers for their comments. In addition to the specific response below, here we summarize our goal and the changes planned to be included in the revision.\\n\\nWe study concept learning---discovering both visual concepts and language concepts from natural supervision (unannotated images and question-answer pairs). With these learned concepts, our model can solve many problems, such as image captioning, retrieval, as well as VQA. But here the ability to solve VQA is really a by-product, not our end goal---learning accurate (Sec. 4.1), interpretable (Sec. 4.2), and transferrable (Sec. 4.5) concepts. \\n\\nWe agree with the reviewers that it\\u2019s important to demonstrate how our model works on real images with more complex visual appearance. As suggested, we plan to include the following changes in the revision by Nov. 26 (the new official revision deadline, extended from Nov. 23):\\n- We will include quantitative and qualitative results on new datasets: the VQA dataset of real-world images [1] and the Minecraft dataset used by Yi et al. [2].\\n- We will add a systematic study regarding the data efficiency of our model, compared with other VQA baselines in Sec. 4.2.\\n- We will compare our model with other baselines (TbD and MAC) built upon the object-based representations.\\n- We will include additional discussions on limitation and future work.\\n\\nPlease don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n[1] Antol, Stanislaw, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. \\\"Vqa: Visual question answering.\\\" In ICCV, 2015.\\n[2] Yi, Kexin, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B. Tenenbaum. \\\"Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding.\\\" In NIPS, 2018.\"}",
"{\"title\": \"Our Response to Reviewer 1\", \"comment\": \"Thank you very much for the constructive comments.\\n\\n1. Semantic parsing.\\nIn short, the semantic parsing module is a neural sequence-to-tree model. Given a natural language question, the module translates into an executable program with a hierarchy of primitive operation. We present an overview in Sec. 3.1 (last paragraph of Page 4), with more implementation details in Appendix B. We\\u2019ll revise the text for better clarity.\\n\\nThe module begins with encoding the question into a fixed-length embedding vector using a bidirectional GRU. The decoder, taking the sentence embedding as input, recovers the hierarchy of the operations in a top-down manner: It first predicts the root token (the question type: query/count/\\u2026 in the VQA case); then, conditioned on the root token, it predicts the tokens of the root\\u2019s children. The decoding algorithm runs recursively.\\n\\n2. Counting.\\nWe perform counting in a quasi-symbolic manner, based on the object-based scene representation. As an example, consider a simple program: Count(Filter(Red)), which counts the number of red objects in the scene. The operation Filter(Red) assigns each object with a value p_i, as the confidence of classifying this object as a red one. Counting is performed as: $\\\\sum_i p_i$. During inference, we round this value to the nearest integer. More details can be found in Sec. 3,1. (Page 5) and Appendix C. We will also revise the text for better clarity.\\n\\nCompared with alternatives, our method enjoys combinatorial generalization with the notion of `objects\\u2019: for example, trained on scenes with <= 6 objects, our model can also perform counting on scenes with 10 objects.\\n\\n3. Future direction\", \"we_thank_the_reviewer_for_the_suggestions_on_future_directions_and_will_include_the_following_discussions_in_the_revision\": \"Compositionality. 
We currently view the scene as a collection of objects with latent representations. Building scene (or video) representations that also reflects the compositional nature of objects (e.g., an object is a combination of multiple primitives) will be an interesting research direction. \\n\\nInfer relations from words and behavior. Modelling actions (e.g., push and pull) as concepts is another interesting direction. People have studied the symbolic representation of skills [1] and learning word (instruction) meanings from interaction [2].\\n\\nVideos and words. Our framework can also be extended to the video domain. Video techniques such as detection and tracking are needed to build the object-based representation [3]. Also, the semantic representation of sentences should be extended to include actions / interactions besides static spatial relations. \\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n\\n[1] Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. \\\"From skills to symbols: Learning symbolic representations for abstract high-level planning.\\\" Journal of Artificial Intelligence Research 61 (2018): 215-289.\\n[2] Oh, Junhyuk, Satinder Singh, Honglak Lee, and Pushmeet Kohli. \\\"Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning.\\\" In ICML, 2017.\\n[3] Baradel, Fabien, Natalia Neverova, Christian Wolf, Julien Mille, and Greg Mori. \\\"Object Level Visual Reasoning in Videos.\\\" In ECCV, 2018.\"}",
"{\"title\": \"Our Response to Reviewer 3 (Part 1)\", \"comment\": \"Thank you very much for the constructive comments.\\n\\n1. Train/test split\\nOur evaluation is valid and fair, because all previous papers have also reported results only on the validation set, and we follow the tradition in this paper. They did this because there are no ground-truth labels or evaluation servers provided for the CLEVR test split. Evaluation on the test split is therefore impossible. We agree that it\\u2019s important to ensure all evaluation valid, and we\\u2019ll include this clarification into the revision.\\n\\n2. Object-based representations and baselines\\nThanks for the suggestion. We\\u2019ll cite and discuss the paper that used object-based visual representation. We will also add additional experiments that incorporate object-based representations into TbD/MAC: Instead of the image feature extracted from a ResNet, we change the input visual feature to the reasoning neural architecture to be an object-based representation as in [1]. Please let us know if you have any suggestion regarding the comparison.\\n\\nWe also want to clarify that the object-based representation alone is not the main contribution of the paper. Instead, our key contribution is the integration of object-based representations and symbolic reasoning. Such combination helps us disentangle visual concept learning and language understanding, and has three advantages over alternatives, as explored in the paper:\\n\\n1) Executing symbolic programs on object-based representations naturally facilitates complex reasoning that includes quantities (counting), comparisons, and relations. It also brings combinatorial generalization by design (Sec. 
4.4): for example, trained on scenes with <= 6 objects, our model (but not the baselines) can also perform counting on scenes with 10 objects.\\n\\n2) It fully disentangles the visual concept learning and reasoning: once the visual concepts are learned, they can be systematically evaluated (Sec. 4.1) and deployed in any visual-semantic applications (such as image caption retrieval, as shown in Sec. 4.5). In contrast, earlier methods like IEP, TbD, and MAC learn visual concepts and reasoning in an entangled manner and cannot be easily adapted to new problem domains (e.g., as shown in Table 6, VQA baselines are only able to infer the result on a partial set of the image-caption data).\\n\\n3) Symbolic execution over the object space brings full transparency. One can easily trace back an erroneous answer and even detect adversarial (ambiguous or wrong) questions (please refer to Appendix E for some examples).\\n\\n3. Limitation and future work\\nWe\\u2019d like to clarify that we are not targeting a specific application such as VQA; instead, we want to build a system that learns accurate (Sec. 4.1), interpretable (Sec. 4.2), and transferrable (Sec. 4.5) concepts from natural supervision: images and question-answer pairs. To achieve this, we propose a novel framework that 1) disentangles the learning of both, but 2) bridges them with a reasoning module and 3) lets them bootstrap the learning of each other.\\n\\nToward concept learning from realistic images and complex language, the current model design suggests multiple research directions. First, our model relies on object-based representations; constructing 3D object-based representations for realistic scenes (or videos) needs further exploration [1,2]. Second, our model assumes a domain-specific language for a formal description of semantics. The integration of formal semantics into the processing of complex natural language would be meaningful future work [3,4]. 
We hope our paper could motivate future research in visual concept learning, language learning, and compositionality.\"}",
"{\"title\": \"Our Response to Reviewer 3 (Part 2)\", \"comment\": \"4. Specific Questions\\n- Choice of hyperparameters.\\nWe use the open-sourced implementation of Mask-RCNN [5] to generate object proposals. For all the training processes described in the rest of the paper, we used learning rate 1e-3 with a weight decay of 5e-4. We decay the learning rate by a factor 0.1 after 60% of the designated training epochs. The REINFORCE optimizer uses a discount factor of 0.95. In the main text, the variance of REINFORCE means the variance of the gradient estimation but not the variance of the performance (accuracy). We will also add the standard deviation of the model performance in the revision. \\n\\n- Data-efficiency\\nThanks for the very nice suggestion. We have conducted a more systematic study on the data efficiency and will include it the revision. The results are\\n\\nTrained on 10% of the images:\", \"tbd\": \"99.1%.\", \"mac\": \"98.9%.\", \"ns_cl\": \"99.2%.\\n\\nThese results demonstrate that our model is more data-efficient.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n[1] Anderson et al. \\\"Bottom-up and top-down attention for image captioning and visual question answering.\\\" In CVPR, 2018.\\n[2] Baradel et al. \\\"Object Level Visual Reasoning in Videos.\\\" In ECCV, 2018.\\n[3] Artzi, Yoav, and Zettlemoyer. \\\"Weakly supervised learning of semantic parsers for mapping instructions to actions.\\\" TACL 1 (2013): 49-62.\\n[4] Oh et al. \\\"Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning.\\\" In ICML, 2017.\\n[5] https://github.com/roytseng-tw/Detectron.pytorch\"}",
"{\"title\": \"Our Response to Reviewer 2\", \"comment\": \"Thank you very much for the encouraging and constructive comments. We agree that generalizing to more complex visual domains would be essential for our task. In the revision, we will include the results of NS-CL on new datasets, including the VQA dataset of real-world images [1] and the Minecraft dataset used by Yi et al. [2].\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n[1] Antol, Stanislaw, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. \\\"Vqa: Visual question answering.\\\" In ICCV, 2015.\\n[2] Yi, Kexin, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B. Tenenbaum. \\\"Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding.\\\" In NIPS, 2018.\"}",
"{\"title\": \"Interesting end-to-end joint learning of visual concepts and semantic parsing but experiments are limiting\", \"review\": \"\", \"summary\": \"=========\\nThe paper proposes a joint learning of visual representation and word and semantic parsing of the sentences given paired images and paired Q/A with a model called neuro-symbolic concept learner using curriculum learning. The paper reads well and is easy to follow. The idea of jointly learning visual concepts and language is an important task. Human reasoning involves learning and recall from multiple moralities. The authors use the CLEVR dataset for evaluation.\", \"strength\": [\"========\", \"Jointly learning the language parsing and visual representations indirectly from paired Q/A and paired images is interesting. Combining the visual learning with the visual questions answers by decomposing them into primitive symbolic operations and reasoning in symbolic space seems interesting.\", \"End-to-end learning of the visual concepts, Q/A decomposition into primitives and program execution was shown to be competitive to baseline methods.\"], \"weakness\": [\"=========\", \"Although, the joint learning and composition is interesting, the visual task is simplistic and it is not obvious how this would generalize into other complex VQA tasks.\", \"Experiments are not as rigorous as the discussion of the methods suggests. Evaluation on more datasets would have made the comparisons and drawn conclusions more stronger. Although CLEVR is suited for learning relational concepts from referential expressions, it is a toy dataset. Applicability of the proposed method on other realistic datasets would have made the paper more stronger.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Concern of invalid evaluation and vague demonstration of the contribution\", \"review\": \"To achieve the state-of-the-art on the CLEVR and the variations of this, the authors propose a method to use object-based visual representations and a differentiable quasi-symbolic executor. Since the semantic parser for a question input is not differentiable, they use REINFORCE algorithm and a technique to reduce its variance.\", \"quality\": \"The issue of invalid evaluation should be addressed. CLEVR dataset has train, validation, and test sets. Since the various hyper-parameters are determined with the validation set, the comparison of state-of-the-art should be done using test set. As the authors mentioned, REINFORCE algorithm may introduce high variance, this notion is critical to report valid results. However, the authors only report on the validation set in Table 2 including the main results, Table 4. For Table 5, they only specify train and test splits. Therefore, I firmly recommend the authors to report on the test set for the fair comparison with the other competitive models, and please describe how to determine the hyperparameters in all experimental settings.\", \"clarity\": \"As mentioned above, please specify the experimental details regarding setting hyperparameters.\\nIn Experiments section, the authors used less than 10% of CLEVR training images. How about to use 100% of the training examples? How about to use the same amount of training examples in the competitive models? The report is incomplete to see the differential evident from the efficient usage of training examples.\", \"originality_and_significance\": \"The authors argue that object-based visual representation and symbolic reasoning are the contributions of this work (excluding the recent work, NS-VQA < 1 month). 
However, bottom-up and top-down attention work [1] shows that attention networks using object-based visual representation significantly improve VQA and image captioning performance. If the object-based visual representation alone is the primary source of improvement, it severely weakens the argument of the neuro-symbolic concept learner. Since, considering the trend of gains, the contribution of the proposed method seems incremental, this concern is inevitable. To address this criticism, an additional experiment showing the improvement of other attentional models (e.g., TbD, MAC) using object-based visual representations, without any other annotations, is needed.\", \"pros\": [\"To confirm the effective learning of visual concepts, words, and semantic parsing of sentences, they insightfully exploit the nature of the CLEVR dataset for visual reasoning diagnosis.\"], \"cons\": [\"Invalid evaluation that reports only on the validation set, not the test set.\", \"The unclear significance of the proposed method combining object-based visual representations and symbolic reasoning\", \"In the original CLEVR dataset paper, the authors said \\\"we stress that accuracy on CLEVR is not an end goal in itself\\\" and \\\"..CLEVR should be used in conjunction with other VQA datasets in order to study the reasoning abilities of general VQA systems.\\\" Based on this suggestion, can this work generalize to real-world settings? This paper fails to discuss its limitations and future directions toward general problem settings.\"], \"minor_comments\": \"In 4.3, please fix the typos, \\\"born\\\" -> \\\"brown\\\" and \\\"convlutional\\\" -> \\\"convolutional\\\".\\n\\n\\n[1] Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2018). Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. 
IEEE Computer Vision and Pattern Recognition (CVPR'18).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Excellent paper including many cutting edge techniques\", \"review\": \"The paper is well written and flows well. The only thing I would like to see added is an elaboration of\\n\\\"run a semantic parsing module to translate a question into an executable program\\\". How to do semantic parsing is far from obvious. This topic needs at least a paragraph of its own. \\n\\nThis is not a requirement but an opportunity: can you explain how counting works? I think you have it at the standard level of the magic of DNNs, but some digging into the mechanism would be appreciated. \\n\\nIn concluding, maybe you can speculate how far this method can go. Compositionality? Implicit relations inferred from words and behavior? Application to video with words?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BygfghAcYX | The role of over-parametrization in generalization of neural networks | [
"Behnam Neyshabur",
"Zhiyuan Li",
"Srinadh Bhojanapalli",
"Yann LeCun",
"Nathan Srebro"
] | Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization. We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks. | [
"Generalization",
"Over-Parametrization",
"Neural Networks",
"Deep Learning"
] | https://openreview.net/pdf?id=BygfghAcYX | https://openreview.net/forum?id=BygfghAcYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlb5OU4lE",
"rklmVdS_AX",
"BJeOrBBOCm",
"Sygwq4iZC7",
"SkgLfnBiTm",
"SJeStEOcTm",
"HklzMNu9TX",
"SkezHFcITm",
"rkeQQzTK3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545001097472,
1543161899003,
1543161151931,
1542726798996,
1542310926355,
1542255740836,
1542255626461,
1542003002208,
1541161498843
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1052/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1052/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1052/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1052/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1052/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1052/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1052/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1052/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1052/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"I agree with the reviewers that this is a strong contribution and provides new insights, even if it doesn't quite close the problem.\\n\\np.s.: It seems that centering the weight matrices at initialization is a key idea. The authors note that Dziugaite and Roy used bounds that were based on the distance to initialization, but that their reported numerical generalization bounds also increase with the increasing network size. Looking back at that work, they look at networks where the size increases by a very large factor (going from e.g. 400,000 parameters roughly to over 1.2 million, so a factor of 2.5), while at the same time the bound increases by a much smaller factor. The type of increase also seems much less severe than those pictured in Figures 3/5. Since Dziugaite and Roy's bounds involved optimization, perhaps the increase there is merely apparent.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"strong paper\"}",
"{\"title\": \"Final revision is uploaded - All reviewers' comments are addressed - Thank you for your valuable feedback\", \"comment\": \"We thank all reviewers for their useful feedback. The final revision is uploaded . This version has addressed all reviewers' comments. We believe that the quality of our paper has improved in the discussion process. We again thank all reviewers for their time and effort.\"}",
"{\"title\": \"Thanks for your feedback - All comments are addressed in the revision\", \"comment\": \"Thank you for your valuable feedback. We have uploaded a revision addressing all your comments.\\n\\nIn particular, we have made the following changes:\\n\\n1) Improved the bibliography significantly \\n2) Toned down the claims of the paper \\n3) Fixed typos. \\n\\nThanks again for pointing to these issues.\"}",
"{\"title\": \"Solid paper.\", \"review\": \"The authors aim to shed light on the role of over-parametrization in generalization error. They do so for the special case of 2 layer fully connected ReLU networks, a \\\"simple\\\" setting where one still sees empirically that the test error decreases as over-parametrization increases.\\n\\nBased on empirical observations of norms (and norms relative to initialization) in trained overparametrized networks, the authors are led to the definition of a new norm-bounded class of neural networks. Write u_i for the vector of weights incoming to hidden node i. Write v_i for the weights outgoing from hidden node i. They study classes where the Euclidean norm of v_i is bounded by a constant alpha_i and where the Euclidean norm of u_i - u^0_i is bounded by beta_i, where u^0_i is the value of u_i after random initialization. Call this class F_{alpha,beta} where alpha,beta are specific vectors of bounds.\\n\\nThe main result is a bound on the empirical Rademacher complexity of F_{alpha,beta}. \\nThe authors also give lower bounds on the empirical Rademacher complexity for carefully chosen data points, showing that the bounds are tight. These Rademacher bounds yield standard bounds on the ramp loss for fixed alpha,beta, and margin, and then a union bound argument extends the bound to data-dependent alpha,beta and margin.\\n\\nThe authors compare the bounds to existing norm-based bounds in the literature. The basic argument is that the terms in other bounds tend to grow as networks get much larger, while their terms shrink. Note that at no point are the bounds in this paper \\\"nonvacuous\\\", i.e., they are always larger than one.\\n\\nIn summary, I think this is a strong paper. The explanatory power of the results is still oversold in my opinion, even if they use hedged language like \\\"could explain the role...\\\". But the work is definitely pointing the way towards an explanation and deserves publication. 
The technical results in the appendix will be of interest to the learning theory community.\", \"issues\": \"\\\"could explain role of over-parametrization\\\". Perhaps this work might point the way to an explanation, but it does not yet provide an explanation. It is a big improvement it seems.\\n\\n\\\"bound improves over the existing bounds\\\". From this statement and the discussion comparing the bounds, it is not clear whether this bound formally dominates existing bounds or merely does so empirically (or under empirical conditions).\", \"typos\": \"bigger than the Lipschitz CONSTANT of the network class\\n\\nH undefined\\n\\nRademacher defined for H but must be defined on loss class (or a generic function class, not H)\\n\\n\\\"we need to cover\\\" --> \\\"it suffices to\\\"\\n\\n\\\"the following two inequaliTIES hold by Lemma 8\\\"\", \"bibliography_is_a_mess\": \"half of the arxiv papers are published. typos everywhere, very sloppy.\\n\\n(This review was requested late in the process due to another reviewer dropping out of the process.)\\n\\n[UPDATE]. The authors addressed my concerns stated in my review above. I think the bibliography has improved and I recommend acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"All majors issues fixed\", \"comment\": \"Thank you for the quick reply, at this point I believe both of the major issues are properly addressed, and the proofs are rigorous. As promised, I would recommend accepting this paper.\\n\\nOne more minor typo in Lemma 10 - in the last equation block where we plug in the value of \\\\| alpha \\\\|_p, I believe you initially plugged in the value of p-th power of it. Instead I believe it should be \\n beta D^{1/2 - 1/p} (1 + D/K)^{1/p}\\nOnce again, this is a very minor issue, and I can see the rest of the results follow from this correction.\"}",
"{\"title\": \"Thanks for your positive feedback and suggested improvements.\", \"comment\": \"Thanks for your positive feedback and suggested improvements.\\n\\n1) We have not claimed in the paper that our bound decreases with the network size but rather shows correlation with the test error, which is an empirical observation. To make this very clear, we have updated the abstract to emphasize that the correlation of the bound with the test error is for network sizes within the range reported in the experiments.\\n \\n2) Since the l_gamma loss is (\\\\sqrt{2}/gamma)-Lipschitz, the Rademacher complexity of l_gamma o F is (\\\\sqrt{2}/gamma) times Rademacher complexity of F so the important object to calculate the complexity measure is F and our lower bound is given for F. We will clarify this confusion in the final version.\"}",
"{\"title\": \"Clarification of the Proofs\", \"comment\": \"Thanks a lot for reading our paper very carefully and helping us improve the readability and validity of the proofs with your suggestions. We are glad that you found our paper to be a significant contribution to the understanding of over-parameterization in deep learning. We have applied all your suggestions in the revision which is uploaded in the openreview. Here we clarify the two issues you raised regarding the proofs:\\n\\n1) Lemma 10: As you guessed, it is indeed the case that the precise way to state is that \\u201cthere exist such \\\\alpha\\u2019\\u2019 \\u201c. This \\\\alpha\\u2019\\u2019 can be constructed by simply increasing the value along the last dimension of the \\\\alpha\\u2019 to get the desired norm. We have updated the paper with the clarification.\\n\\n2) Theorem 3: You are right about the inequality in the proof of Theorem 3. This was a typo which can be fixed by replacing max{ <s, f_i> , <s, -f_i> } by max{ <s, [f_i]_+> , <s, [-f_i]_+> } in the left hand side. And this is indeed the quantity we use in the later part of the proof. We have corrected this typo in the revision.\\n\\nGiven that we have resolved the two issues you raised, we respectfully ask you to increase the score to reflect the significance of this work on understanding the role of over-parameterization in neural networks. We thank you again for your valuable feedback.\"}",
"{\"title\": \"Promising paper, with a couple of clarifications needed\", \"review\": \"Let me start by apologizing for the delayed review - in fact I was asked today to replace an earlier assigned reviewer. Hopefully the clarifications I request won't be too time consuming to meet the deadline coming up.\\n\\n###\\n\\nFirst of all, the problem which the authors are attempting to answer is quite important: the effect of over-parametrization is not well understood on a theoretical level. As the paper illustrate, 2-layer networks are already capable of generalizing while being over-parameterized, therefore justifying their setting. \\n\\nNext this paper motivates the study of complexity quantities that tend to decrease with the number of parameters, in particular figure 3 motivates the conjecture that the complexity measure in Theorem 2 can control generalization error. The paper also does a great job comparing related work, motivating their results. \\n\\n###\\n\\nAt this point, I would like to request a couple of clarifications in the proofs. Perhaps it's due to the fact that I only spent a day reading, but at least I think we could improve on its readability. Regardless, I currently do not yet trust a couple of the proofs, and I believe the acceptance of this paper should be conditioned on confirming the correctness of these proofs.\\n\\n(1) Let's start with Lemma 10. In the middle equation block, we obtain a bound \\n \\\\| alpha^prime \\\\|_p^p <= beta^p ( 1 + D/K )\\nand the proof concludes alpha^prime is in Q. However this cannot be the case for all alpha^prime. \\n\\nConsider x=0 which is in S_{p, beta}^D, then we have alpha^prime = 0 as well. In the definition of Q, we require all the j's to sum up to K+D, which is not met here. \\n\\nAt the same time, the next claim \\n \\\\| alpha \\\\|_2 <= D^{1/2 - 1/p} \\\\| alpha^prime \\\\|_p\\ndoes not seem to follow from the above calculations. 
In particular, alpha^prime seems to be defined with respect to an x in S_{p, beta}, however in this case we did not specify such an x. Perhaps you meant that there exists such an alpha^prime?\\n\\n(2) In the proof of Theorem 3, there is an important inequality needed to complete the proof \\n max{ <s, f_i> , <s, -f_i> } >= 1/2 * ( <s, [f_i]_+> + <s, [-f_i]_+> )\\n\\nPerhaps I am missing something obvious, but I believe this inequality fails when we choose s as a constant vector, and f_i to have the same number of positive and negative signs (which is possible in a Hadamard matrix). In this case, the left hand side should be equal to zero, whereas the right hand side will be positive. \\n\\n###\\n\\nTo summarize, if these proofs can be confirmed, I believe this paper would have made a significant contribution to the problem of over-parametrization in deep learning, and of course should be accepted. \\n\\n###\\n\\nI corrected several typos and found minor issues as I read, perhaps this will be useful to improve readability as well.\\n\\nPage 13, proof of Lemma 8\\n - after the V_0 term is separated, there is a sup over \\\\|V_0\\\\|_F <= r in the expectation, which should be \\\\|V-V_0\\\\|_F <= r instead.\\n\\nPage 14, Lemma 9\\n - the lemma did not define rho_{ij} in the statement\\n\\nPage 15, proof of Lemma 9\\n - in equation (12), there is an x_y vector that should be x_t\\n\\nPage 15, proof of Theorem 1\\n - while I eventually figured it out, it's unclear how Lemma 8 is applied here. Perhaps one more step identifying the exact matrices in the statement of Lemma 8 will be helpful to future readers, and maybe explain where the sqrt(2) factor comes from as well. \\n\\nPage 16, proof of Lemma 10\\n - in the beginning of the proof, to stay consistent with the notation, we should replace S_{p, beta} with S_{p, beta}^D\\n - I believe the cardinality of Q should be (K + D - 1) choose (D - 1), as we need to choose positive j's to sum up to (K+D) in the definition of Q. 
This reduces down to the problem of choosing natural numbers j's summing K, which is (K+D-1) choose (D-1). Consider the stack exchange post here: https://math.stackexchange.com/questions/919676/the-number-of-integer-solutions-of-equations\\n\\nPage 16, proof and statement of Lemma 11\\n - I believe in the first term, the factor should be m instead of sqrt(m). I think the mistake happened when applying the union bound, as it should only affect the term containing delta\\n\\nPage 17, Lemma 12\\n - same as Lemma 11, we should have m instead of sqrt(m)\\n\\nPage 18, proof of Theorem 3\\n - at the bottom the statement \\\"F is orthogonal\\\" does not imply the norm is less than 1, but rather we should say \\\"F is orthonormal\\\"\\n\\nPage 19, proof of Theorem 3\\n - at the top, \\\"we will omit the index epsilon\\\" should be \\\"xi\\\" instead\\n - in the final equation block, we have the Rademacher complexity of F_{W_2}, instead it should be F_{W^prime}\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The authors present a novel bound for the generalization error of 1-layer neural networks with multiple outputs and ReLU activations.\", \"review\": \"It is shown empirically that common algorithms used in supervised learning (SGD) yield networks for which such upper bound decreases as the number of hidden units increases. This might explain why in some cases overparametrized models have better generalization properties.\\n\\nThis paper tackles the important question of why in the context of supervised learning, overparametrized neural networks in practice generalize better. First, the concepts of \\\\textit{capacity} and \\\\textit{impact} of a hidden unit are introduced. Then, {\\\\bf Theorem 1} provides an upper bound for the empirical Rademacher complexity of the class of 1-layer networks with hidden units of bounded \\\\textit{capacity} and \\\\textit{impact}. Next, {\\\\bf Theorem 2} which is the main result, presents a new upper bound for the generalization error of 1-layer networks. An empirical comparison with existing generalization bounds is made and the presented bound is the only one that in practice decreases when the number of hidden units grows. Finally {\\\\bf Theorem 3} is presented, which provides a lower bound for the Rademacher complexity of a class of neural networks, and such bound is compared with existing lower bounds.\\n\\n## Strengths\\n- The paper is theoretically sound, the statement of the theorems\\n are clear and the authors seem knowledgeable when bounding the\\n generalization error via Rademacher complexity estimation.\\n\\n- The paper is readable and the notation is consistent throughout.\\n\\n- The experimental section is well described, provides enough empirical\\n evidence for the claims made, and the plots are readable and well\\n presented, although they are best viewed on a screen.\\n\\n- The appendix provides proofs for the theoretical claims in the\\n paper. 
However, I cannot certify that they are correct.\\n\\n- The problem studied is not new, but to my knowledge the\\n presented bounds are novel and the concepts of capacity and\\n impact are new. Theorem 3 improves substantially over\\n previous results.\\n\\n- The ideas presented in the paper might be useful for other researchers\\n that could build upon them, and attempt to extend and generalize\\n the results to different network architectures.\\n\\n- The authors acknowledge that there might be other reasons\\n that could also explain the better generalization properties in the\\n over-parameterized regime, and tone down their claims accordingly.\\n\\n## Weaknesses\\n- The abstract reads \\\"Our capacity bound correlates with the behavior\\n of test error with increasing network sizes ...\\\", it should\\n be pointed out that the actual bound increases with increasing\\n network size (because of a sqrt(h/m) term), and that such a claim\\n holds only in practice.\\n\\n- In page 8 (discussion following Theorem 3) the claim\\n \\\"... all the previous capacity lower bounds for spectral\\n norm bounded classes of neural networks (...) correspond to\\n the Lipschitz constant of the network. Our lower bound strictly\\n improves over this ...\\\", is not clear. Perhaps a more concise\\n presentation of the argument is needed. In particular it is not clear\\n how a lower bound for the Rademacher complexity of F_W translates into a\\n lower bound for the Rademacher complexity of l_\\\\gamma F_W. This makes the claim of tightness of Theorem 1 not clear. Also this makes\\n the initial claim about the tightness of Theorem 2 not clear.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
r1l-e3Cqtm | Deep Probabilistic Video Compression | [
"Jun Han",
"Salvator Lombardo",
"Christopher Schroers",
"Stephan Mandt"
] | We propose a variational inference approach to deep probabilistic video compression. Our model uses advances in variational autoencoders (VAEs) for sequential data and combines it with recent work on neural image compression. The approach jointly learns to transform the original video into a lower-dimensional representation as well as to entropy code this representation according to a temporally-conditioned probabilistic model. We split the latent space into local (per frame) and global (per segment) variables, and show that training the VAE to utilize both representations leads to an improved rate-distortion performance. Evaluation on small videos from public data sets with varying complexity and diversity show that our model yields competitive results when trained on generic video content. Extreme compression performance is achieved for videos with specialized content if the model is trained on similar videos. | [
"variational inference",
"video compression",
"deep generative models"
] | https://openreview.net/pdf?id=r1l-e3Cqtm | https://openreview.net/forum?id=r1l-e3Cqtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJxx4F2ZgN",
"ryeVUXMi6X",
"BklJbXzsam",
"BkgUxbGipm",
"r1xDTlMiaQ",
"ByeXX3bo6m",
"S1lLcGX5hm",
"ByltbXgKnQ",
"BJxeCzadhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544829223972,
1542296396229,
1542296311415,
1542295790401,
1542295743214,
1542294555228,
1541186189850,
1541108481414,
1541096136315
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1051/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1051/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1051/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1051/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1051/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1051/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1051/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1051/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1051/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The proposed method compresses video sequences with an end-to-end approach, by extending a variational approach from images to videos. The problem setting is interesting and somewhat novel. The main limitation, as exposed by the reviewers, is that evaluation was done on very limited and small domains. It is not at all clear that this method scales well to non-toy domains or that it is possible in fact to get good results with an extension of this method beyond small-scale content. There were some concerns about unfair comparisons to classical codecs that were optimized for longer sequences (and I share those concerns, though they are somewhat alleviated in the rebuttal).\\n\\nWhile the paper presents an interesting line of work, the reviewers did present a number of issues that make it hard to recommend it for acceptance. However, as R1 points out, most of the problems are fixable and I would advise the authors to take the suggested improvements (especially anything related to modeling longer sequences) and once they are incorporated this will be a much stronger submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"Response to Reviewer 1 (2/2)\", \"comment\": \">4. It is not very clear how the global code is obtained. It is implied that all frames get processed in order to come up with f, but does this mean that they're processed via an LSTM model, or is there a single fully connected layer which takes as input all frames? In terms of modeling f, it sounds like the hyperprior model from Balle et al is employed, but again it's not clear to me how (is it modelling an entire video or a sequence?). I would really like to see a diagram for the network structure that computes f.\\n\\nThe architecture of the encoder is described in Appendix B. All frames in a segment are individually processed by a convolutional network and passed through an LSTM model which infers the global state f. A diagram of the network structure that computes f can be found in Appendix B of the revised version. \\n\\nWe use a generic factorized probability density model from Balle et al. 2018 in order to model f (which, in their work, is used to entropy code the hyperprior latent variables). We do not use the hyperprior model because in our approach, the f latent variables are not spatially correlated. (The latent f\\u2019s do not capture the spatial structure of the video frames because the LSTM decorrelates the convolutional features.)\"}",
"{\"title\": \"Response to Reviewer 1 (1/2)\", \"comment\": \"We would like to thank the reviewer for their time and feedback. Our response is detailed below.\\n\\n>1. The method has only been trained on very small videos due to the fact that fully connected layers are used. I don't really understand why was this necessary, and it's not explained in the paper at all. Just this fact makes it completely infeasible for any \\\"real\\\" application.\\n\\nWe have openly pointed out that scalability is an open problem which we think can be solved with more research. We still believe there are some niche applications for video compression of small-scale content, e.g. thumbnail videos for previews. \\n\\nThe core idea of our method is to probabilistically entropy code according to a deep generative model. The most successful generative models for videos in the current literature typically have fully connected components in the temporal prior (e.g. a fully connected recurrent network or lstm). Our proposed approach splits information into global and local latent states which allows for a very efficient compression in the small bitrate regime. By the usage of fully-connected networks, our method is able to capture non-local motion efficiently, which is hard to do solely with a local, convolutional model. \\n\\n>2. The evaluation was done on very limited domains. Of huge concern to me is the fact that very good results are presented on the sprites dataset. However, that dataset can be literally encoded by providing an index in a lookup table of sprites, so it's absolutely ludicrous to compare learned methods on that set to general video compression methods. The results look a lot less exciting when looking at the Kinetics 64x64 dataset. \\n\\nWe respectfully disagree, our method was evaluated on an unseen test set of Sprites videos, so the test videos could not be simply encoded as a lookup table. 
We agree that the Sprites data set is a toy dataset since it has a low-dimensional description compared to a typical real-world video. However, there may be applications for which the video to be compressed lives on a lower-dimensional manifold, e.g. teleconferencing or sports videos. For such an application, it is beneficial for the video codec to learn such a manifold in order to more efficiently compress the video (we showed this for the low-dimensional but real-world BAIR data set). Since existing codecs are hand designed they do not possess this ability.\\n\\n>3. The evaluation (again) is problematic because the results refer to PSNR. PSNR for video is a very overloaded term. In fact, just the way to compute PSNR is not very clear for video. Video compression papers in general compute it in one of two ways: take the mean squared error over all the pixels in the video, then compute PSNR; or compute per frame PSNR then average. Additionally, none of the papers in this domain use RGB, because the human visual system is much more sensitive to detail preservation (the Y/luminance channel) than they are to chroma (color) changes. When attempting to present results for video, I would recommend to use PSNR-Y (and explain which type it is!), while also mentioning which ITU recommendation is used for defining the Y channel (there are multiple recommendations). \\n\\nWe use the average per frame PSNR in RGB space in our work. Accordingly, our loss is phrased in RGB space and the video codecs are configured to operate in RGB mode (4:4:4 chroma sampling). We have clarified this point in the manuscript. While we acknowledge that it could be a good idea to use PSNR-Y as our performance metric, we have chosen to use PSNR-RGB to be consistent with the most closely-related papers from neural image compression (Balle et al. 2018, Minnen et al. 2018) which minimize RGB errors and report results in PSNR-RGB and neural video compression (Wu et al. 
2018) (We contacted them and found that they also used average frame PSNR-RGB).\"}",
"{\"title\": \"Response to Reviewer 2 (2/2)\", \"comment\": \">The authors observe that the Kalman prior performs worse than the LSTM prior. This may be due to limitations of the encoder, which processes images frame-by-frame, which makes it hard to decorrelate frames while preserving information. I am wondering why the frame encoder is not at least processing one neighboring frame. (Note: A sufficiently powerful encoder could represent information in a fully factorial way; e.g. Chen & Gopinath, 2001).\\n\\nThe full encoder, with both local and global state, is processing an entire segment of video, which includes neighboring frames. Moreover, it is not necessary for the per-frame latent variables to be completely decorrelated temporally because the temporal correlation is taken into account by the learned temporal prior (which conditions on the previous frame(s)). The temporal redundancy is removed from the bit stream by entropy coding the per-frame latents according to the learned temporal prior distribution.\", \"reference\": \"Chao-Yuan Wu, Nayan Singhal, and Philipp Krahenbuhl. Video compression through image interpolation. European Conference on Computer Vision, 2018.\"}",
"{\"title\": \"Response to Reviewer 2 (1/2)\", \"comment\": \"Thank you for your detailed response. We address each point individually below.\\n\\n>What's missing from the paper is a discussion of how the proposed model would be applied to model video sequences longer than a few frames. In particular, the global latent state will be less and less useful as videos get longer. Should the video be split into multiple sequences treated separately? If yes, how should they be split and what is the impact on performance?\\n\\nCorrect, the video could be divided up into segments, with global states specific to these segments. Especially for longer sequences, choosing optimal segments - i.e. pieces of the video that are well described by a single global state - is an important problem to solve for future work towards a practical deep learning based video codec. \\n\\n>Unfortunately, the experiments focus too much on trying to make the algorithm look good at the expense of being less informative and potentially misleading.\\n\\n>Existing video codecs such as H.265 and software like ffmpeg are optimized for longer, high-resolution videos, but even the most realistic dataset used here (Kinetics600) only contains short (10 frames) low-resolution videos. I suggest the authors at least add the performance of classical codecs evaluated on the entire video sequence to their plots. The current reported performance can be viewed as splitting the videos into chunks of 64x64x10, which makes sense for an autoencoder which has been trained to learn a global representation of short videos, but is clearly not necessary and detrimental to the performance of the classical codecs. I think adding these graphs would provide a more realistic view of the current state of video compression using deep neural nets.\\n\\nOur approach is based on preprocessing a longer video by dividing it into segments of length T. Every segment has a unique global state, and T local states. 
In our paper, we used T=10 frames per segment for computational convenience and because of memory limitations of our available hardware. This choice is comparable to the sequence length (GOP size) chosen in (Wu et al., 2018). \\n\\nThe optimal segment length depends on the input data. When comparing our approach with classical codecs, we used the same short video segments across all methods. While using 10 frames per segment may be short, we expect both classical codecs and neural network approaches to benefit in similar ways from longer segments. \\n\\nTo address the concerns about potentially misrepresenting codec performance, we have added the codec performance curves for different segment lengths to Fig. 8 in Appendix D. For typical footage, we found that the performance of classical codecs increases with longer sequences and saturates before 100 frames. We note that our method, trained and evaluated on T = 10 frames, remains comparable to H.264/H.265 tested on T = 100 frames even though T=100 segments typically contain less information per pixel than T=10 segments. VP9 outperforms our method when tested on T=100 frames.\\n\\n>For the classical codecs, were the binary files stripped of any file format container and headers before counting bits? This would be crucial for a fair comparison, especially for small videos where the overhead might be significant.\\n\\nWe were unable to find a way to easily determine the size of the header information. If the reviewer could share the command to find out such information with us, we would be very glad to analyze this aspect. To address the concerns about header size, we have added codec performance curves for longer sequences (T=100 frames) to Fig. 8 in Appendix D, where header information is expected to be a much smaller fraction of the file size. 
\\n\\n>More work could be done to ensure the reader that the hyperparameters of the classical codecs such as GOP or block size have been sufficiently tuned.\\n\\nWe used the video codec in the standard way, without excessive parameter tuning. Our main objective was to show that deep learning architectures, designed from scratch, can be comparable to standard codecs in certain regimes.\\n\\n>What is the frame rate of the videos used? I.e., how much time do 10 frames correspond to?\\n\\nThe Sprites dataset does not have a physical time since it is generated video. The Kinetics dataset contains YouTube videos with variable frame rates, ranging between 24-60 fps. \\n\\n>The videos were downsampled before cropping them to 64x64 pixels. What was the resolution before cropping?\\n\\nThe videos were cropped to a 1:1 aspect ratio and then downsampled to 64x64. The original videos were various resolutions ranging from 426 x 240 to 1920 x 1080.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for their time and feedback. Below is our response to each point made by the review.\\n\\n>In Figure 1, it is not clear at all how the bitstream is formed; frames 1 to T are compressed jointly with frame t; but frame t is part of the set of frames from 1 to T. How the global state updated when compressing frame t+1? Using frames 2 to T+1?\\n\\nWe assume that the video gets divided up into segments of T frames. Every segment has exactly one global state - which is jointly computed and not updated incrementally - and T local states. When forming the bitstream, the global state is formed by running a bi-directional LSTM over all T frames while the local states are formed on a per frame basis. When decoding, each frame can be reconstructed using the global state and its specific local state. We will add a corresponding explanation to the revised version.\\n\\n>Writing that you use a Laplacian distribution because l1 regularized loss typically outperforms the l2 loss for autoencoding images is clearly an insufficient justification, if not backed by experiments or references. Moreover, the authors seem to confuse regularization with loss; by using a Laplace density for the generative model, they are using a l1 loss, not an l1 regularizer. \\n\\nIn our experiments, we found that an L1 loss tended to produce sharper image reconstructions than an L2 loss. We refer to references (Isola et al., 2016; Zhao et al., 2015) which suggest the benefits of using the L1 loss for image reconstruction. We acknowledge that the Laplacian corresponds to using an L1 loss, not an L1 regularizer, and this statement was a typo in the manuscript which will be corrected. 
We have also added these references to the manuscript in order to justify the use of the L1 loss.\\n\\n>There is absolutely no information about implementation details.\\n\\nWe have added information about implementation details in Appendix B of the revised version.\\n\\n>The video sequences used in the experiments are extremely small, both in spatial and temporal terms. A collection of 10 64*64 frames has fewer pixels than even a moderately sized still image. As the authors acknowledge, standard video codecs are far from optimized for video sequences of this size, making the comparisons unfair. The extreme compression results on the sprites and BAIR datasets may be quite misleading, since the data lives in a very low dimensional manifold, due to the simplicity of the scenes. For the more realistic Kinetics dataset, the proposed method is competitive with H264 and H265, but only in a very limited range of bit rates. In fact, the authors do not explain why they have not shown results for wider ranges of bitrates.\\n\\nWe acknowledge the fact that practical video compression for high-resolution content is an extremely complex problem that requires solving many subproblems. In this work, which arguably is the first one to use deep probabilistic modeling for end-to-end video compression, we have used small video data in order to focus on exploring new ways to model temporal redundancy. \\n\\nStandard codecs are indeed not optimized for such small videos. However, standard video codecs drastically outperform other baselines, such as neural image compression or JPEG compression per frame, on such data. In the absence of other baselines on this type of data, we provide standard codec results in order to demonstrate that our method is efficiently capturing temporal correlations. 
In our conclusions, we already acknowledged the fact (but will highlight this even more) that more work needs to be done in order to outperform standard codecs on all resolutions.\\n\\nThe reviewer correctly points out that certain data (such as Sprites and BAIR) may live on a lower-dimensional manifold as compared to general-content video. We emphasize that this may also be true for other types of data such as video conferencing or sports broadcasting, so it may be beneficial for a specialized-content codec to learn such a lower-dimensional manifold in a data-driven approach. \\n\\nRegarding the range of bitrates, the limited range of bitrates on the Kinetics dataset is due to limited GPU memory. We have made this point clearer in the limitations section (before the conclusions) of the revised manuscript. The highest quality setting is limited by the size of the latent space. General-content video requires a larger latent dimension than specialized video, and the latent space dimension could not be increased any further due to such hardware limitations. Resolving this problem is an interesting and important avenue of further research.\\n\\nIsola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2016). Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004.\\n\\nZhao, Hang, et al. \\\"Loss functions for image restoration with neural networks.\\\" IEEE Transactions on Computational Imaging 3.1 (2017): 47-57.\"}",
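The point conceded in this exchange — that a Laplace observation model corresponds to an L1 loss, not an L1 regularizer — follows directly from the Laplace density; a minimal sketch (the scale parameter b is illustrative):

```python
import numpy as np

def laplace_nll(x, mu, b=1.0):
    """Negative log-likelihood of Laplace(mu, b): log(2b) + |x - mu| / b.
    Up to the constant log(2b), this is exactly an L1 loss on the residual."""
    return np.log(2.0 * b) + np.abs(x - mu) / b
```

Minimizing this NLL over mu is therefore equivalent to minimizing the mean absolute error, and a Gaussian likelihood yields the L2 loss in the same way.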
"{\"title\": \"An interesting proposal for deep-learning-based video compression, but somewhat limited experimental results and unclear applicability.\", \"review\": \"The paper is well written and the basic ideas are reasonably well explained and supported. However, several aspects are insufficiently explained. Several examples follow.\\n\\nIn Figure 1, it is not clear at all how the bitstream is formed; frames 1 to T are compressed jointly with frame t; but frame t is part of the set of frames from 1 to T. How the global state updated when compressing frame t+1? Using frames 2 to T+1?\\n\\nWriting that you use a Laplacian distribution because l1 regularized loss typically outperforms the l`2 loss for autoencoding images is clearly an insufficient justification, if not backed by experiments or references. Moreover, the authors seem to confuse regularization with loss; by using a Laplace density for the generative model, they are using a l1 loss, not an l1 regularizer. \\n\\nThere is absolutely no information about implementation details.\\n\\nThe video sequences used in the experiments are extremely small, both in spatial and temporal terms. A collection of 10 64*64 frames has fewer pixels than even a moderately sized still image. As the authors acknowledge, standard video codecs are far from optimized for video sequences of this size, making the comparisons unfair. The extreme compression results on the sprites and BAIR datasets may be quite misleading, since the data lives in a very low dimensional manifold, due to the simplicity of the scenes. For the more realistic Kinetics dataset, the proposed method is competitive with H264 and H265, but only in a very limited range of bit rates. 
In fact, the authors do not explain why they have not shown results for wider ranges of bitrates.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good approach to deep learning based video compression, but empirical section needs work\", \"review\": \"Summary\\n=======\\nThis work on video compression extends the variational autoencoder of Balle et al. (2016; 2018) from images to videos. The latent space consists of a global part encoding information about the entire video, and a local part encoding information about each frame. Correspondingly, the encoder consists of two networks, one processing the entire video and one processing the video on a frame-by-frame basis. The prior over latents factorizes over these two parts, and an LSTM is used to model the coefficients of a sequence of frames. The compression performance of the model is evaluated on three datasets of 64x64 resolution: sprites, BAIR, and Kinetics600. The performance is compared to H.264, H.265, and VP9.\\n\\nReview\\n======\\nRelevance (9/10):\\n-----------------\\nCompression using neural networks is an unsolved problem with potential for huge practical impact. While there has been a lot of research on deep image compression recently, video compression has not yet received much attention.\\n\\nNovelty (6/10):\\n---------------\\nThis approach is a straightforward extension of existing image compression techniques, but it is a reasonable step towards deep video compression. \\n\\nWhat's missing from the paper is a discussion of how the proposed model would be applied to model video sequences longer than a few frames. In particular, the global latent state will be less and less useful as videos get longer. Should the video be split into multiple sequences treated separately? 
If yes, how should they be split and what is the impact on performance?\\n\\nEmpirical work (2/10):\\n----------------------\\nUnfortunately, the experiments focus too much on trying to make the algorithm look good at the expense of being less informative and potentially misleading.\\n\\nExisting video codecs such as H.265 and software like ffmpeg are optimized for longer, high-resolution videos, but even the most realistic dataset used here (Kinetics600) only contains short (10 frames) low-resolution videos. I suggest the authors at least add the performance of classical codecs evaluated on the entire video sequence to their plots. The current reported performance can be viewed as splitting the videos into chunks of 64x64x10, which makes sense for an autoencoder which has been trained to learn a global representation of short videos, but is clearly not necessary and detrimental to the performance of the classical codecs. I think adding these graphs would provide a more realistic view of the current state of video compression using deep neural nets.\\n\\nFor the classical codecs, were the binary files stripped of any file format container and headers before counting bits? This would be crucial for a fair comparison, especially for small videos where the overhead might be significant.\\n\\nMore work could be done to ensure the reader that the hyperparameters of the classical codecs such as GOP or block size have been sufficiently tuned.\\n\\nWhat is the frame rate of the videos used? I.e., how much time do 10 frames correspond to?\\n\\nThe videos were downsampled before cropping them to 64x64 pixels. What was the resolution before cropping?\\n\\nThe authors observe that the Kalman prior performs worse than the LSTM prior. This may be due to limitations of the encoder, which processes images frame-by-frame, which makes it hard to decorrelate frames while preserving information. I am wondering why the frame encoder is not at least processing one neighboring frame. 
(Note: A sufficiently powerful encoder could represent information in a fully factorial way; e.g. Chen & Gopinath, 2001).\", \"clarity\": \"The paper is well written and clear.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting, but very limited idea\", \"review\": \"This method deals with compressing tiny videos using an end-to-end learned approach. However, the paper has a significant number of limitations, which I will discuss below.\\n\\n1. The method has only been trained on very small videos due to the fact that fully connected layers are used. I don't really understand why was this necessary, and it's not explained in the paper at all. Just this fact makes it completely infeasible for any \\\"real\\\" application.\\n2. The evaluation was done on very limited domains. Of huge concern to me is the fact that very good results are presented on the sprites dataset. However, that dataset can be literally encoded by providing an index in a lookup table of sprites, so it's absolutely ludicrous to compare learned methods on that set to general video compression methods. The results look a lot less exciting when looking at the Kinetics 64x64 dataset. \\n3. The evaluation (again) is problematic because the results refer to PSNR. PSNR for video is a very overloaded term. In fact, just the way to compute PSNR is not very clear for video. Video compression papers in general compute it in one of two ways: take the mean squared error over all the pixels in the video, then compute PSNR; or compute per frame PSNR then average. Additionally, none of the papers in this domain use RGB, because the human visual system is much more sensitive to detail preservation (the Y/luminance channel) than they are to chroma (color) changes. When attempting to present results for video, I would recommend to use PSNR-Y (and explain which type it is!), while also mentioning which ITU recommendation is used for defining the Y channel (there are multiple recommendations). \\n4. It is not very clear how the global code is obtained. 
It is implied that all frames get processed in order to come up with f, but does this mean that they're processed via an LSTM model, or is there a single fully connected layer which takes as input all frames? In terms of modeling f, it sounds like the hyperprior model from Balle et al is employed, but again it's not clear to me how (is it modelling an entire video or a sequence?). I would really like to see a diagram for the network structure that computes f.\\n\\nOn the positives of the paper: I applaud the authors with respect to the fact that they made an effort to explain how the classical codecs were configured and being explicit about the chroma sampling that's employed.\\n\\nI think all the problems I mentioned above can be fixed, so I don't want to reject the paper per se. If possible, should the authors address my concerns (i.e., add more details), I think this could be an interesting \\\"toy\\\" method.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BkMWx309FX | Reinforcement Learning with Perturbed Rewards | [
"Jingkang Wang",
"Yang Liu",
"Bo Li"
] | Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noises differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may not be credible as a result. Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors. In this paper, we consider noisy RL problems where observed rewards by RL agents are generated with a reward confusion matrix. We call such observed rewards as perturbed rewards. We develop an unbiased reward estimator aided robust RL framework that enables RL agents to learn in noisy environments while observing only perturbed rewards. Our framework draws upon approaches for supervised learning with noisy data. The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm is able to obtain 67.5% and 46.7% improvements in average on five Atari games, when the error rates are 10% and 30% respectively. | [
"robust reinforcement learning",
"noisy reward",
"sample complexity"
] | https://openreview.net/pdf?id=BkMWx309FX | https://openreview.net/forum?id=BkMWx309FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJgAktYZeV",
"r1lp2VU_CX",
"r1eshzTBCm",
"H1gAUgp-AQ",
"rJeoiP3-A7",
"HylRDDhbAX",
"HklMEvnbRm",
"HkewCIhb0X",
"B1l1hShZRm",
"HJxmW5NZ67",
"H1xorCQWp7",
"BJliWdLsh7",
"Bkl75YQ92Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544816869896,
1543165108817,
1542996659508,
1542733910403,
1542731683406,
1542731621803,
1542731562170,
1542731471105,
1542731175327,
1541650939306,
1541647938812,
1541265410947,
1541187978948
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1050/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1050/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1050/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1050/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1050/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1050/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1050/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1050/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1050/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1050/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1050/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1050/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper studies RL with perturbed rewards, where a technical challenge is to revert the perturbation process so that the right policy is learned. Some experiments are used to support the algorithm, which involves learning the reward perturbation process (the confusion matrix) using existing techniques from the supervised learning (and crowdsourcing) literature.\\n\\nReviewers found the problem setting new and worth investigating, but had concerns over the scope/significance of this work, mostly about how the confusion matrix is learned. If this matrix is known, correcting reward perturbation is easy, and standard RL can be applied to the corrected rewards. Specifically, the work seems to be limited in two substantial ways, both related to how the confusion matrix is learned.\\n * The reward function needs to be deterministic.\\n * Majority voting requires the number of states to be finite.\\nThe significance of this work is therefore mostly limited to finite-state problems with deterministic reward, which is quite restricted.\\n\\nAs the authors pointed out, the paper uses discretization to turn a continuous state space into a finite one, which is how the experiment was done. But discretization is likely not robust or efficient in many high-dimensional problems.\\n\\nIt should be noted that the setting studied here, together with a thorough treatment of an (even restricted) case, could make an interesting paper that inspires future work. However, the exact problem setting is not completely clear in the paper, and the limitations of the technical contributions is also somewhat unclear. 
The authors are strongly advised to revise the paper accordingly to make their contributions clearer.\", \"minor_questions\": [\"In Lemma 2, what if C is not invertible?\", \"The sampling oracle assumed in Def. 1 is not very practical, as opposed to what the paper claims.\", \"There is more recent work at NIPS and STOC on attacking RL (including bandit) algorithms by manipulating the reward signals. The authors may want to cite and discuss it.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting direction but contributions and significance somewhat limited\"}",
"{\"title\": \"Response to Detailed Response & Revision\", \"comment\": \"Thank your for your response and the revisions. The revised version of the paper includes many improvements. The results presented in Figures 1, 2, & 3 are much more compelling with the additional trials. I also appreciate the inclusion of the simple MDP example in the appendix, and the editorial changes made; the paper reads much more smoothly now.\\n\\nIn light of the updates, I have changed my score from a 6 to a 7.\"}",
"{\"title\": \"Thanks for the detailed answer\", \"comment\": \"Thank you for your detailed response and the update of your paper.\\n\\nSince you have added a discussion on a state dependent confusion matrix as well as a discussion and experiments for the related work, I can rise my score to 6 from 5. I believe the way the confusion matrix is estimated from data is still among the relatively weak points to really have a widespread use but this paper has nonetheless its merits.\"}",
"{\"title\": \"Revised manuscript has been uploaded\", \"comment\": \"We would like to thank the reviewers and the other anonymous comment again for their thoughtful reviews and valuable comments. We have made our best efforts to improve the paper according to the comments. The key changes include adding experiments on simpler domains (MDP example) and time-variant cases (Appendix B.3), comparing with another baseline (Romoff et al., 2018), utilizing variance reduction technique to further improve our performance (Appendix C.4), adapting EM idea into our estimation algorithm (Appendix C.2), clarifying the proofs, fixing the typos and providing more analysis of \\\"perturbed reward\\\" setting (state-dependent case, Appendix C.3).\\n\\n[1] J. Romoff, A. Pich\\u00e9, P. Henderson, V. Francois-Lavet, and J. Pineau. Reward estimation for variance reduction in deep reinforcement learning. ICLR Workshop, 2018.\"}",
"{\"title\": \"Response to \\\"Related work on robust RL with perturbed state transitions\\\"\", \"comment\": \"Thanks for your interest in our work! It is indeed beneficial for the community to have a more comprehensive view of robust RL results. We now have mentioned the relevant papers on robust RL with model uncertainty (on state transitions) in the related work. Thanks for pointing this out.\"}",
"{\"title\": \"Response to AnonReviewer3 -- Part2\", \"comment\": \"Response to additional comments:\\n\\nThanks for your suggestion. We added the comparisons with previous work (Romoff et al., 2018) and analysis (Appendix C.4). Briefly, Romoff et al. focused more the variance reduction issue, which theoretically doesn\\u2019t resolve the challenge when bias presents in the observed rewards. While our study sets out to deal with a reward model with bias. Indeed the idea from (Romoff et al., 2018) can be used as a second variance reduction step following our surrogate reward operation (which unavoidably introduced higher variance due to the bias removal step). We have conducted experiments to show its further benefits. \\n\\nFor the discount factor gamma, we considered both the discounted ($0 \\\\leq \\\\gamma < 1$) and undiscounted MDP ($\\\\gamma = 1$) setting (Schwartz et al., 1993; Sobel et al., 1994; Kakade, 2003). So we took your and Reviewer 4's suggestion, adjusting the range to be [0, 1].\\n\\n[1] A. Roy, H. Xu, and S. Pokutta. Reinforcement learning under Model Mismatch. 31st Conference on Neural Information Processing Systems, 2017.\\n[2] J. Romoff, A. Pich\\u00e9, P. Henderson, V. Francois-Lavet, and J. Pineau. Reward estimation for variance reduction in deep reinforcement learning. ICLR Workshop, 2018.\\n[3] A. Schwartz. A reinforcement learning method for maximizing undiscounted rewards. In ICML, pp. 298\\u2013305. Morgan Kaufmann, 1993.\\n[4] M. J. Sobel. Mean-variance tradeoffs in an undiscounted MDP. Operations Research, 42(1):175\\u2013183, 1994.\\n[5] S. M. Kakade.On the Sample Complexity of Reinforcement Learning. PhD thesis, University of London, 2003.\"}",
"{\"title\": \"Response to AnonReviewer3 -- Part1\", \"comment\": \"Thanks for your valuable suggestions and comments.\", \"q1\": \"confusion matrix does not take into account the state\", \"a1\": \"For clarity, our updated draft will stay focusing on the state-independent model, but we have added discussions on the more general cases for the state-dependent noises as suggested (Appendix C.3). When the flipping error rates are different across different states, we will maintain different confusion matrices for each state. More specifically, when a noisy copy of rewards at state $s$ is observed, we look up the corresponding confusion matrix at this state and apply surrogate rewards based on it. In this case, Theorems 1, 2, 3 still hold, with replacing $|\\\\mathbf{C}|$ by $min_s|\\\\mathbf{C}|$. This is because keeping separate confusion matrices for each state could keep the unbiasedness of aggregated rewards (converge to true reward expectation) for each state.\", \"q2\": \"tackle a narrow problem (the model is simple)\", \"a2\": \"We kindly remind that our model and algorithms deal with a wide range of problems in RL compared to previous works which lie on prior knowledge (Roy et al., 2017) or constraints on noisy distribution such as Gaussian distribution (Romoff et al., 2018). Besides, it is also the first method to estimate confusion matrices in RL settings. The theorems guarantee the convergence and sample efficiency of the proposed method.\\n\\nWe also want to emphasize that our method could be directly applied to various DRL algorithms/environments as our experimental results suggest. In some cases, the surrogate rewards even lead to faster convergence or better scores than the ideal noise-free settings - we conjecture this is because our surrogate reward introduces implicit exploration via noise (followed by the debiasing ste). 
We believe that our \\u201cperturbed reward\\u201d setting and the algorithms are of both theoretical and practical value to the RL community.\", \"q3\": \"theorem seems to be variations of existing results; difficult to understand what is the message behind the theorems.\", \"a3\": \"Indeed, building on existing results, our theorems are not surprising. We also mention this in the paper. The theorems in the paper are established in sequence to provide theoretical guarantees (convergence, sample complexity, and variance) for the proposed unbiased estimator.\\nMore specifically, Theorem 1 states that our surrogate rewards retain the convergence property of Q-Learning; Theorem 2 shows the sample efficiency of Phased Q-Learning under surrogate rewards \\u2013 the bound incurs an extra constant factor det|\\\\mathbf{C}| compared to true rewards; Theorem 3 discusses the limitation that surrogate rewards introduce larger variance compared to the case of observing true rewards.\", \"q4\": \"Not clear how confusion matrix can be estimated in practice; how to access predicted true rewards\", \"a4\": \"The procedure of estimating the confusion matrix C is given in Algorithm 1 and Equation 4. Briefly speaking, at each time step, we first obtain the predicted rewards (using majority voting, but we have added more discussions on using other inference methods, e.g. EM (Appendix C.2), in our updated version) for each state-action pair. In practice, we would discretize the state and action if they are continuous (e.g., Pendulum). Then we estimate the confusion matrix C using Equation 4 and use surrogate rewards for training. Extensive experiments conducted on various algorithms and environments show that the proposed estimation algorithm can obtain a significant improvement in overall scores (\\\\dot{r} in Table 1 and Table 2); in particular, this helps rescue agents from misleading rewards when the noise is high. 
It could also be observed from Figure 5 that the estimates of the confusion matrices converge to the true ones reasonably fast. Note that our algorithm is generic and efficient in practice, and can be flexibly plugged into any existing RL algorithm.\"}",
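The two-step procedure described in this answer — majority-vote the observed rewards collected for each (discretized) state-action pair to predict the true reward, tally predicted-vs-observed pairs into a row-normalized confusion matrix, then invert it to obtain unbiased surrogate rewards — can be sketched as follows (a simplified illustration of the idea, not the authors' Algorithm 1; the names and data layout are ours):

```python
import numpy as np
from collections import Counter

def estimate_confusion(observations, reward_values):
    """observations: dict mapping (state, action) -> list of observed reward levels.
    Returns C_hat with C_hat[i, j] ~ P(observe reward_values[j] | true reward_values[i])."""
    idx = {r: i for i, r in enumerate(reward_values)}
    counts = np.zeros((len(reward_values), len(reward_values)))
    for obs in observations.values():
        r_pred = Counter(obs).most_common(1)[0][0]  # majority vote = predicted true reward
        for r_obs in obs:
            counts[idx[r_pred], idx[r_obs]] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # row-normalize (rows assumed nonempty)

def surrogate_rewards(C, reward_values):
    """Unbiased surrogate rewards r_dot solving C r_dot = r, so that
    E[r_dot(observed) | true reward r_i] = r_i (C assumed invertible)."""
    return np.linalg.solve(C, np.asarray(reward_values, dtype=float))
```

Training then proceeds with the surrogate value looked up from the observed reward level; by construction, its expectation under the noise model recovers the true reward.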
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thanks for your valuable suggestions and positive comments.\n\nResponse to high level questions/comments:\", \"q1\": \"Typos and bits of math could be clearer.\", \"a1\": \"Thank you for the meticulous proofreading and valuable comments. We have addressed them carefully in the updated paper.\", \"q2\": \"Some plots are jagged -- more trials seem to be required.\", \"a2\": \"Thanks for the suggestion. It was a problem of visualization -- we plotted all the curves for all repeated experiments, so it looks jagged. We changed the plotting style in the revised paper. Besides, the number of trials also matters, as you suggest. In our previous sets of experiments, we repeated each experiment three times under different random seeds. Due to the fact that some algorithms (e.g., Q-Learning, CEM, SARSA) exhibit high variance while playing OpenAI Gym games, the plots sometimes look jagged. To reach higher statistical significance, we now experimented with more trials (10 times for Cartpole and 6 times for Pendulum) and updated the figures (Figure 1, 2) in the paper.\", \"q3\": \"Why use majority voting as the rule? Have you tried others?\", \"a3\": \"Majority voting is only one of the simple but effective methods for inferring the ground truth. Because we are in a sequential setting and agents can only observe one copy of noisy rewards based on their own explorations, other more sophisticated inference algorithms that were proposed in crowdsourcing cannot be directly applied. This is a very interesting topic that merits more rigorous future exploration. Nonetheless, we can adapt the standard Expectation-Maximization (EM) idea into our estimation algorithm. We provide the derivation (Appendix C.2) in our updated version. 
However, it is worth noting that the inference probabilities need to be computed in every iteration, which introduces larger computation costs - this points out an interesting direction to check online EM algorithms for our RL problem.\", \"q4\": \"reward perturbed setting is relatively simple & How might the proposed algorithm(s) respond to this slightly more complex model?\", \"a4\": \"Although the \\u201cperturbed reward\\u201d setting seems relatively simple, it deals with a wider problem in RL compared to previous works which depend on prior knowledge of the RL environment (Roy et al., 2017) or constraints on noisy distribution such as Gaussian distribution (Romoff et al., 2018). Besides, we provide solutions for continuous noises and corresponding estimation algorithm when the confusion matrices are unknown to the agents. We believe that the more complex cases would be state-dependent noise (different confusion matrices for each state), time-variant noise (confusion matrices are time-variant) and adversarial noise. Our algorithm could handle time-variant noise (shown in Appendix B.3) because the estimated confusion matrices are dynamically updated based on the temporal noisy reward sequence.\", \"q5\": \"citation needs fixing & \\u201cPhased Q-Learning\\u201d?\", \"a5\": \"Thanks for pointing it out. There are some typos in the citation and the algorithm\\u2019s name. In the updated paper, we fixed the citation and the algorithm name (as \\u201cPhased Q-Learning\\u201d not \\u201cPhrased Q-Learning\\u201d).\", \"response_to_cons\": \"\", \"reference\": \"[1] A. Roy, H. Xu, and S. Pokutta. Reinforcement learning under Model Mismatch. 31st Conference on Neural Information Processing Systems, 2017.\\n[2] J. Romoff, A. Pich\\u00e9, P. Henderson, V. Francois-Lavet, and J. Pineau. Reward estimation for variance reduction in deep reinforcement learning. ICLR Workshop, 2018.\\n[3] M. J. Kearns, Y. Mansour, and A. Y. Ng. 
A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In IJCAI, pp. 1324\u20131331. Morgan Kaufmann, 1999.\n[4] S. M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University of London, 2003.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks for your valuable suggestions and comments.\", \"q1\": \"Compare existing approaches to the same problem; plug it into other algorithms for enhancement.\", \"a1\": \"As also mentioned by the reviewer, our surrogate reward method is the first one that estimates the confusion matrix rather than assuming it is a known prior. Meanwhile, the \u201cperturbed reward\u201d setting, where the noise is generated by confusion matrices, is very different from previous settings for RL with noisy environments (e.g., prior of transition matrices (Roy et al., 2017); Gaussian noise (Romoff et al., 2018)); therefore, it is difficult to compare with the above approaches directly (in some sense these methods target different sets of questions, and are rather incomparable). However, we found that it is feasible to compare with Romoff et al.\u2019s work and test their method under noise generated according to a confusion matrix. We did experiments on Cartpole using various RL algorithms and showed that our surrogate-estimator-based RL algorithms consistently outperform their method (in Appendix C.4).\n \nFrom Theorem 3, we know that surrogate rewards, though they correct bias, suffer a high variance cost - this newly introduced variance can be reduced using the idea from Romoff et al.\u2019s work. We added relevant discussions and new experimental results in Appendix C.4. We\u2019d like to thank the reviewer for your suggestion!\", \"q2\": \"\\\"weakest part -- majority voting\\\"\", \"a2\": \"Majority voting is only one of the simple but effective methods for inferring the ground truth, and, further, the error rates. 
Because we are in a sequential setting and agents can only observe one copy of the noisy rewards based on their own explorations, other more sophisticated inference algorithms that were proposed in the crowdsourcing literature (which often require the availability of multiple redundant copies simultaneously) cannot be directly applied. This is a very interesting direction that merits more rigorous future exploration.\n\nNonetheless, we can adapt the standard Expectation-Maximization (EM) idea into our estimation algorithm. We provide the derivation (Appendix C.2) in our updated version. However, it is worth noting that the inference probabilities need to be computed at every iteration, which introduces a larger computational cost - this points to the interesting direction of exploring online EM algorithms for our RL problem.\", \"q3\": \"adversarial noise -- no evaluations in the paper\", \"a3\": \"In this paper, we did not address more general cases with arbitrary adversarial noise; we have clarified this in the updated version. But we\u2019d like to emphasize that the proposed estimator deals with a generic setting without any assumptions on the true reward distribution (which can be fully characterized via the noisy observations and the estimated confusion matrices). Our solution is also robust to time-varying noise. We have added results on this (Figure 4(b) in Appendix B.3). We hope our solution provides a practical baseline for defending against adversarial noise in future work.\", \"q4\": \"originality of confusion matrix estimation in RL; more tractable approach for continuous noise in the future work\", \"a4\": \"Thanks for sharing your ideas. Our work demonstrates a happy marriage between crowdsourcing and RL. For the case with continuous noise, if not utilizing the discretization method, we agree it is probably necessary to involve assumptions about the distributions to extend to this continuous scenario. 
It is an interesting and a very practical question to explore in the future, but right now we don\\u2019t have more concrete ideas other than using fixed discretization.\", \"response_to_detailed_suggestions\": \"Thanks for your valuable suggestions. We have revised the proofs and explanations (Lemma 1, 2 and Theorem 1) in the updated paper according to your suggestions.\", \"reference\": \"[1] A. Roy, H. Xu, and S. Pokutta. Reinforcement learning under Model Mismatch. 31st Conference on Neural Information Processing Systems, 2017.\\n[2] J. Romoff, A. Pich\\u00e9, P. Henderson, V. Francois-Lavet, and J. Pineau. Reward estimation for variance reduction in deep reinforcement learning. ICLR Workshop, 2018.\"}",
"{\"comment\": \"Thank you for your work! A related work on robust RL with the perturbations on the state transitions (rather than the rewards as in your setting) is [1].\\n\\n[1] https://papers.nips.cc/paper/6897-reinforcement-learning-under-model-mismatch.\", \"title\": \"Related work on robust RL with perturbed state transitions\"}",
"{\"title\": \"An interesting and relatively unexplored variant of RL.\", \"review\": \"This paper investigates reinforcement learning with a perturbed reward signal. In particular, the paper proposes a particular model for adding noise to the reward function via a confusion matrix, which offers a nuanced notion of reward-noise that is not so complicated as to make learning impossible. I take this learning setting to be both novel and interesting for opening up areas for future work. The central contributions of the work are to 1) leverage a simple estimator to prove the convergence of Q-Learning under the reward-perturbed setting along with the sample-complexity of a variant of (Phased) Q-Learning which they call \\\"Phrased\\\" Q-Learning, and 2) An algorithmic scheme for learning in the reward-perturbed setting (Algorithm 1), and 3) An expansive set of experiments that explore the impact of various reward models on learning across different environment-algorithm combinations. The sample complexity term extends Phased Q-Learning to incorporate aspects of the reward confusion matrix, and to my knowledge is novel. Further, even though Theorem 1 is unsurprising (as the paper suggests), I take the collection of Theorem 1, 2, and 3 to be collectively novel.\n\nIndeed, the paper focuses on an interesting and relatively unexplored direction for RL. Apart from the work cited by the paper (and perhaps work like Krueger et al. (2016), in which agents must pay some cost to observe true rewards), there is little work on learning settings of this kind. This paper represents a first step in gaining clarity on how to formalize and study this problem. I did, however, find the analysis and the experiments to be relatively disjointed -- the main sample complexity result presented by the paper (Theorem 2) was given for Phased Q-Learning, yet no experiments actually evaluate the performance of Phased Q-Learning. 
I think the paper could benefit from experiments focused on simple domains that showcase how traditional algorithms do in cases where it is easier to understand (and visualize) the impact of the reward perturbations (simple chain MDPs, grid worlds, etc.) -- and specifically experiments including Phased Q-Learning.\", \"pros\": [\"General, interesting new learning setting to study.\", \"Initial convergence and sample complexity results for this new setting.\", \"Depth and breadth of experimentation (in terms of diversity of algorithms and environments), includes lots of detail about the experimental setup.\"], \"cons\": [\"Clarity of writing: lots of typos and bits of math that could be more clear (see detailed comments below) [Fixed]\", \"The plots in Section 4 are all extremely jagged. More trials seem to be required. Moreover, I do think simpler domains might help offer insights into the reward perturbed setting. [Fixed]\", \"The reward perturbation model is relatively simple.\", \"Some high level questions/comments:\", \"Why was Phrased Q-Learning not experimented with?\", \"Why use majority voting as the rule? When this was introduced it sounded like any rule might be used. Have you tried/thought about others?\", \"Your citation to Kakade's thesis needs fixing; it should read:\", \"\\\"Kakade, Sham Machandranath. On the sample complexity of reinforcement learning. Ph.D Thesis. University of London, 2003.\\\"\", \"(right now it is cited as \\\"(Gatsby 2003)\\\" throughout the paper)\", \"You might consider picking a new name for Phrased Q-Learning -- right now the name is too similar to Phased Q-Learning from [Kearns and Singh NIPS 1999].\", \"As mentioned in the \\\"cons\\\" section, the confusion matrix is still a somewhat simple model of reward noise. I was left wondering: what might be the next most complicated form of adding reward noise? How might the proposed algorithm(s) respond to this slightly more complex model? 
That is, it's unclear how general the results are, or if they are honed too tightly to the specific proposed reward noise model. I was hoping the authors could respond to this point.\", \"Section 0) Abstract:\", \"Not immediately clear what is meant by \\\"vulnerability\\\" or \\\"noisy settings\\\". Might be better to pick a more clear initial sentence (same can be said of the \\\"sources of noise...\\\"\\\")\", \"Section 1) Introduction:\", \"\\\"adversaries in real-world\\\" --> \\\"adversaries in the real-world\\\"\", \"You might consider citing Loftin et al. (2014) regarding the bulleted point about \\\"Application-Specific Noise\\\".\", \"\\\"unbiased reward estimator aided reward robust reinforcement learning framework\\\" --> this was a bit hard to parse. Consider making more concise, like: \\\"unbiased reward estimator for use in reinforcement learning with perturbed rewards\\\".\", \"\\\"Our solution framework builds on existing reinforcement learning algorithms, including the recently developed DRL ones\\\" --> cite these up front So, cite: Q-Learning, CEM, SARSA, DQN, Dueling DQN, DDPG, NAF, and PPO, and spell out the acronym for each the first time you introduce them.\", \"\\\"layer of explorations\\\" --> \\\"layer of exploration\\\"\", \"Section 2) Problem Formulation\", \"\\\"as each shot of our\\\" --> what is 'shot' in this context?\", \"\\\"In what follow,\\\" --> \\\"In what follows,\\\"\", \"\\\"where 0 < \\\\gamma \\\\leq 1\\\" --> Usually, $\\\\gamma \\\\in [0,1)$, or $[0,1]$. Why can't $\\\\gamma = 0$?\", \"The transition notation changes between $\\\\mathbb{P}_a(s_{t+1} | s_t)$ and $\\\\mathbb{P}(s_{t+1} | s_t, a_t)$. I'd suggest picking one and sticking with it to improve clarity.\", \"\\\"to learn a state-action value function, for example the Q-function\\\" --> Why is the Q-function just an example? Isn't is *the* state-action value function? 
That is, I'd suggest replacing \\\"to learn a state-action value function, for example the Q-function\\\" with \\\"to learn a state-action value function, also called the Q-function\\\"\", \"\\\"Q-function calculates\\\" --> \\\"The Q-function denotes\\\"\", \"\\\"the reward feedbacks perfectly\\\" --> \\\"the reward feedback perfectly\\\"\", \"I prefer that the exposition of the perturbed reward MDP be done with C in the tuple. So: $\\\\tilde{M} = \\\\langle \\\\mathcal{S}, \\\\mathcal{A}, \\\\mathcal{R}, C, \\\\mathcal{P}, \\\\gamma \\\\rangle$. This seems the most appropriate definition, since the observed rewards will be generated by $C$.\", \"The setup of the confusion matrix for reward noise over is very clean. It might be worth pointing out that $C$ need not be Markovian. There are cases where C is not just a function of $\\\\mathcal{S}$ and $\\\\mathcal{R}$, like the adversarial case you describe early on.\", \"Section 3) Learning w/ Perturbed Rewards\", \"Theorem 1 builds straightforwardly on Q-Learning convergence guarantee (it might be worth phrasing the result in those terms? That is: the addition of the perturbed reward does not destroy the convergence guarantees of Q-Learning.)\", \"\\\"we firstly\\\" --> \\\"we first\\\"\", \"\\\"value iteration (using Q function)\\\" --> \\\"value iteration\\\"\", \"\\\"Definition 2. Phased Q-Learning\\\" --> \\\"Definition 2. Phrased Q-Learning\\\". I think? Unless you're talking about Phased Q from the Kearns and Singh '99 work.\", \"\\\"It uses collected m samples\\\" --> \\\"It uses the collected m samples\\\"\", \"Theorem 2: it would be helpful to define $T$ since it appears in the sample complexity term. Also, I would suggest specifying the domain of $\\\\epsilon$, as you do with $\\\\delta$.\", \"\\\"convergence to optimal policy\\\" --> \\\"convergence to the optimal policy\\\"\", \"\\\"The idea of constructing MDP is similar to\\\" --> this seems out of place. The idea of constructing which MDP? 
Similar to Kakade (2003) in what sense?\", \"\\\"the unbiasedness\\\" --> \\\"the use of unbiased estimators\\\"\", \"\\\"number of state-action pair, which satisfies\\\" --> \\\"number of state-action pairs that satisfy\\\"\", \"\\\"The above procedure continues with more observations arriving.\\\" --> \\\"The above procedure continues indefinitely as more observation arrives.\\\" Also, which procedure? Updating $\\\\tilde{c}_{i,j}$? If so, I would specify.\", \"\\\"is nothing different from Eqn. (2) but with replacing a known reward confusion\\\" --> \\\"replaces a known reward confusion\\\"\", \"4) Experiments:\", \"Diverse experiments! That's great. Lots of algorithms, lots of environment types.\", \"I expected to see Phrased Q-Learning in the experiments. Why was it not included?\", \"The plots are pretty jagged, so I'm left feeling a bit skeptical about some of the results. The results would be strengthened if the experiments were repeated for more trials.\", \"5) Conclusion:\", \"\\\"despite of the fact\\\" --> \\\"despite the fact\\\"\", \"\\\"finite sample complexity of Q-Learning with estimated surrogate rewards are given\\\" --> It's not really Q-Learning, though. It's a variant of Q-Learning. 
I'd suggest being explicit about that.\"], \"appendix\": [\"\\\"It is easy to validate the unbiasedness of proposed estimator directly.\\\" --> \\\"It is easy to verify that the proposed estimator is unbiased directly.\\\"\", \"\\\"For the simplicity of notations\\\" --> \\\"For simplicity\\\"\", \"\\\"the Phrased Q-Learning could converge to near optimal policy\\\" --> \\\"\\\"the algorithm Phrased Q-Learning can converge to the near optimal policy\\\"\\\"\", \"\\\"Using union bound\\\" --> \\\"Using a union bound\\\"\", \"Same comment regarding $\\\\gamma$: it's typically $0 \\\\leq \\\\gamma < 1$.\", \"Bottom of page 16, the second equation from the bottom, far right term: $c.j$ --> $c,j$.\", \"\\\"Using CauchySchwarz Inequality\\\" --> \\\"Using the Cauchy-Schwarz Inequality\\\"\"], \"references\": \"Loftin, Robert, et al. \\\"Learning something from nothing: Leveraging implicit human feedback strategies.\\\" Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on. IEEE, 2014.\\n\\n\\tKrueger, D., Leike, J., Evans, O., & Salvatier, J. (2016). Active reinforcement learning: Observing rewards at a cost. In Future of Interactive Learning Machines, NIPS Workshop.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting general surrogate reward which has wide applicability, and can be flexibly included alongside a variety of algorithms.\", \"review\": \"## Summary\n\nThe authors present work that shows how to deal with noise in reward signals by creating a surrogate reward signal. The work develops a number of results including: showing how the surrogate reward is equal in expectation to the true reward signal, how this doesn't affect the fixed point of the Bellman equation, how to deal with finite and continuous rewards and how the convergence time is affected for different levels of noise. They demonstrate the value of this approach with a variety of early and state-of-the-art algorithms on a variety of domains, and the results are consistent with the claims.\n\nIt would be useful to outline how prior work approached this same problem and also to evaluate the proposed method against existing approaches to the same problem. I realise that this is the first method that estimates the confusion matrix rather than assuming it is known a priori but there are obvious ways around this, e.g. the authors' first experiment assumes the confusion matrix is known, so this would be a good place to compare with other competing techniques. Also, the authors have a way of estimating this, so they could plug it into the other algorithms too.\n\nI also have some concerns about the clarity and precision of the proofs, although I do not have any reason to doubt the Lemma/Theorem correctness (see below).\n\nThe weakest part of the approach is in how the true reward is estimated in order to estimate the confusion matrix. It uses majority vote (which is only really possible in the case of finite rewards with noise sufficiently low that this will be a robust estimate). 
Perhaps some other approaches could be explored here too.\n\nFinally, there is discussion about adversarial noise in rewards at the beginning but I am not sure the theory really addresses it, nor the evaluations.\n\nNonetheless, I do not know whether the claim of originality is true (in terms of the estimation of the confusion matrix). If it is, then the work is a significant and interesting advance, and is clearly widely applicable in domains with noisy rewards. It would be interesting to see a more tractable approach for continuous noise too, but this would probably involve assumptions (smoothness? Gaussianity?), and doesn't impact the value of this work.\n\n## Detailed notes\n\nThere is a slight sloppiness in notation in equation (1). This uses \\tilde{r} as a subscript of e, but r is +1 or -1 and the error variables are e_+ and e_- (not e_{+1} and e_{-1}).\n\n\nThe noise levels in Atari (Figure 3) show something quite interesting which could be commented upon. For noise below 0.5 the surrogate reward works roughly similarly to the noisy reward, but when the noise level goes above this, the surrogate reward clearly exploits the increased information content (similar to a noisy binary channel with over 0.5 noise). This may have implications for adversarial noise.\", \"there_are_also_some_issues_with_the_proofs_which_i_spotted_outlined_below\": \"### Lemma 1 proof\nThe proof of Lemma 1, I think, fails to achieve its objective. The first pair of equations is not a rewrite of equation (1). I believe that the authors intend for this to be a consequence of Equation (1) but do not really demonstrate this clearly. Also, the authors seem to switch between binary rewards -1 and +1 and two levels of reward r- and r+ leading to some confusion. I would suggest the latter throughout as it is more general but involves no more terms.\n\nI suggest the following as an outline for the proof. 
It would help for them to define what they mean by the different rhats (as they currently do) and explain that these values are therefore:\n\n rhat- = [(1 - e+) r- - e- r+ ]/(1 - e+ - e-)\n rhat+ = [(1 - e-) r+ - e+ r-]/(1 - e+ - e-)\n\nfrom equation (1). What is left is for them to actually prove the Lemma, namely that the expected value of rhat is:\n\n E(rhat) = p(rhat=rhat-) rhat- + p(rhat=rhat+) rhat+ = E(r)\n\nwhere the probabilities relate to the surrogate reward taking their respective values. And just stylistically, I would avoid writing \\\"we could obtain\\\" and simply write \\\"we obtain\\\".\n\nLemma 2 achieves this more clearly with greater generality.\n\n\n### Theorem 1 proof\nAt the end of p13, the proof of the expected value loses track of the chosen action a. I would suggest the authors replace: $$\\\mathbb{P}'(s,s',\\\hat{r})$$ with $$\\\mathbb{P}'(s,a, s',\\\hat{r})$$ then define it. Likewise $$\\\mathbb{P}(s,s')$$ should be $$\\\mathbb{P}(s,a,s')$$ (and also defined).\", \"i_am_also_a_little_uncomfortable_with_the_switch_from\": \"$$max_{b \\\in \\\mathcal{A}} | Q(s',b) - Q*(s',b)|$$ in the second to last line of p13, which refers to the maximum Q value associated with some state s', to $$||Q-Q*||_{\\\infty}$$ in the next line which is the maximum over all states and actions. The equality should probably be an inequality there too.\n\nThroughout this, the notation could be much better defined, including how to interpret the curly F and how it acts in the conditional part of an expectation and variance.\n\nFinally, there is a bit too free a use of the word \\\"easily\\\" here. If it were easy, then the authors could do it more clearly I think. Otherwise, please refer to the appropriate result in the literature.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting but seems to tackle a too narrow problem\", \"review\": \"The paper aims at studying the setting of perturbed rewards in a deep RL setting. Studying the effect of noise in the reward function is interesting. The paper is quite well-written. However, the paper studies a rather simple setting, the limitations could be discussed more clearly, and one or two elements are unclear (see below).\n\nThe paper first assumes the interesting case where the generation of the perturbed reward is a function of S*R into the perturbed reward space. But then the confusion matrix does *not* take into account the state, which is justified by \\\"to let our presentation stay focused (...)\\\". I believe these elements should at least be clearly discussed. Indeed, in that setting, the theorems given seem to be variations of existing results and it is difficult to understand what the message behind the theorems is.\n\nIn addition, it is assumed that the confusion matrix C is known or estimated from data but it's not clear to me how this can be done in practice. In equation 4, how do you have access to the predicted true rewards?\", \"additional_comments\": [\"The discount factor can be 0 but cannot, in general, be equal to 1. So the equation in paragraph 2.1 \\\"0 < \u03b3 \u2264 1\\\" is wrong.\", \"The paper mentions that \\\"an underwhelming amount of reinforcement learning studies have focused on the settings with perturbed and noisy rewards\\\" but there are some works on the subject (e.g., https://arxiv.org/abs/1805.03359) and a discussion about the differences with the related work would be interesting.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Syeben09FQ | Evaluating GANs via Duality | [
"Paulina Grnarova",
"Kfir Y Levy",
"Aurelien Lucchi",
"Nathanael Perraudin",
"Thomas Hofmann",
"Andreas Krause"
] | Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem. This is especially troublesome given the lack of an evaluation metric that can reliably detect non-convergent behaviors. We leverage the notion of duality gap from game theory in order to propose a novel convergence metric for GANs that has low computational cost. We verify the validity of the proposed metric for various test scenarios commonly used in the literature. | [
"Generative Adversarial Networks",
"GANs",
"game theory"
] | https://openreview.net/pdf?id=Syeben09FQ | https://openreview.net/forum?id=Syeben09FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SygwPQgLlE",
"Hyl8kexEyE",
"ByeTMuzaA7",
"B1xtw3-50Q",
"BJesks4t07",
"Bye_KjCLCm",
"rkeV_HNVCX",
"H1eMeNcf0m",
"SJx-uhFn6X",
"Skgo3KthpX",
"BJgjorY2TX",
"HkeCF4Y3a7",
"HJearNF36X",
"H1gYbPGh3X",
"B1x8hZ4q27",
"S1xlTDG9nX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545106270620,
1543925726262,
1543477268732,
1543277665351,
1543224034626,
1543068543653,
1542894955590,
1542788074147,
1542392937127,
1542392243328,
1542391203351,
1542390918215,
1542390852653,
1541314305374,
1541190062129,
1541183415850
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1049/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1049/AnonReviewer1"
],
[
"~Frans_A_Oliehoek1"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"~Frans_A_Oliehoek1"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"~Frans_A_Oliehoek1"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1049/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1049/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1049/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1049/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers still argue for rejection for the submitted paper. The AC thinks that this paper should be published at some point, but for now it is a \\\"revise and resubmit\\\".\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Revise and resubmit\"}",
"{\"title\": \"Practical evaluation of the duality gap\", \"comment\": \"1) Thank you for your suggestion. We will change the term \\\"evaluation\\\" to \\\"monitoring\\\".\n\n2) There are two good reasons to use the duality gap as we compute it in practice:\n a) If we compute an approximate worst case D/G (say up to some \\\epsilon), then this affects the duality gap up to a factor of \\\epsilon. Therefore, an approximate solution translates to an approximate duality gap.\n b) As we show in the experiments, our approximation to the duality gap actually gives a good handle on the \\\"quality\\\" of the generator.\n This is not limited to toy examples, but extends to real-world settings, such as MNIST and cosmological images (as we describe in section 5).\"}",
"{\"title\": \"Two issues still need to be addressed\", \"comment\": \"I understand the importance of convergence analysis and stopping criteria for GANs and I acknowledge the duality gap has not been introduced in the GAN community. However, this is *NOT* the reason to ignore the existing literature on purpose. I am glad the authors added the related references.\", \"i_think_the_following_two_issues_are_still_needed_to_be_addressed\": \"1, The term \\\"evaluation\\\" seems misinterpreted by the authors. As the authors agreed in the reply, the duality gap is only valid for a fixed objective. It is meaningless to use such a criterion for \\\"evaluating\\\" different GANs with different objectives, i.e., for comparing different GANs. In this sense, the duality gap is different from FID and inception score. It would be better if the authors switch to other terminology rather than saying \\\"evaluating GANs\\\".\n\n2, My question that \\\"how to evaluate such a criterion in practice in the GAN scenario is not clearly explained\\\" is not answered satisfactorily. I carefully read the Appendix C about the optimality issue in computing the true duality gap. I cannot agree with the claim \\\"This suggests that as long as one uses the same number of optimization steps when comparing different models, the suboptimality of the solution is empirically not an issue\\\". Leaving aside that the empirical results are only on synthetic datasets, such a claim even conflicts with the motivation of the paper. Actually, with an even weaker assumption that one can obtain an epsilon-optimal solution to the discriminator, [1] already shows the SGD algorithm converges to a stationary point. There is no need to compute the duality gap for screening the convergence. This is, in fact, an extremely strong assumption! \n\nConsidering the difficulty of obtaining the global optimum, I think it will be good enough if the authors can characterize the meaning of the ``duality gap'' without the optimal solution. 
Otherwise, only introducing the \\\"duality gap\\\" concept from convex-concave optimization is not significant for a separate paper.\\n\\n\\n[1] Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, and Jason D. Lee. On the Convergence and Robustness of Training GANs with Regularized Optimal Transport. NIPS 2018.\"}",
"{\"comment\": \"Thanks. I appreciate the continued discussion.\", \"let_me_try_and_wrap_up\": \"-stochasticity: I think there was some confusion about terminology; we also did sample-based evaluation (so it was stochastic), but drew new data points (directly from the mixture-of-Gaussians distribution). Indeed, we did not explicitly describe the technique of splitting into 3 data sets. That is a perfectly valid statement.\\n\\n-Point 6: My objection was to the statement that our paper does not discuss limitations in applying exploitability (\\\"notable differences are [...] we discuss how the suboptimality of the solution affects the quality of the score\\\"). But our paper is extremely explicit about this.\\n\\nI agree that your paper is also explicit on this front in appendix C. I still feel that certain formulations in section 4 are open to misinterpretation. (\\\"we discuss the appropriate way to estimate the metric using samples\\\", \\\"we describe a method for an efficient and practical computation of the duality gap.\\\", \\\"For all experiments we report both the duality gap (DG) and the minimax loss\\\"), but that is for you to consider.\\n\\n-objective. This is a matter of perspective. I would argue that the designer should pick one true, 'test' objective (say the vanilla GAN objective with log). If a WGAN (or any other variant) learns better, it should perform well on this test objective (even if it was trained on another). Exploitability gives a way to compare a GAN and a WGAN model on the test objective.\\n\\nAgain, thank you for the discussion. I find it very interesting work, and I would welcome any later discussions via email or another form if that would be of interest. \\n\\nBest,\\n-Frans\", \"title\": \"Probably the last comments needed\"}",
"{\"title\": \"Thank you and further clarifications\", \"comment\": \"Thank you for your interest and suggestions. Please find our comments below:\\n\\n- Regarding stochasticity\\nHere we mean the stochasticity in estimating the performance measure. It seems you assume direct access to exact objective values in your paper. However, in practice, we can only estimate these via samples. In this case, we show that one needs to carefully split the training set into 3 parts in order to maintain an unbiased estimate of the expected duality gap. See an elaborate discussion of this in Section 4 of our manuscript.\\n\\n- Regarding your objection to point (6)\\nWe discuss this in Appendix C \\u201cAnalysis of the quality of the empirical DG\\u201d. In particular, see the paragraphs \\u201cCollapsed worst case generator\\u201d and \\u201cSuboptimal solutions due to the optimization\\u201d, where we discuss what happens in the case of mode collapse of the worst case generator and the effect of the number of optimization steps on the quality of the solution.\\n\\nWe disagree that we are not clear that the theoretical gap is not what is obtained in practice. From our manuscript: \\u201cThe theoretical assumption appearing in the proof in Appendix A is that the discriminator and generator have unbounded capacity and we can obtain the true minimizer and maximizer when computing u_worst and v_worst, respectively. This, however, is not tractable in practice. Furthermore, it is well known that one common problem in GANs is mode collapse. This raises the question of how the duality gap metric would be affected if the worst generator that we compute is collapsed itself.\\u201d Moreover, we dedicate an entire section (Section 4) to the **estimation** of the theoretical gap in practice. Our extensive experiments do show that this estimation is very effective in evaluating models, yet practical to compute. 
\\n\\n- Regarding comparison of different objectives\\nIf one wants to compare two models, e.g. a vanilla GAN and a WGAN, one has to consider that the two objective functions are different. Specifically, the former contains a log, whereas the latter does not, and therefore the ranges of the values will be different. Furthermore, the landscape of neural networks is still the object of intensive research and we therefore did not want to make any pre-emptive claims without having done a more thorough investigation (which we intend to do as future work).\"}",
"{\"comment\": \"Thank you for discussing.\\n\\n\\nI think 3, 4, 5 are very fair points. Indeed, I am quite excited about seeing a thorough evaluation of this metric!\\n\\nThe discussion of different data partitions (point 1) is useful. Since we were using a synthetic mixture-of-Gaussians task only, this was not an issue for us: we could simply generate new points for all of these on the fly. \\n\\nHowever, I am unsure what you mean by \\\"it seems that your work does not take into account the stochasticity of GANs\\\". I cannot imagine how one could have a GAN that is not stochastic? So we certainly deal with that stochasticity. (Also, we tackle the GAN using mixed strategies, if that is what is intended?)\\n\\nOne of the points I object to is (6). I cannot find the location in the paper where the sub-optimality of the practically applied algorithm is discussed? Actually, I think that most of the formulations in your submission seem to suggest that you can compute the duality gap (and even efficiently!). Of course, this is not the case: computing the duality gap requires computing 2 best responses (solving 2 non-convex optimization problems), and we cannot find the optimal solutions in general. In contrast, our paper is extremely up front about this: this is why we introduce \\\"resource bounded best responses\\\", which provide as good a best response as one can compute given finite resources.\\n\\nAs such, I believe what you actually compute is what one could call the \\\"resource bounded duality gap (RB-DG)\\\", which is precisely our measure of (resource bounded) exploitability, eq. (11) in our arxiv paper https://arxiv.org/abs/1806.07268 ?\\n\\nAs for point 2, I think proposing such techniques is useful, but it is not quite clear to me where the merit of these techniques is evaluated. As above, I think it could be useful to reformulate the terminology, though. 
Really, \\\"practical and efficient estimation of the duality gap for GANs\\\" does (as far as we know) not exist?\\n\\n\\nI think that in terms of motivating the metric, there are some points that we cover in section 6 / appendix B.2 of our arxiv paper that could be useful to adopt:\\n\\n-the reason why worst-case generator performance is not directly useful is that we do not know the value of the game (only in the infinite-capacity setting does the value v* = log 4 hold; for a finite parametric setting this value is unknown, however). As such, these metrics would allow comparing different generators, but are not useful for knowing if one is far from an equilibrium (even if one could compute this quantity exactly!)\\n\\n-I think exploitability is extremely important to be certain about the performance of the generator:\\n\\\"In particular, the exploitability of the classifier actually may provide information about the quality of the generator: if the generator holds up well against a perfect classifier, it should be close to the data distribution.\\\"\\n\\n-I actually disagree with a statement in the conclusion of your paper:\\n\\\"Of course, a downside is that - as most loss functions - the values obtained from these metrics are architecture and objective dependent, and can therefore not directly be compared\\\"\\nIn contrast, this is one of the main strengths; we wrote:\\n\\\"However, [resource bounded exploitability] is still useful for comparing different found solution pairs [...] as long as we use the same computational resources to compute approximate best responses against them. Negative values of [resource bounded exploitability] should be interpreted as \\u201crobust up to our computational resources to attack it\\u201d.\\\"\", \"title\": \"would like to see this published, but some formulations could be adapted\"}",
"{\"title\": \"We discuss the differences below\", \"comment\": \"Thank you for pointing out your relevant paper, which we will cite in the revised version. The exploitability measure is indeed the mixed strategy formulation of the duality gap metric. We do agree that the metric is a very natural metric for convergence in minimax games, and is well known in optimization as also pointed out by Reviewer 1.\", \"some_of_the_notable_differences_are\": \"1. *Stochasticity*. It seems that your work does not take into account the stochasticity of GANs. This aspect makes the computation of our metric more difficult as there are subtleties on how one needs to use the following 3 (disjoint) sets: a) training, b) adversary finding and c) test set in order to obtain an unbiased estimate. We discuss this in detail in Section 4.\\n\\n2. *Practical computation*. One crucial aspect of our work is to discuss an efficient practical computation of the metric for GANs. We suggest to initialize the models with the last version of the generator/discriminator, which makes the optimization more efficient. We also empirically demonstrate its efficiency in terms of computation time. We also explore a further approximation by using snapshots from the history.\\n\\n3. *Empirically demonstrating the desirable properties of the metric/Showing the metric works*. While in your paper, you evaluate the algorithms using the exploitability metric, we evaluate the evaluation method i.e. the metric. We did extensive experiments showcasing the duality gap metric detects convergent and non-convergent behavior, stable mode collapse, sample quality and can be applied to any domain and any minimax GAN formulation (e.g. WGAN). \\n\\n4. *Demonstrating how a practitioner can use the metric*. We demonstrate how the curves look like in specific GAN scenarios, and show how the metric can be used as a monitoring, debugging and tuning tool.\\n\\n5. *Large scale experiments and comparison to baselines*. 
In our work we perform an extensive experimental study on the following real datasets: CIFAR10, MNIST and a cosmology dataset. Furthermore, we compare against commonly used strong baselines (FID and Inception score) and discuss the differences. We also compare against domain-specific metrics developed by experts for the cosmology dataset and show high correlation.\\n\\n6. *Discussing cons and suboptimality of the approximation*. As the practical solution is an approximation of the theoretical metric, we discuss how the suboptimality of the solution affects the quality of the score. In particular, we evaluate and discuss what happens when the most adversarial G for a fixed D collapses. We believe these are important aspects for the properties of the practical version of the metric. At the same time, we also show that duality-gap gives a direct handle to mode collapse. This is formalized in Proposition 1 of our paper.\\n\\nAgain, thank you for highlighting your paper. We will add a discussion emphasizing the differences between the two approaches in the revised version.\"}",
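The warm-started approximation described in point 2 can be sketched on a toy problem (our own stand-in, not the paper's code): we use the simple scalar saddle game M(u, v) = u^2 - v^2, whose exact duality gap at (u, v) is u^2 + v^2, and approximate each best response with a fixed number of gradient steps initialized from the current player, as the authors suggest.

```python
# Toy sketch (not the paper's code): approximate duality gap for the
# scalar saddle game M(u, v) = u**2 - v**2, whose exact gap is u**2 + v**2.
# Each best response is approximated by k gradient steps, warm-started
# from the current player, mirroring the warm-start idea in point 2.

def M(u, v):
    return u ** 2 - v ** 2

def approx_duality_gap(u, v, steps=200, lr=0.1):
    v_worst = v                            # warm start the adversarial maximizer
    for _ in range(steps):
        v_worst += lr * (-2.0 * v_worst)   # gradient ascent in v
    u_worst = u                            # warm start the adversarial minimizer
    for _ in range(steps):
        u_worst -= lr * (2.0 * u_worst)    # gradient descent in u
    # DG(u, v) ~= max_{v'} M(u, v') - min_{u'} M(u', v)
    return M(u, v_worst) - M(u_worst, v)

dg = approx_duality_gap(1.0, 0.5)
assert dg >= 0.0                  # the gap is always non-negative
assert abs(dg - 1.25) < 1e-3      # close to the exact gap u^2 + v^2 = 1.25
```

In the actual GAN setting the two inner optimizations train neural adversaries for a fixed number of steps, so the warm start mainly saves computation; the toy only illustrates the mechanics of the estimate.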
"{\"comment\": \"I actually think that this paper is on the right track to propose a measure of convergence for GANs. So much so, in fact, that we have proposed the same measure, which we call exploitability, published at BNAIC/BeNeLearn: https://arxiv.org/abs/1806.07268\\n\\nI would happily discuss any potential differences (e.g., we formulate GANs in terms of mixed strategies inherently), but my impression is that the notions are the same?\", \"title\": \"duality = exploitability?\"}",
"{\"title\": \"[Part 2/2] The metric gives a natural solution to many open challenges in GANs\", \"comment\": \"We summarize the properties and advantages of our approach in the table shown below, including a comparison to Inception score (INC) and FID.\\n\\n| Property\\\\Metric | INC | FID | minimax |\\n|---|---|---|---|\\n| Sensitivity to mode collapse | moderate | high | high |\\n| Sensitivity to mode invention | low | high | high |\\n| Sensitivity to intra-mode collapse | low | high | high |\\n| Sensitivity to visual quality and transformations | moderate | high | high |\\n| Computational: Fast | yes | yes | yes |\\n| Computational: Needs labeled data or a pretrained classifier | yes | yes | no |\\n| Computational: Can be applied to any domain without change | no | no | yes |\\n\\nWe hope to have addressed your concerns and that our reply is detailed and informative enough for the reviewer to reconsider their judgement. We are looking forward to your reply.\", \"references\": \"[1] Ermon et al. Generative Adversarial Networks, [cs236, Stanford], <http://cs236.stanford.edu/assets/slides/cs236_lecture9.pdf#page=19>\\n[2] Lil\\u2019Log https://lilianweng.github.io/lil-log/2017/08/20/from-GAN-to-WGAN.html#lack-of-a-proper-evaluation-metric\\n[3] Mescheder et al. Which Training Methods for GANs do actually converge? [ICML 2018] arXiv:1801.04406\"}",
"{\"title\": \"[Part 1/2] The metric gives a natural solution to many open challenges in GANs\", \"comment\": \"Thank you for the review. We appreciate the comments on the nice flow of the paper and the carefully designed experimental section. Your two concerns are (1) \\u201cI was not very familiar with GANs, thus I'm not sure on the significance of paper\\u201d and (2) \\u201cI'm not quite impressed by the advantages of proposed metrics\\u201d, which we address below:\\n\\n1. Significance\\nGANs are a 2-player minimax game, which makes their objective function different than the more commonly encountered likelihood optimization problems, thus yielding new challenges in terms of optimization and evaluation. For evaluating likelihood-based models, a common metric to use is the test loss, whereas for minimax problems it is not clear what the equivalent would be [1]. \\nIn particular, with the absence of such a metric, practitioners are facing several problems, such as (i) determining whether the model has converged, (ii) determining when to stop training (see Fig. 12 and 13), (iii) having a meaningful curve throughout training as the discriminator and generator losses are not intuitive, (iv) comparing different runs and (v) debugging the model in the sense of (un)stable mode collapse, non-convergence etc. See for example [1]: \\u201cGenerative adversarial networks are not born with a good objection function that can inform us about the training progress. Without a good evaluation metric, it is like working in the dark. No good sign to tell when to stop; No good indicator to compare the performance of multiple models.\\u201d. Current methods for stopping criteria rely on visual inspection and/or using some sample-quality metric as a proxy. However this is not principled and it is unclear how to compute the metrics in non-image domains. 
Thus another open challenge with GANs is (vi) having a useful metric that is domain independent.\\n\\nIn this work, we argue that such a natural metric exists, namely the duality gap and its minimax part. We show how to compute it in practice in an efficient way and demonstrate its desirable properties across different GAN pitfalls, domains and GAN objectives. Thus, in our opinion, this work is of significance for the GAN community, not only for practical purposes as it gives a solution to the previously mentioned open problems (i-vi) both in theory and practice, but also from research perspective as it gives a reliable non-convergence metric to help analyse which methods actually converge, which is one of the central issues of GANs. Note that current practical analyses mainly focus on 2-dimensional problems where the solution can be visually inspected due to the lack of such a metric [2]. \\n\\n2. Advantages of the proposed metrics\\nFrom a theoretical perspective, the DG is very natural for the detection of non-convergent behaviors, it is always non-negative and is zero if and only if the model has reached a (Nash) equilibrium.\\n\\nThe experimental results presented in the paper provide a thorough evaluation of the metric introduced in our submission. We included various tests that focus on common pitfalls encountered with GANs and demonstrated that the proposed metric can detect these corner cases. In particular:\\n\\nIn experiment 5.1 we demonstrate that the *DG yields a meaningful curve* throughout training and detects convergent and non-convergent behaviours. 
Please note that the commonly used metrics such as FID and Inception score cannot be applied to these datasets.\\n\\nIn experiment 5.2 we show that the *DG detects stable mode collapse* and can distinguish between stable and unstable collapses.\\n\\nIn experiment 5.3 we empirically demonstrate that the *minimax metric detects visual sample quality (adding noise, Gaussian swirl and blur) and is very sensitive to change of modes* (mode dropping, mode invention and intra-mode collapse). It works better than Inception score, and as well as FID. However, both the Inception score and FID rely on a pre-trained Imagenet classifier, whereas our metrics need no labeled data or a pre-trained classifier.\\n\\nFinally, in experiment 5.4 we show the *DG metric can be applied on another GAN minimax formulation (WGAN) and on another domain that is not natural images (cosmology data)*. We find that the metric is highly correlated with a domain specific measure of performance used in cosmology. Note that the domain-specific metric requires expert knowledge and its computation is very slow, unlike the DG. Furthermore, the Inception score and FID cannot be applied on this data as they require an imagenet classifier (i.e. trained with labeled natural images).\"}",
"{\"title\": \"Our usage of duality in GANs is in terms of evaluation, not optimization\", \"comment\": \"Thank you for reviewing our paper. Please find our replies inline:\\n\\n1. \\u201cThough training GAN can have some useful applications, the contribution of the submission is pretty moderate.\\u201d\\nFirst, we would like to stress that the contribution of our paper is not to train GANs. Instead, our contribution is to propose a reliable convergence metric for GANs that can be computed efficiently. The need for such a convergence metric has been pointed out, for example, in the context of analysing the convergence of GANs [1] and understanding when to stop GAN training [2, 3] (see also Fig. 12 and 13). Indeed, most empirical convergence analyses are for 2-dimensional problems due to the lack of such a metric (see for example Mescheder et al. [ICML 2018]: \\u201cMeasuring convergence for GANs is hard for high dimensional problems, because we lack a metric that can reliably detect non-convergent behavior. We therefore first examine the behavior [...] on simple 2D examples where we can assess convergence using an estimate of the Wasserstein-1-distance.\\u201d [1]). Furthermore, since GANs are framed as a 2-player minimax game, the stopping criterion is unclear in comparison to the more traditional likelihood training [2]. In this work we argue that there is a convergence metric suitable for the general GAN game.\\nIn particular, our main contributions are:\\n- Propose the duality gap as a natural convergence metric and the minimax metric as a performance metric in GANs\\n- Show how an unbiased estimate of the metrics can be efficiently computed in practice without slowing down training\\n- Design experiments that target all of the common pitfalls of GANs (stable and unstable mode dropping/invention, intra-mode collapse, non-image domains, distortion of visual quality etc.) 
and demonstrate empirically that the metrics are able to capture and detect all of those\\n\\nThus the two metrics show very desirable properties both in theory and practice. We believe that the DG metric is very helpful as a monitoring tool for any practitioner training a GAN. The benefits are: (i) knowing whether the model has converged; (ii) knowing when to stop training; (iii) having a meaningful curve throughout training that reflects the performance of the model (i.e. whether it\\u2019s improving or not); (iv) comparing different runs and hyperparameter searches; and (v) debugging. As computing the metric is very efficient in practice, this comes at no significant computational cost, and unlike other metrics it requires no labels or a pre-trained classifier and can be applied to any minimax GAN formulation and any domain, as demonstrated empirically.\\nFurther, it allows research on the non-convergence issue of GANs to move beyond 2-dimensional problems where solutions can be visually analysed. We have updated the write-up to make our contribution clearer, both with respect to existing work and the importance of a convergence metric for the community.\\n\\n2. \\u201cDuality-inspired approaches, embedded also in optimization have already been proposed\\u201d\\nAlthough the reference cited by the reviewer does discuss duality for GANs, it does so in a very different context since it discusses a Lagrangian view to train GANs, while we are interested in using the duality gap as a convergence measure (and not as a training criterion). One problem we focus on in our submission is to demonstrate how to efficiently compute such a measure during training; we therefore do not modify the training objective. We added a brief discussion in the related work section.\\n\\n3. 
\\u201cThe notion of generator and discriminator networks with unbounded capacity (which is an assumption in 'Proposition 1') lacks formal definition\\u201d\\nAs noted by the reviewer, we re-used the notion originally introduced in the GAN paper. Informally, we consider the capacity as the flexibility of a model to learn a variety of functions. More formally, we regard the capacity as the size of the space that can be approximated with the generator and discriminator. In most cases, neural networks are universal approximators and can therefore approximate any function (i.e. they are dense in the target space), thus leading us to assume they have \\u201cunbounded capacity\\u201d.\\n\\nWe hope that we have cleared up any confusion and are looking forward to the reviewer\\u2019s reply.\", \"references\": \"[1] Mescheder et al. Which Training Methods for GANs do actually converge? [ICML 2018] arXiv:1801.04406\\n[2] Chiu et al. GAN Foundations, [CSC254, University of Toronto], <https://www.cs.toronto.edu/~duvenaud/courses/csc2541/slides/gan-foundations.pdf#page=9>\\n[3] Ermon et al. Generative Adversarial Networks, [cs236, Stanford], <http://cs236.stanford.edu/assets/slides/cs236_lecture9.pdf#page=19>\"}",
"{\"title\": \"[Part 2/2] An important open issue in GANs is the lack of a convergence metric: We introduce the duality gap to GANs and show how to compute and use it efficiently in practice\", \"comment\": \"3. \\u201c...DG is only able to screen the optimization convergence and the solution quality w.r.t. the same objective\\u2026\\u201d\\nYes, we did discuss this aspect in the conclusion. Note that convergence curves could potentially be normalized, but this requires further investigation, which we plan to do as future work. Given a fixed objective, the DG yields a curve that can be used for: debugging, hyperparameter tuning, understanding whether the model has converged or whether it suffers from stable or unstable collapse, and of course, as mentioned earlier, it serves as a stopping criterion.\\n\\nFurther, a recent interest in the field is to understand which regularizer stabilizes GAN training by keeping the objective fixed and changing the regularizer [1, 6, 7]. This is yet another example where the DG is useful, as it allows examining exactly the effect of the regularizer via a meaningful curve. Hence, the metric, as is, is useful both in practice and for pushing the research efforts forward. \\n\\n4. \\u201chow to evaluate such criterion in practice in GAN?\\u201d\\nPlease refer to Section 4 \\u201cEstimating the DG metric for GANs\\u201d, where we explain the details of the practical computation. We also include some subtleties on how to accurately use the train/val/test sets in order to get an unbiased estimate for the estimation of the DG for GANs.\\n\\u201cWithout the optimal solution, what is the meaning of the ``duality gap'' should be explained. (theory/empirical) What will happen if we only obtain the suboptimal solutions which themselves are model collapsed?\\u201d. \\nThank you for raising this point. We added a section to the appendix addressing this question. 
Both theoretically and empirically we analyse how the suboptimality of the solution affects its quality. In particular, we focus on (a) the case where the worst generator used for the computation of the maxmin of DG is itself collapsed, and (b) investigate how the number of optimization steps affects the solution. In summary, we find that this is not an issue, both in terms of theory and practice. Please see Appendix Section C for more details.\\n\\n5. \\u201cthe min-max is the variational form of some divergences, which the GANs are directly optimizing\\u201d\\nThe estimation of divergences is difficult, whereas we show we can efficiently approximate the DG. \\n\\nTo conclude, based on the review we have updated the manuscript to more clearly emphasize its contribution, which we believe was the main concern raised by the reviewer.\", \"references\": \"[1] Mescheder et al. Which Training Methods for GANs do actually converge? [ICML 2018] arXiv:1801.04406\\n[2] Soumith Chintala. How to train a GAN?, NIPS Tutorial, 2016\\n[3] Chiu et al. GAN Foundations, [CSC254, University of Toronto], <https://www.cs.toronto.edu/~duvenaud/courses/csc2541/slides/gan-foundations.pdf#page=9>\\n[4] Salimans et al. Improved Techniques for Training GANs [NIPS 2016] arXiv:1606.03498\\n[5] Ermon et al. Generative Adversarial Networks, [cs236, Stanford], <http://cs236.stanford.edu/assets/slides/cs236_lecture9.pdf#page=19>\\n[6] Fedus et al. Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. [ICLR 2018], arXiv:1710.08446\\n[7] Kurach and Lucic et al. The GAN Landscape: Losses, Architectures, Regularization, and Normalization. arXiv:1807.04720\"}",
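The disjoint three-way split mentioned in point 4 might look roughly like this (a hypothetical sketch; the split names and proportions are our own assumptions, not the paper's):

```python
import random

# Hypothetical sketch (names/proportions are ours): split the data into
# (a) a training set for the model, (b) an "adversary" set used to train
# the worst-case G/D, and (c) a held-out test set on which the objective
# is evaluated, so the duality-gap estimate is not biased by reused samples.

def three_way_split(data, seed=0, fractions=(0.6, 0.2, 0.2)):
    data = list(data)
    random.Random(seed).shuffle(data)     # seeded shuffle for reproducibility
    n_train = int(fractions[0] * len(data))
    n_adv = int(fractions[1] * len(data))
    train = data[:n_train]
    adversary = data[n_train:n_train + n_adv]
    test = data[n_train + n_adv:]
    return train, adversary, test

train, adversary, test = three_way_split(range(100))
assert len(train) == 60 and len(adversary) == 20 and len(test) == 20
assert set(train).isdisjoint(adversary)
assert set(train).isdisjoint(test)
assert set(adversary).isdisjoint(test)
```

The key property is disjointness: if the adversary were trained and evaluated on the same samples it could overfit them, inflating (or deflating) the estimated gap.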
"{\"title\": \"[Part 1/2] An important open issue in GANs is the lack of a convergence metric: We introduce the duality gap to GANs and show how to compute and use it efficiently in practice\", \"comment\": \"We thank the reviewer for the thoughtful comments. In the following we address their concerns and questions:\\n\\n1. \\u201cPlease justify the novelty and validity\\u201d\\nFirst, we would like to emphasize that the lack of a convergence metric for GANs is an open issue in the community. As discussed in the introduction, the need for such a metric is crucial and affects several important aspects such as:\\n\\n- *Convergence analysis*. Over the past years, GANs have been the subject of intense research in the community, giving rise to a plethora of GAN models as well as training methods. In practice, it has been observed that GANs might not converge under certain settings. While this non-convergence behavior can in practice be visually recognized for some low-dimensional examples (such as a 2D mixture of Gaussians), this is in general more difficult in high-dimensional spaces due to the lack of a convergence metric. This problem is actually often discussed in the literature, see e.g. Mescheder et al. [ICML 2018]: \\u201cMeasuring convergence for GANs is hard for high dimensional problems, because we lack a metric that can reliably detect non-convergent behavior. We therefore first examine the behavior [...] on simple 2D examples where we can assess convergence using an estimate of the Wasserstein-1-distance.\\u201d [1]\\n- *Stopping criteria and meaningful curve*. GANs are known to be hard to train in practice [2]. One of the common challenges practitioners face is deciding when to stop training. See for example [3]: \\u201cGAN foundations: cons: Unclear stopping criteria\\u201d. In particular, it is well known that the curves of the discriminator and generator losses oscillate and are non-informative as to whether the model is improving or not (see Fig. 
12 and 13). This is especially troublesome when a GAN is trained on non-image data, in which case one might not be able to use visual inspection or FID/Inception score as a proxy.\\n- *Domain-independent evaluation metric*. Commonly used evaluation metrics such as FID and Inception Score are mainly suitable for natural images as they rely on a pretrained Imagenet classifier. This is also a problem that is commonly discussed in the literature, see e.g. [4]: \\u201cGenerative adversarial networks are a promising [...] that has so far been held back by unstable training and by the lack of a proper evaluation metric.\\u201d. Instead, the metric suggested in the paper does not require any specific type of data and was, for example, shown empirically to generalize to cosmological data.\\n\\nHence, the metric we propose is a more generic tool that can serve as a) a monitoring tool to help practitioners throughout training, b) a domain-independent metric that can help spread the use of GANs to non-image domains.\\n\\nThe duality gap (DG) and the minimax value are natural metrics for this, as they are well known to capture exactly that. As rightfully pointed out by the reviewer, the duality gap is a well-known notion in optimization and our contribution is its introduction as a metric for GANs. An important aspect we discuss in the paper is an efficient way to estimate the duality gap without slowing down training. Note that although the two metrics may seem \\u201ctoo natural\\u201d from an optimization point of view, they are simply **not** used in the community, despite the need for them as we discussed earlier.\", \"see_for_example_salimans_et_al\": \"\\u201cGenerative adversarial networks lack an objective function, which makes it difficult to compare performance of different models.\\u201d [4] and \\u201cGAN optimization challenges: No robust stopping criteria in practice (unlike likelihood based learning)\\u201d [5]. 
In this work, we argue that such a metric does exist and it indeed comes naturally from the objective function. This is also what our experiments demonstrate.\\n\\n2. \\u201cThe paper ignores rich literatures in optimization...\\u201d\\nYes, we do agree, but note that (to the best of our knowledge) almost all the existing literature focuses on solving minimax problems with convex-concave objectives and therefore existing proof guarantees do not apply to GANs. Our contribution does not relate to optimising GANs, but instead lies in showing that the duality gap can be empirically computed and yields good estimates of the convergence of a GAN. We revised the text to clearly emphasize this and also included the suggested reference.\"}",
"{\"title\": \"Please justify the novelty and validation, and explain the computation details\", \"review\": \"In this paper, the authors proposed the duality gap as a criterion for evaluating the training of GANs. To justify the proposed criterion, the authors designed empirical experiments on both synthetic and real-world datasets to demonstrate the ability of the duality gap to detect divergence, mode collapse, and sample quality, as well as its generalization to application domains besides image generation. Compared with the existing criteria, e.g., FID and INC, the duality gap shows better ability and computational efficiency.\\n\\n\\nHowever, the paper ignores a rich literature in optimization that uses the duality gap as a criterion for characterizing the convergence of algorithms for min-max saddle point problems, e.g., [1]. In fact, in the optimization community, using the duality gap to screen convergence on saddle point problems is common knowledge. [1] even provides the finite-step convergence rate when the saddle point problem is convex-concave. This paper is only introducing that into the machine learning community. Therefore, the novelty of the paper seems insufficient. \\n\\nSecondly, the duality gap is only able to screen the optimization convergence and the solution quality w.r.t. **the same objective**. It is not valid to compare different GANs with different loss functions using the duality gap. Theoretically, for any loss function derived from some divergence, e.g. [2], the globally optimal solution can always achieve zero duality gap. In other words, for different GANs with different objectives, the duality gap cannot distinguish which one is better. In that sense, the title is very misleading. \\n\\nThirdly, how to evaluate such a criterion in practice in the GAN scenario is not clearly explained. Considering the neural network parametrization of both the generator and discriminator, argmax_v M(u, v) and argmin_u M(u, v) are not tractable. 
Without the optimal solutions, what the ``duality gap'' means should be explained. What will happen if we only obtain suboptimal solutions which are themselves mode collapsed? Without such a discussion in theoretical and/or empirical terms, I am not convinced by the conclusion. \\n\\nFinally, if one follows the Fenchel dual view of GANs in [2, 3], the min-max is the variational form of some divergence, which the GANs are directly optimizing. It is straightforward to see that the better the min-max value is, the smaller the divergence between generated samples and the ground truth is, and thus the better the quality of the generator is. The fact that the min-max objective is indeed able to characterize the quality of the generator is obvious and well-known. Otherwise, there would be no need to use such an objective to train the model. \\n\\n\\n[1] Nemirovski, A., Juditsky, A., Lan, G., and Shapiro, A. (2009). Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19(4):1574\\u20131609.\\n\\n[2] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672\\u20132680, 2014.\\n\\n[3] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv:1606.00709, 2016\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"usage of duality in GANs, moderate contribution\", \"review\": \"The focus of the submission is GANs (generative adversarial networks), a recent and popular min-max generative modelling approach. Training GANs is considered to be a challenging problem due to the min-max nature of the task. The authors propose two duality-inspired stopping criteria to monitor the efficiency and convergence of GAN learning.\\n\\nThough GANs can have some useful applications, the contribution of the submission is pretty moderate. \\ni) Duality-inspired approaches, also embedded in optimization, have already been proposed: see for example 'Xu Chen, Jiang Wang, Hao Ge. Training Generative Adversarial Networks via Primal-Dual Subgradient Methods: A Lagrangian Perspective on GAN. ICLR-2018.'.\\nii) The notion of generator and discriminator networks with unbounded capacity (which is an assumption in 'Proposition 1') lacks a formal definition. I looked up the cited Goodfellow et al. (2014) work; it similarly does not define the concept. Based on the informal definition, it is not clear whether such networks exist or are computationally tractable.\", \"minor_comments\": \"-MMD is a specific instance of integral probability metrics, obtained when the function space is chosen to be the unit ball of a reproducing kernel Hilbert space; they are not synonyms.\\n-Mixed Nash equilibrium: E_{v\\\\sim D_1} should be E_{v\\\\sim D_2}.\\n-It might be better to call Table 1 Figure 1.\\n-References: abbreviations and names should be capitalized (e.g., gan, mnist, wasserstein, nash, cifar). Lucic et al. (2017) has been accepted to NIPS-2018.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A nicely written work, but concerns on significance\", \"review\": \"This work proposes to use the duality gap and minimax loss as measures for monitoring the progress of training GANs. The authors first showed a relationship between the duality gap (DG) and the Jensen-Shannon divergence, as well as the non-negativity of the DG. Then, a comprehensive discussion was presented on how to estimate and efficiently compute the DG. A series of experiments was designed on synthetic data and real-world image data to show 1) how the duality gap is sensitive enough to capture non-convergence during training and 2) how the minimax loss efficiently reflects the sample quality from the generator.\\n\\n\\nI am not very familiar with GANs, thus I'm not sure about the significance of the paper and would like to see opinions from other reviewers on this. For reviewing this paper, I also read the cited works such as Salimans (2016) and Heusel (2017). Compared with them, the theoretical contribution of this work seems less significant. Also, I'm not quite impressed by the advantages of the proposed metrics. However, this work is nicely written, the ideas are delivered clearly, and the experiments are nicely designed. I kind of enjoyed reading this paper due to its clarity.\", \"other_concerns\": \"There are two D_1's in the mixed Nash equilibrium equation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
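The reviews above debate whether the duality gap DG(u, v) = max_{v'} M(u, v') - min_{u'} M(u', v) is meaningful when the inner argmax/argmin can only be approximated. A minimal sketch on a toy convex-concave objective illustrates the idea (the function M below is a hypothetical stand-in, not the paper's GAN objective; the inner optimizations are approximated by gradient steps, as in the paper's practical estimator):

```python
def M(u, v):
    # Toy convex-concave minimax objective with a saddle point at (0, 0).
    return u * v + 0.1 * u**2 - 0.1 * v**2

def duality_gap(u, v, lr=0.1, steps=200):
    """Approximate DG(u, v) = max_{v'} M(u, v') - min_{u'} M(u', v).

    The exact inner optimizations are intractable for neural
    parametrizations, so here they are approximated by gradient
    ascent in v and gradient descent in u.
    """
    v_best = v
    for _ in range(steps):
        grad_v = u - 0.2 * v_best   # dM/dv at (u, v_best)
        v_best += lr * grad_v       # ascend: find the worst-case v'
    u_best = u
    for _ in range(steps):
        grad_u = v + 0.2 * u_best   # dM/du at (u_best, v)
        u_best -= lr * grad_u       # descend: find the worst-case u'
    return M(u, v_best) - M(u_best, v)

gap_far = duality_gap(1.0, 1.0)   # away from the saddle point: large gap
gap_near = duality_gap(0.0, 0.0)  # at the saddle point: gap is zero
```

At the saddle point the gap is exactly zero, while away from it the estimate is large. Because the inner maximization is underestimated and the inner minimization overestimated, suboptimal inner solutions yield a lower bound on the true gap, which is the practical concern raised in the third review.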